2304.07483
Video Generation Beyond a Single Clip
We tackle the long video generation problem, i.e., generating videos beyond the output length of video generation models. Due to computational resource constraints, video generation models can only generate video clips that are relatively short compared with the length of real videos. Existing works apply a sliding window approach to generate long videos at inference time, which is often limited to generating recurrent events or homogeneous content. To generate long videos covering diverse content and multiple events, we propose to use additional guidance to control the video generation process. We further present a two-stage approach to the problem, which allows us to utilize existing video generation models to generate high-quality videos within a small time window while modeling the video holistically based on the input guidance. The proposed approach is complementary to existing efforts on video generation, which focus on generating realistic video within a fixed time window. Extensive experiments on challenging real-world videos validate the benefit of the proposed method, which improves over the state of the art by up to 9.5% in objective metrics and is preferred by users more than 80% of the time.
Hsin-Ping Huang, Yu-Chuan Su, Ming-Hsuan Yang
2023-04-15T06:17:30Z
http://arxiv.org/abs/2304.07483v1
# Video Generation Beyond a Single Clip

###### Abstract

We tackle the long video generation problem, i.e., generating videos beyond the output length of video generation models. Due to computational resource constraints, video generation models can only generate video clips that are relatively short compared with the length of real videos. Existing works apply a sliding window approach to generate long videos at inference time, which is often limited to generating recurrent events or homogeneous content. To generate long videos covering diverse content and multiple events, we propose to use additional guidance to control the video generation process. We further present a two-stage approach to the problem, which allows us to utilize existing video generation models to generate high-quality videos within a small time window while modeling the video holistically based on the input guidance. The proposed approach is complementary to existing efforts on video generation, which focus on generating realistic video within a fixed time window. Extensive experiments on challenging real-world videos validate the benefit of the proposed method, which improves over the state of the art by up to 9.5% in objective metrics and is preferred by users more than 80% of the time.

## 1 Introduction

Video generation has recently attracted increasing attention. As a natural extension of image generation, most existing works treat video generation as a 3D volume prediction problem and focus on generating realistic video clips. While this paradigm has demonstrated impressive progress in a wide range of video generation tasks such as frame prediction [13, 29, 31, 42, 39], class-conditional generation, and unconditional video generation [6, 32, 41, 46, 33], the length of the video clip that can be generated is inevitably limited by computational resources. Due to the significant computation and memory overhead incurred by state-of-the-art video generation models, they often generate video clips that are much shorter than real videos.

In order to generate long videos that match the length of real videos, existing works adopt a sliding window approach. Specifically, the model generates one video clip at a time in temporal order while taking the previously generated frames as input. The previously generated frames serve as the condition for the model to ensure that the generated video is consistent across video clips. This approach has been successfully applied to generate long videos [15]. However, the results are far from satisfactory, particularly in real data domains. The synthesized videos are often limited to homogeneous videos of natural scenes and videos with recurrent human actions. In contrast, many real videos contain dynamic scenes with multiple, non-recurrent events.

The existing sliding window approach is sub-optimal in two aspects. First, it implicitly assumes that video generation is a Markov process, where a new clip only depends on the previous clip or even the last frame of the previous clip. This is not true in many videos, e.g., an object may leave and re-enter the video, which introduces a long-range dependency to the video. Second, it assumes that the initial frames are a sufficient condition for generating the video clip. However, it is well known that there may be multiple valid futures given the initial condition for a video [55], and it is unlikely that the correct future can be inferred purely from the initial frames. To overcome these problems, it is essential to use additional guidance to control the generation process of long videos.

Figure 1: **Long video generation.** Video generation models can only generate a relatively short video clip. Existing works [15] generate long videos using a sliding window approach that extends the previously generated video clip, which often leads to videos with repetitive patterns. Instead, we study the problem of generating long videos using additional guidance, which helps to generate videos covering multiple, non-recurrent events. We also propose a two-stage approach for the long video generation problem, which is complementary to existing efforts on video generation.

In this work, we propose to tackle the _long video generation_ problem. Given a video clip generation model, a series of guidance inputs, and a reference frame representing the initial condition of the video, the goal is to generate a long video that is beyond the capability of the clip generation model. In this work, we choose to use object labels as the input guidance, where the guidance describes the set of objects that will appear in the video clip. While other types of guidance are also possible, we choose object labels because they can be provided by users easily and naturally support video manipulation such as content insertion or removal. The long video generation problem aims to extend the ability of existing video generation models to generate realistic video covering diverse content and multiple events and is complementary to existing efforts that focus on generating high-quality videos within a fixed temporal window.

To solve the long video generation problem, we propose a two-stage approach by decomposing the problem into a keyframe generation problem followed by a frame interpolation problem. We first predict all the keyframes jointly based on the input guidance and reference frame. These keyframes represent the starting frames of each video clip. We then generate the entire video by predicting the intermediate frames between keyframes, using the video clip generation model. See Fig. 1. The two-stage approach allows us to utilize existing video generation models that are highly optimized and can generate high-quality, realistic videos within a short temporal window. It also allows us to model the full video jointly and capture long-range dependencies through keyframe generation. Our evaluations show that the holistic keyframe modeling helps to maintain consistency throughout the video.

We conduct extensive quantitative and qualitative studies on both real and synthetic data. In particular, we evaluate our method on the EPIC Kitchen dataset, which is challenging for video generation due to the rapid motion and complex scenes. Empirical results verify the advantage of the proposed long video generation method and show that it outperforms state-of-the-art video generation models by 9.5% on LPIPS.

Our main contributions are as follows. First, we study the long video generation problem, which aims to extend the capability of existing video generation models to generate long videos with dynamic scenes and non-recurrent events. Second, we propose a two-stage approach for the long video generation problem. Finally, we conduct extensive evaluations to validate the efficacy of our proposed framework.

## 2 Related Work

**Video synthesis.** Video prediction, class-conditional video generation, and unconditional video generation have been widely studied as sub-tasks of video generation.
GAN-based methods [3, 3, 6, 32, 33, 40, 41, 46] have demonstrated early success to generate short video clips, while the generation quality decreases significantly when applying to long videos. Diffusion models [17, 52, 45] are recently introduced for video generation. However, the slow sampling speed of diffusion models limits their ability to generate long videos. Auto-regressive models are first developed to synthesize raw pixels in videos [1, 22, 47]. Thanks to the development of vector quantization [12, 43] and transformer [9] models, auto-regressive methods are adapted to predict discrete tokens in the latent space [51, 30, 15] with impressive visual quality. In this work, we build upon recent advances of VQVAE and non-autoregressive transformer models [4, 53]. Despite the recent success in video generation, these models mostly focus on generating short clips (_e.g_. 16-frame videos) and are limited to synthesizing videos in specific domains [44, 15], such as human actions [34, 38], sky timelapse [50], robotics videos [10]. Most recently, a few text-to-video models [19, 49, 35, 16, 16] are developed to generate videos given natural language inputs. However, these models are limited to videos with single scenes without a meaningful storyline. In contrast, our work aims to generate videos with diverse content and novel events. There are few works exploring complex video generation conditioned on input guidance at multiple timesteps while limited to synthetic environments [2, 15, 54]. Our work focuses on the real-world dataset. **Story visualization and image manipulation.** Story visualization [26, 27, 28, 37] focuses on synthesizing a sequence of images that visualize a story of multiple sentences. Each sentence in the story corresponds to one synthesized image. GeNeVA [11, 14, 56, 7] is a conditional text-to-image generation task developed on CoDraw [24] dataset. It studies the problem of constructing a scene iteratively based on a sequence of descriptions. However, these two lines of work are limited to experiments on synthetic and cartoon data. These approaches focus on generating a few frames of a visual story instead of videos. In addition, the inputs to these methods are natural language descriptions which might exist in ambiguity and do not clearly describe the objects in the image, unlike object labels. In contrast, we focus on experiments on real-world data and generating videos given a series of object labels as inputs. ## 3 Approach In this section, we introduce the proposed method for long video generation. We first give an overview of the approach and then describe the three main components. ### Overview We define our problem as follows. The model takes 1) a series of \(N\) sets of object labels \(\{\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}\}\), and 2) a reference frame, \(I_{0}\) as input. The reference frame is the starting frame of the video and is the same initial condition as existing video prediction task. The object label sets serve as the guidance for video generation. Each object label set \(\mathbf{x}_{n}\) contains \(k^{n}\) object labels which indicates the objects that will appear in the video. Note that \(k^{n}\) may vary across different timestep \(n\). Our goal is to synthesize a video \(\mathbf{V}\) that covers the content provided in the guidance while maintaining the same style as the initial frame \(\mathbf{I}_{0}\). We present the overview of the long video generation framework in Fig. 2. 
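Before detailing each component, the following sketch summarizes the overall framework as plain Python. The three stage functions (`generate_layouts`, `generate_keyframes`, `interpolate_clip`) are hypothetical placeholders that return dummy values; only the control flow — predict all keyframes from the guidance, then interpolate each clip and concatenate — reflects the approach described in this section.

```python
from typing import List, Sequence, Tuple
import numpy as np

# One layout = a variable-length set of boxes (class_id, x, y, w, h); see Sec. 3.2.
Box = Tuple[int, float, float, float, float]
Layout = List[Box]

def generate_layouts(label_sets: Sequence[Sequence[int]], ref_layout: Layout) -> List[Layout]:
    """Stage 1a placeholder: the BLT-style layout transformer of the layout generation step."""
    return [[(c, 0.5, 0.5, 0.2, 0.2) for c in labels] for labels in label_sets]

def generate_keyframes(layouts: List[Layout], ref_frame: np.ndarray) -> List[np.ndarray]:
    """Stage 1b placeholder: the layout-conditioned keyframe transformer."""
    return [np.zeros_like(ref_frame) for _ in layouts]

def interpolate_clip(start: np.ndarray, end: np.ndarray, clip_len: int = 16) -> np.ndarray:
    """Stage 2 placeholder: a clip generation model used for frame interpolation."""
    return np.stack([start + (end - start) * t / (clip_len - 1) for t in range(clip_len)])

def generate_long_video(label_sets, ref_frame, ref_layout):
    """Layouts -> keyframes -> per-clip interpolation -> concatenation."""
    layouts = generate_layouts(label_sets, ref_layout)     # N layouts, predicted jointly
    keyframes = generate_keyframes(layouts, ref_frame)     # N keyframes, predicted jointly
    anchors = [ref_frame] + keyframes
    clips = [interpolate_clip(a, b) for a, b in zip(anchors[:-1], anchors[1:])]
    return np.concatenate(clips, axis=0)                   # full video V

# Toy call: guidance for N = 4 keyframes and a 128x128 RGB reference frame.
guidance = [[3, 7], [3], [5, 7, 9], [5]]
video = generate_long_video(guidance, np.zeros((128, 128, 3)), [(3, 0.4, 0.6, 0.3, 0.2)])
print(video.shape)  # (64, 128, 128, 3): four 16-frame clips
```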
To tackle the long video generation problem, our core idea is to model the entire video jointly through keyframe generation and utilize existing models to generate high-quality videos within a short temporal window. We achieve these by 1) generating a series of \(N\) keyframes in the video given the object label sets, each keyframe presents the starting frames of each video clip, and 2) predicting the intermediate frames between adjacent keyframes to obtain the complete video. To reduce the difficulty of keyframe generation, we use layouts as the intermediate representation to bridge the gap of generating 2D images from symbolic object labels [18, 21]. We introduce an additional stage of layout generation that creates an explicit 2D structure of the scene as an intermediate input to constrain the keyframe generation. It brings an additional benefit that the users can manipulate the generated videos by editing the layouts such as content insertion or removal. The proposed approach consists of three steps: layout generation, keyframe generation, and frame interpolation. ### Layout Generation We first generate a series of layouts from the object label sets to explicitly constrain the keyframe generation. Given a series of \(N\) object label sets \(\{\mathbf{x}_{n}\}\), and a reference first frame \(\mathbf{I}_{0}\), we synthesize a series of layouts \(\{\mathbf{L}_{1},\mathbf{L}_{2},\cdots,\mathbf{L}_{N}\}\), which represents the layouts of the \(N\) keyframes in the video. Since the reference first frame \(\mathbf{I}_{0}\) is given, we assume \(\mathbf{L}_{0}\) is known as well and can be used as a reference layout to synthesize \(\{\mathbf{L}_{1},\mathbf{L}_{2},\cdots,\mathbf{L}_{N}\}\). We define the layout \(\mathbf{L}\) as a set of bounding boxes with variable length \(k^{n}\), _i.e_. \(\{\mathbf{b}_{1},\mathbf{b}_{2},\cdots,\mathbf{b}_{k^{n}}\}\). The bounding box contains the attribute of its object label, x-coordinate of the center, y-coordinate of the center, width and height, \(\mathbf{b}=\{\mathbf{c},\mathbf{x},\mathbf{y},\mathbf{w},\mathbf{h}\}\). Our layout generator is motivated by BLT [25]. We first apply tokenization to encode the object label sets \(\{\mathbf{x}_{n}\}\) into discrete tokens. We then learn a transformer model that predicts the layout tokens given the object label sets as input. The ground truth tokens are obtained from tokenizing the layouts \(\{\mathbf{L}_{n}\}\). Though the object label sets and layouts have variable length \(k^{n}\), we pad the token sequences to fixed length in practice. Specifically, the input series of \(N\) object label sets with \(k^{n}\) labels in each set can be flattened into a sequence of tokens. Similarly, the layout sequences can be flattened into token sequences as well, where the values of the bounding box attributes are simply discretized by uniform quantization [25]. The tokens of the reference layout \(\mathbf{L}_{0}\) and the object labels \(\mathbf{c}\) are known and the other attributes of the bounding boxes \(\{\mathbf{x},\mathbf{y},\mathbf{w},\mathbf{h}\}\) are replaced with a [MASK] token. The transformer model is trained to predict the missing tokens for the layout series \(\hat{\mathbf{L}}_{1}-\hat{\mathbf{L}}_{N}\)[25]. 
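As a concrete illustration of this tokenization step, the sketch below discretizes bounding-box attributes by uniform quantization and assembles a padded token sequence in which the geometry of future layouts is replaced by [MASK] tokens for the transformer to fill in. The vocabulary arrangement, special-token ids, bin count, and maximum number of boxes are illustrative assumptions rather than values from the paper.

```python
NUM_BINS = 128          # uniform quantization resolution (assumed)
PAD, MASK = 0, 1        # special token ids (assumed)
COORD_OFFSET = 2        # coordinate bins start after the special tokens
MAX_BOXES = 8           # pad every layout to a fixed number of boxes

def quantize(v: float) -> int:
    """Map a normalized box attribute in [0, 1] to a discrete token id."""
    return COORD_OFFSET + min(int(v * NUM_BINS), NUM_BINS - 1)

def layout_to_tokens(layout, known_geometry=True):
    """Flatten one layout {(class_id, x, y, w, h), ...} into 5 tokens per box.

    When `known_geometry` is False (a future keyframe layout), the class token
    is kept but x/y/w/h are replaced by [MASK] tokens to be predicted.
    """
    tokens = []
    for (cls, x, y, w, h) in layout:
        geom = [quantize(v) for v in (x, y, w, h)] if known_geometry else [MASK] * 4
        tokens.extend([COORD_OFFSET + NUM_BINS + cls] + geom)  # class ids in their own range
    tokens += [PAD] * (5 * MAX_BOXES - len(tokens))            # pad variable-length layouts
    return tokens

# Reference layout L_0 is fully observed; future layouts keep only the class tokens.
L0 = [(3, 0.42, 0.61, 0.30, 0.25), (7, 0.70, 0.35, 0.15, 0.20)]
future_labels = [[3, 5], [5]]   # object label sets x_1, x_2 (classes only)
sequence = layout_to_tokens(L0)
for labels in future_labels:
    sequence += layout_to_tokens([(c, 0, 0, 0, 0) for c in labels], known_geometry=False)
print(len(sequence), sequence[:12])
```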
Different from BLT [25], which takes a single object label set as input and predicts a single layout, our model takes the reference layout and a series of object label sets as input, and predicts multiple layouts at the same time. Thus, our model preserves the temporal consistency of the layout sequences, whereas BLT does not.

Figure 2: **Approach overview.** The proposed method consists of three stages: 1) generating the series of layouts from the series of object label sets and a reference first frame image, 2) synthesizing the keyframes of the video from the predicted layout sequence and the reference image, and 3) interpolating the synthesized keyframe sequence to obtain the complete video.

### Keyframe Generation

Next, we generate a sequence of keyframes from the predicted layout sequences. Given a reference first frame \(\mathbf{I}_{0}\) and a series of layouts \(\{\hat{\mathbf{L}}_{1},\hat{\mathbf{L}}_{2},\cdots,\hat{\mathbf{L}}_{N}\}\) synthesized in the previous stage, we aim to generate a sequence of keyframes \(\mathbf{I}_{1}\)-\(\mathbf{I}_{N}\). Each pair of adjacent keyframes will be used as the start and end frame to generate one video clip out of the \(N\) clips in the entire video. Following [4], we convert keyframe generation into tokenization and sequence prediction. We use a VQVAE [12] model to encode the raw image pixels \(\mathbf{I}\) into discrete visual tokens \(\mathbf{e}\), and a bidirectional transformer model is used to predict the masked image tokens, i.e., the target keyframes. Finally, the decoder of the VQVAE is used to map the visual tokens \(\hat{\mathbf{e}}\) into raw images \(\hat{\mathbf{I}}\).

Specifically, the input series of layouts \(\{\mathbf{L}_{0},\hat{\mathbf{L}}_{1},\cdots,\hat{\mathbf{L}}_{N}\}\) are flattened into a sequence of discrete tokens. The reference keyframe \(\mathbf{I}_{0}\) and the target keyframe sequence \(\mathbf{I}_{1}\)-\(\mathbf{I}_{N}\) are transformed into discrete visual tokens and flattened into sequences as well, i.e., \(\{\mathbf{e}_{0},\mathbf{e}_{1},\cdots,\mathbf{e}_{N}\}\). At training time, the tokens of the input layouts and the visual tokens of the target keyframes are concatenated. The tokens are randomly masked out, and the transformer model is trained to predict the missing tokens. Specifically, given a sequence \(\mathbf{s}=\{\mathbf{e}_{0},\mathbf{L}_{0},\hat{\mathbf{L}}_{1},\cdots,\hat{\mathbf{L}}_{N},\mathbf{e}_{1},\cdots,\mathbf{e}_{N}\}\) in dataset \(\mathcal{D}\), we replace tokens in the sequence with the [MASK] token and obtain the masked sequence \(\mathbf{s}_{M}\). We minimize the negative log-likelihood of predicting the masked tokens \(s_{t},t\in M\):

\[\mathcal{L}=-\mathop{\mathbb{E}}_{\mathbf{s}\in\mathcal{D}}\sum_{t\in M}\log p(s_{t}\mid\mathbf{s}_{M}) \tag{1}\]

At test time, the tokens of the layout sequence and the first frame \(\{\mathbf{e}_{0},\mathbf{L}_{0},\hat{\mathbf{L}}_{1},\cdots,\hat{\mathbf{L}}_{N}\}\) are given, and the model predicts the tokens of the following keyframes \(\hat{\mathbf{e}}_{1}-\hat{\mathbf{e}}_{N}\). Finally, the decoder of the VQGAN is used to reconstruct the target keyframes \(\hat{\mathbf{I}}_{1}-\hat{\mathbf{I}}_{N}\). Fig. 3 shows our keyframe generation approach. Compared with [4], our keyframe generation method generates all the frames jointly. This provides a more holistic model for the entire video. As we show in the experiments, this helps to improve the consistency across keyframes and the coherency of the video.
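A minimal PyTorch sketch of the masked-token objective in Eq. (1) is given below: layout and visual tokens are concatenated into one sequence, a random subset is replaced by a [MASK] id, and the cross-entropy is computed only on the masked positions. The tiny transformer, vocabulary size, sequence length, and masking rate are stand-ins, not the configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, D_MODEL, SEQ_LEN = 1024, 0, 256, 320  # illustrative sizes

class MaskedTokenPredictor(nn.Module):
    """Bidirectional transformer over the concatenated layout + keyframe tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, D_MODEL))
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):                       # tokens: (B, SEQ_LEN) int64
        h = self.encoder(self.embed(tokens) + self.pos)
        return self.head(h)                          # logits: (B, SEQ_LEN, VOCAB)

def masked_nll_loss(model, tokens, mask_ratio=0.5):
    """Eq. (1): negative log-likelihood of the masked tokens s_t, t in M."""
    mask = torch.rand(tokens.shape, device=tokens.device) < mask_ratio
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    # Cross-entropy only over the masked positions.
    return F.cross_entropy(logits[mask], tokens[mask])

model = MaskedTokenPredictor()
batch = torch.randint(1, VOCAB, (2, SEQ_LEN))        # fake layout + keyframe token sequences
loss = masked_nll_loss(model, batch)
loss.backward()
```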
### Frame Interpolation

Finally, given the reference first frame \(\mathbf{I}_{0}\) and a sequence of generated keyframes \(\hat{\mathbf{I}}_{1}\dots\hat{\mathbf{I}}_{N}\), we apply an existing video generation model to generate the complete video. In particular, we use MAGVIT [53] to generate intermediate frames following the video interpolation task of the original model. Specifically, the model takes the initial and final frame of the video as input. It first converts the input frames into discrete video tokens using a 3D-VQVAE. A transformer model is then used to predict the tokens of the intermediate frames. Finally, the interpolated video tokens are mapped back to raw videos by the 3D decoder. We re-train the video generation model on our data. At inference time, given two consecutive keyframes \(\hat{\mathbf{I}}_{n-1}\) and \(\hat{\mathbf{I}}_{n}\), the model predicts the video token sequence that connects the two keyframes, i.e., \(\hat{\mathbf{z}}_{n}\). We predict \(N\) clips of video tokens \(\hat{\mathbf{z}}_{1},\hat{\mathbf{z}}_{2},\cdots,\hat{\mathbf{z}}_{N}\). Finally, the video token sequences are concatenated and then mapped to the raw video pixels \(\mathbf{V}\) by the 3D decoder.

## 4 Experiments

We validate our approach on both real and synthetic data. We first evaluate the video generation results on challenging real-world videos. Next, we study the keyframe generation results on both real and synthetic data.

**Dataset.** We validate our method on two datasets. _EPIC Kitchen_ [8] is a real video dataset consisting of egocentric videos of kitchen activities. It contains 700 videos with a total length of 100 hours. Compared with other commonly used datasets for video generation research, e.g., UCF [38], Kinetics [23], and BAIR [10], the content of EPIC Kitchen videos is more dynamic. Objects move in and out of the camera field-of-view frequently, and the scene and camera viewpoint may change rapidly. To synthesize such videos, the video generation model needs to generate multiple, non-recurrent events within a short time window. This aligns with our goal for long video generation, and EPIC Kitchen serves as an ideal test bed for the problem and approach. Therefore, we choose EPIC Kitchen over other datasets for our experiments.

Figure 3: **Keyframe generation.** The reference frame and predicted layouts are first converted into discrete tokens. A transformer takes the tokens as input and predicts the tokens of the keyframes, which are then decoded into the final keyframes. Note that the transformer generates all keyframes jointly, which provides a more holistic model for the video.

We preprocess the EPIC Kitchen videos as follows. We follow the original train and test split and cut the videos into 64-frame sequences. We first re-sample the sequences with double the frame rate, so that each 64-frame sequence covers a 5-second video with the five keyframes sampled equidistantly. This leads to 276k sequences for training and 3k sequences of non-overlapping frames for testing. We use MaskFormer [5] to extract the semantic map for each frame and convert it into object labels and bounding boxes, which serve as the input guidance and ground truth layout. Please refer to the supplementary material for details.

_CoDraw_ [24] is a synthetic dataset consisting of virtual scenes made of clip art objects. It contains 10k scenes consisting of 58 different objects. Each scene comes with a sequence of images showing the step-by-step construction process of the scene.
While it is not a video dataset, it contains diverse object classes and provides complete annotation for the object labels and layouts, which is ideal for controlled experiments. We use the sequence of images as video keyframes and evaluate keyframe and layout generation on CoDraw. To analyze the temporal consistency between the generated keyframes, we extend the original CoDraw data by creating 6 different appearances for each clip object class and re-render the data using the original layouts. This resulting dataset consists of 67k training sequences and 4k test sequences. **Evaluation metrics.** We use the following metrics for performance evaluation: * Frechet Video Distance (FVD)--assesses the quality of generated videos. Specifically, it measures whether the distribution of generated videos is close to that of real videos in the feature space. Following the original paper, we use I3D model trained on Kinetics-400 for video features. * Frechet Inception Distance (FID)--assesses the quality of generated images, similar to FVD. We use inception V3 for image features. FID is used to evaluate the keyframe quality. * Learned Perceptual Image Patch Similarity (LPIPS)--assesses the perceptual similarity between the generated video frames and the ground truth video frames. We compute these metrics for every frame and report the average. * PSNR, SSIM--assess the similarity between the generated frames and ground truth frame, similar to LPIPS. **Implementation details.** In our experiments, each video sequence contains 64 frames, and the videos are generated at 128\(\times\)128. We sample one keyframe every 16 frames in the 64-frame clips so that the keyframe sequence contains \(N=4\) synthesized keyframes at 256\(\times\)256. More implementation details are in the supplementary materials. ### Video Generation First, we evaluate the video generation results on the EPIC Kitchen dataset. The goal is to verify the effectiveness of additional guidance and holistic video modeling in long video generation. **Baselines.** We compare with the following state-of-the-art video generation methods * MAGVIT (frame prediction) [53]: given the reference frame as input, we apply MAGVIT to generate a 16-frame clip. We then take the last predicted frame as input to iteratively generate the entire video. This is the standard sliding window approach for long video generation. * MAGVIT (class conditional) [53]: we condition the MAGVIT model on both the reference frame and the object label. This extends the sliding window approach to take additional guidance similar to our method. To understand how the quality of layout and keyframe generation affects the video generation results, we also compare with two variants of our method that take the ground truth layouts and keyframes as inputs respectively. **Quantitative results.** The results are in Tab. 1. Our method consistently outperforms the baselines in terms of video quality, and the generated videos are closer to the ground \begin{table} \begin{tabular}{l c c c c} \hline \hline Methods & FVD \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline MAGVIT (frame pred.) [53] & 421.7 & 11.316 & 0.175 & 0.482 \\ MAGVIT (class cond.) [53] & 400.9 & 11.398 & 0.178 & 0.476 \\ \hline Ours & 380.8 & 12.037 & 0.206 & 0.431 \\ Ours (GT Layouts) & 363.5 & 13.566 & 0.287 & 0.350 \\ Ours (GT Keyframes) & 314.7 & 15.591 & 0.386 & 0.253 \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative results for video generation. 
We report the metrics on the EPIC Kitchen dataset [8]. Image metrics are averaged across all video frames. Our method consistently outperforms state-of-the-art video generation methods.** Figure 4: **Per-frame results for video generation. X-axis shows the frame number, ranging from 0 to 64. Y-axis shows the LPIPS of a specific frame averaging over all test videos. The performance of MAGVIT drops rapidly over time, while our approach can slow down the quality degradation. Figures are best viewed in color.** truth videos. The results verify the effectiveness of the proposed approach. Note that MAGVIT (class conditional) performs better than MAGVIT (frame prediction), which shows the benefit of additional guidance. Our approach further improves MAGVIT (class conditional), which suggests that both the additional guidance and the holistic video modeling provided by our approach are helpful for long video generation. On the other hand, Ours (GT Layouts) and Ours (GT Keyframes) significantly improve the output video quality. The results suggest that long video generation has significant room for improvement given the very same video generation model, and improving the video generation model alone may not be sufficient for solving the video generation problem. It verifies the importance of the long video generation problem. Note that our multi-stage approach allows users manually improve the intermediate representations, e.g. provide more detailed layouts, which allows further improvement for the generated videos in an interactive generation setting. Figure 5: **Visual results for video generation.** MAGVIT (frame pred./class cond.) generates video with homogeneous contents, and the quality degrades for long sequences. Our method generates video with scene change (floor to table), object deletion (utensils on left hand) and object insertion (knife on right hand), showing the ability of our model to generate videos with multiple events. We further show that Our (GT Layouts) generates videos close to the upper bound results of Our (GT Keyframes). On the other hand, Ours generates videos that match the content but with different layouts. The results validate the ability of our model to generate videos that satisfy different levels of the input guidance _i.e_. object label sets or (more constrained) layouts. Fig. 4 shows the per-frame LPIPS score. The generated image quality degraded rapidly in MAGVIT, especially when we try to generate videos beyond the length of the training clip (i.e. 16 frames). In contrast, our approach experiences a slower quality degradation, which further verifies its benefit in generating video beyond the training clip. **Qualitative results.** Fig. 5 show the qualitative results. MAGVIT tends to predict a relatively static video, which is consistent with the observations in prior works. On the other hand, our method generates video with scene change (wall to table), object deletion (plate at the bottom) and object insertion (left/right hand), showing the ability of our model to generate videos with multiple events. Compared with the upper bound results of Ours (GT Keyframes), our method generates videos that match the content but with different layouts or locations. Fig. 6 shows that our method allows users to generate different videos by sampling different layouts from the model. 
It also allows video manipulation through layout, where the user may remove an object, change the size and position of the objects, _etc._ **User study.** We also conduct a user study to augment the quantitative evaluation. In the study, we present two videos generated by different methods together with the ground truth video and ask the raters 1) which video has the better visual quality, and 2) which video better reproduces the content of the ground truth video. We conduct the study with 40 videos and 11 participants. The results are in Tab. 2, which is consistent with the quantitative results and further validates the superior performance of our method ### Keyframes Generation Next, we evaluate the performance of our keyframe generation model. The goal is to verify 1) the importance of generating all the keyframes jointly, and 2) the importance of additional guidance for generating content across a large temporal window. **Baselines.** We compare the following baselines and variants of our method: * MaskGIT [4]: the model takes the reference as input and iteratively predicts the next keyframe. This model represents keyframe generation without input guidance. * HCSS [20]: the model takes a single layout as input and generates a single keyframe. We apply HCSS to generate each keyframe independently from the predicted layouts. This model represents keyframe generation without full video modeling. * Ours: our keyframe generation given the predicted layouts as inputs. * Ours-GT: our keyframe generation using the ground truth layouts as inputs (upper bound performance). * Ours-GT (Single): our keyframe generator that pre \begin{table} \begin{tabular}{l c c} \hline \hline Methods & Quality & Reproduction \\ \hline Ours _vs._ MAGVIT (frame pred.) & 76.3\% & 82.1\% \\ Ours _vs._ MAGVIT (class cond.) & 68.4\% & 66.7\% \\ \hline \hline \end{tabular} \end{table} Table 2: **User study.** We report the percentage of raters that consider our method generates better video quality and better reproduces the ground truth videos respectively. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{CoDraw} & \multicolumn{2}{c}{EPIC Kitchen} \\ \cline{2-5} Methods & FID \(\downarrow\) & LPIPS \(\downarrow\) & FID \(\downarrow\) & LPIPS \(\downarrow\) \\ \hline MaskGIT [4] & 10.8 & 0.304 & 46.9 & 0.633 \\ HCSS [20] & 9.8 & 0.425 & 50.2 & 0.653 \\ \hline Ours & 7.4 & 0.325 & 34.8 & 0.575 \\ Ours-GT & 3.9 & 0.106 & 24.2 & 0.416 \\ Ours-GT (Single) & 4.6 & 0.156 & 27.5 & 0.480 \\ \hline \hline \end{tabular} \end{table} Table 3: **Quantitative results for keyframe generation.** The metrics are averaged across keyframes. Please refer to the supplementary material for additional metrics. Figure 6: **Video manipulation through layouts.** Users can generate different videos by sampling different layouts, as shown in the first two rows. Users can even manipulate the videos by editing the layouts. The yellow cross in the third row shows object deletion, and the yellow arrow in the last row shows object movement. dicts keyframes iteratively conditioning on the previous keyframe and ground truth layouts. This model represents keyframe generation without full video modeling. Quantitative results.The results are in Tab. 3. Our method consistently outperforms HCSS on both real and synthetic data, which verifies the importance of joint prediction for all keyframes. Our method also performs better than MaskGIT except for the perceptual similarity with ground truth frames in the synthetic dataset. 
After taking a closer look at the generated frames, we observed that MaskGIT tends to predict repetitive keyframes with little changes across frames. This implies that the video will remain static, which is not suitable for video generation. These results show that our method can generate better keyframes for video generation than MaskGIT, which verifies the importance of guidance. We also compare different variants of our method. In particular, the superior performance of Ours-GT over Ours-GT (Single) further verifies that a model that considers the entire video jointly leads to better keyframe generation. Qualitative results.Fig. 7 shows the qualitative results. As mentioned before, MaskGIT tends to generate keyframes with similar content, which shows the importance of providing input guidance at multiple timesteps. Comparing HCSS and our method, we can see that HCSS fails to generate consistent results across the keyframes, e.g. the color of the fire changes. Similarly, when we compare Ours-GT with Ours-GT (Single), we can see the iterative approach fails to generate consistent keyframes. The examples clearly demonstrate the importance of joint modeling for the entire video. Please refer to the supplementary material for additional evaluation, including the evaluation for layout generation. ## 5 Conclusions We tackle the problem of long video generation which aims to generate videos beyond the output length of video generation models. We show that the existing sliding window approach is sub-optimal, and there is significant room for improvement using the same video generation model. To improve long video generation, we propose to use additional guidance to control the generation process. We further propose a two-stage approach which can utilize existing video generation models while capturing long-range dependency within the video. Empirical results validate our model design and show favorable results over state-of-the-art video generation methods. Figure 7: **Qualitative results for keyframe generation.** Compared with our method, MaskGIT tends to predict repetitive content which shows the importance of guidance. On the other hand, HCSS predicts inconsistent keyframes and fails to maintain temporal consistency. When we compare Ours-GT with Ours-GT (Single), we can see the iterative approach fails to generate consistent keyframes. The results show the importance of modeling the entire video jointly.
2306.11498
Conditional Independence Testing with Heteroskedastic Data and Applications to Causal Discovery
Conditional independence (CI) testing is frequently used in data analysis and machine learning for various scientific fields and it forms the basis of constraint-based causal discovery. Oftentimes, CI testing relies on strong, rather unrealistic assumptions. One of these assumptions is homoskedasticity, in other words, a constant conditional variance is assumed. We frame heteroskedasticity in a structural causal model framework and present an adaptation of the partial correlation CI test that works well in the presence of heteroskedastic noise, given that expert knowledge about the heteroskedastic relationships is available. Further, we provide theoretical consistency results for the proposed CI test which carry over to causal discovery under certain assumptions. Numerical causal discovery experiments demonstrate that the adapted partial correlation CI test outperforms the standard test in the presence of heteroskedasticity and is on par for the homoskedastic case. Finally, we discuss the general challenges and limits as to how expert knowledge about heteroskedasticity can be accounted for in causal discovery.
Wiebke Günther, Urmi Ninad, Jonas Wahl, Jakob Runge
2023-06-20T12:36:38Z
http://arxiv.org/abs/2306.11498v1
# Conditional Independence Testing with Heteroskedastic Data and Applications to Causal Discovery ###### Abstract Conditional independence (CI) testing is frequently used in data analysis and machine learning for various scientific fields and it forms the basis of constraint-based causal discovery. Oftentimes, CI testing relies on strong, rather unrealistic assumptions. One of these assumptions is homoskedasticity, in other words, a constant conditional variance is assumed. We frame heteroskedasticity in a structural causal model framework and present an adaptation of the partial correlation CI test that works well in the presence of heteroskedastic noise, given that expert knowledge about the heteroskedastic relationships is available. Further, we provide theoretical consistency results for the proposed CI test which carry over to causal discovery under certain assumptions. Numerical causal discovery experiments demonstrate that the adapted partial correlation CI test outperforms the standard test in the presence of heteroskedasticity and is on par for the homoskedastic case. Finally, we discuss the general challenges and limits as to how expert knowledge about heteroskedasticity can be accounted for in causal discovery. ## 1 Introduction Conditional independence (CI) testing is a frequently used step across a wide range of machine learning tasks for various scientific fields. It is also very challenging. Discovering causal relationships from purely observational data is an even more challenging task and an important topic in sciences where real experiments are infeasible, e.g. in climate research (Ebert-Uphoff and Deng, 2012; Runge et al., 2019). One can distinguish several frameworks that address this problem: score-based approaches (Chickering, 2002), restricted structural causal models (Peters et al., 2017), and constraint-based methods (Spirtes et al., 2000) that rely on CI testing. A typical representative is the PC algorithm (Spirtes and Glymour, 1991) which can be combined with any conditional independence test and is thus adaptable to a wide range of data distributions. It utilizes the Faithfulness assumption to conclude that no causal link can exist between two variables \(X\) and \(Y\) if a CI test suggests that they are independent given a set \(Z\). When a linear additive noise model may be assumed, a popular CI test is the partial correlation test (Lawrance, 1976) which has the advantages of fast computation time and that the null distribution is known analytically. As a disadvantage rather strict and often unrealistic assumptions have to be satisfied. One of these assumptions is homoskedasticity, meaning that the variance of the error term is constant. When this assumption is violated, i.e., in the case of heteroskedasticity, the variance can, for instance, depend on the sampling index or the value of one or multiple influencing variables. In regression analysis one distinguishes between _impure_ and _pure_ heteroskedasticity. Impure heteroskedasticity stems from an insufficient model specification, e.g. if unobserved variables or confounders are present. Another reason might be that the model fails to capture the full relationships of the variables, e.g. the underlying model might be linear with multiplicative noise rather than additive. On the other hand, pure heteroskedasticity is non-constant noise variance that is present despite a correct model. The sources for heteroskedasticity in real data are manifold. 
For example, in environmental sciences precipitation in different areas might exhibit different variances that are unaccounted for by other variables in the system, i.e. location-scaled noise. Such a problem could be introduced by aggregating data of different catchments and not adding a variable that is well enough correlated with catchment location (Merz et al., 2021). An example for sampling index-dependent heteroskedasticity in the time series case are seasonal effects that are present in many climate variables (Proietti, 2004). Finally, an example where the noise variance of a variable is dependent on an observed cause is the distance to sea influencing the variability of temperature. This is a special case of state-dependent noise. One assumption of the PC algorithm is causal sufficiency, meaning that there are no unobserved confounders, formally excluding impure heteroskedasticity. However, in practice this assumption is often violated which can lead to heteroskedastic noise. If unaccounted heteroskedasticity is present in the data, the estimator of the ordinary least squares (OLS) regression slope parameter is still unbiased. However, the estimator of the covariance matrix of the parameter estimates can be biased and inconsistent under heteroskedasticity (Long and Ervin, 2000). This can skew subsequent partial correlation significance tests and affect the link detection rate of a causal discovery method. Furthermore, heteroskedasticity might even lead to the detection of wrong links (false positives). Moreover, the Gauss-Markov theorem assumes homoskedasticity and, hence, with heteroskedastic data the OLS slope estimator is no longer guaranteed to be the most efficient linear unbiased estimator, which can further harm power of the CI test. There are multiple ways to treat heteroskedasticity, either already during the modeling step by addressing possible model misspecification, by pre-processing the data, e.g. by applying a log-transform, or post-hoc using robust statistics for CI testing. Adapting the model, for example, by using CI tests allowing for multiplicative dependencies (Runge, 2018) might not always be feasible for limited sample sizes. Pre-processing in real-world problems also comes with drawbacks: The transformed variables are difficult to interpret and can introduce dependencies or spurious links. In this work we propose an adapted weighted least-squares (WLS) partial correlation variant as a CI test for the PC algorithm that is able to deal with particular forms of heteroskedasticity. **Our contributions** are theoretical consistency results as well as numerical experiments demonstrating that this approach yields well-calibrated CI tests leading to controlled false positive rates and also improves upon detection power as compared to the standard partial correlation CI test. Our approach requires expert knowledge in that it needs to be known which of the variables the heteroskedasticity depends on, or if it depends on the sampling index. ## 2 Related Work The effect of heteroskedasticity on the standard Pearson correlation test has been investigated, for example, by Wilcox and Muska (2001). Remedies for heteroskedasticity also have been extensively studied. Hayes (2007) discuss and evaluate heteroskedasticity-consistent standard errors. Practical approaches to choose weights for WLS, consistency and asymptotic results have been obtained, e.g. by Neumann (1994); Fan and Yao (1998); Carroll (1982); Robinson (1987); Brown and Levine (2007). 
Romano and Wolf (2017) propose a method that combines WLS with heteroskedasticity-consistent standard errors. Recently, within the causal discovery framework of restricted structural causal models (Peters et al., 2017) several authors (Xu et al., 2022; Tagasovska et al., 2020) have relaxed the assumption of homoskedasticity. In these works the authors focus on identifying cause from effect only in the bivariate setting. Specifically, Xu et al. (2022) base their inference score on the log-likelihood of regression residuals and apply a binning-scheme on regions where variance is approximately constant. More closely related to our work is the robust-PC method of Kalisch and Buhlmann (2008) who use an estimator in a recursive partial correlation formula to robustify the PC algorithm against outliers and otherwise contaminated data. Non-constant conditional variance can also be regarded as a specific kind of distributional shift or context change. The effects of distributional shifts on causal discovery have been investigated, for instance, in Huang et al. (2020); Mooij et al. (2020) where the authors propose a framework that includes the environment into the structural causal model formulation using context variables. ## 3 Problem setting ### Heteroskedasticity in causal models In this work, we consider discovering causal relationships in linear models in the presence of non-constant error variance which is potentially dependent on the parents or on the sampling index. To translate this into a structural causal model (SCM), we represent heteroskedasticity as a scaling function of the noise variable. In this way, it can also be viewed as state-dependent or multiplicative noise. Consider finitely many random variables \(V=(X^{1},\ldots,X^{d})\) with joint distribution \(\mathcal{P}_{X}\) over a domain \(\mathcal{X}=\mathcal{X}_{1}\times\ldots\times\mathcal{X}_{d}\). Then we are interested in \(n\) samples from the following SCM with assignments \[X_{t}^{i}:=f_{i}(Pa(X_{t}^{i}))+h_{i}(H(X_{t}^{i}))\cdot N_{i},\qquad i=1, \ldots,d \tag{1}\] where \(f_{i}\) are linear functions, \(t\in\mathcal{T}\) stands for the sample index, and we have the heteroskedasticity functions \(h_{i}:\mathcal{X}\times\mathcal{T}\rightarrow\mathbf{R}_{\geq 0}\). The noise variables \(N_{i}\) are assumed independent standard Gaussian distributions. The parent set of the variable \(X_{t}^{i}\) is denoted by \(Pa(X_{t}^{i})\), and \(H(X_{t}^{i})\subset Pa(X_{t}^{i})\cup\{t\}\) which can also be the empty set. Furthermore, we make the restriction that the causal relationships are stable over time, i.e. the parent sets \(Pa(X_{t}^{i})\) as well as the functions \(f_{i}\) are not time-dependent. For \(h_{i}\equiv const.\) the homoskedastic version of this SCM is obtained, which simply is a linear SCM with Gaussian noise. To learn such a causal model from data, a well-known and easy to implement method is to use a constraint-based causal discovery method with CI tests based on partial correlation. To recap the partial correlation test and to illustrate our adaptations, consider the following simple example model for a time-series, or otherwise indexed, SCM (1). The general multivariate case is discussed in section 4.2. \[\begin{aligned} X_{t}&=aZ_{t}+cE_{t}+h_{X}(Z_{t},t) \cdot N_{X}&& Z_{t}=N_{Z}\\ Y_{t}&=bZ_{t}+cE_{t}+h_{Y}(Z_{t},t)\cdot N_{Y}&& E_{t}=N_{E}\end{aligned} \tag{2}\] for some constants \(a\), \(b\) and \(c\), and standard normal independent noise terms \(N_{X}\), \(N_{Y}\), \(N_{Z}\), and \(N_{E}\). 
The constant \(c\) is set to zero for (conditionally) independent \(X\) and \(Y\). Note that here we are only interested in discovering the relationship between \(X\) and \(Y\) and not in learning the whole causal graph. An intuitive way to define and understand partial correlation is in terms of the correlation between residuals. The partial correlation between \(X\) and \(Y\) given a controlling variable \(Z\), or a set thereof, is the correlation between the residuals \(r_{X}\) and \(r_{Y}\) resulting from the linear regression of \(X\) on \(Z\) and of \(Y\) on \(Z\), respectively. The linear regression that is used in the standard variant of the partial correlation test is ordinary least squares (OLS) regression, thus we refer to this CI test as _ParCorr-OLS_. The Pearson correlation coefficient \(\rho\) is then estimated by \(\widehat{\rho}(X,Y|Z)=\frac{Cov(\widehat{r_{X}},r_{Y})}{\sqrt{Var(r_{X})Var( r_{Y})}}\). We use the hat operator to indicate estimators. For testing the null hypothesis \(H_{0}:\rho=0\) versus the alternative \(H_{1}:\rho\neq 0\), we use the studentized version of the partial correlation as a test statistic \(T(\hat{\rho})=\frac{\hat{\rho}\sqrt{n-2}}{\sqrt{1-\hat{\rho}^{2}}}\). This statistic is \(t(n-2-k)\)-distributed, where \(k\) is the number of variables we are conditioning on, and \(n\) is the sample size. However, for the partial correlation test to be correct, the assumptions of OLS, in particular homoskedasticity, have to be fulfilled. In the next section, we investigate what happens if this assumption is violated. ### Effects of heteroskedasticity on partial correlation Under heteroskedasticity, the estimator of slope regression parameters in OLS regression is still unbiased. However, the estimator of the covariance matrix of the parameter estimates can be biased and inconsistent under heteroskedasticity (Wooldridge, 2009, p.268). This also affects the subsequent residual-based correlation test in ways that we summarize below. Proofs for and further discussions of the following statements are provided in the Supplement A.1. **Effect 1**.: _Under the null hypothesis, the studentized Pearson correlation coefficient is \(t(n-2-k)\) distributed if \(X\) is independent of the variables inducing heteroskedasticity in \(Y\)._ This means that if only one node is compromised by heteroskedasticity, or, more generally, the heteroskedasticity in \(X\) is independent of that in \(Y\), the type-I error rate of the t-test will not be affected. Please refer to the middle plot in figure 1 for a visualization of the associated null distribution in comparison to the analytical one used by the partial correlation test. On the other hand, in the left plot the case where both \(X\) and \(Y\) are affected by the same kind of heteroskedasticity is shown, leading to a different null distribution. **Effect 2**.: _If \(X\) and \(Y\) are dependent and at least one is affected by heteroskedasticity, the detection power of the t-test might be degraded._ In particular, if \(X\) and \(Y\) are dependent, \(X\) is affected by linear heteroskedasticity, the mean of the \(t\)-distribution is closer to zero compared to the distribution of weighted least squares based studentized partial correlation coefficient, which is introduced in the next section and essentially transforms the data to be homoskedastic, for fixed sample size. See the right plot in figure 1. This effect is an immediate consequence of the reduced efficiency of the OLS slope estimate. 
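For reference, the residual-based test described above can be sketched in a few lines of NumPy/SciPy: regress X and Y on Z with OLS, correlate the residuals, studentize, and compare against the t distribution. This is a generic illustration of ParCorr-OLS under the stated assumptions, not the authors' implementation; the helper name and the toy data are ours.

```python
import numpy as np
from scipy import stats

def parcorr_ols(x, y, z, alpha=0.05):
    """Partial correlation CI test based on OLS residuals (ParCorr-OLS).

    x, y: (n,) arrays; z: (n, k) array of conditioning variables.
    Returns (reject_null, p_value).
    """
    n, k = z.shape
    Z = np.column_stack([np.ones(n), z])                      # add an intercept
    r_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]        # OLS residuals of X on Z
    r_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]        # OLS residuals of Y on Z
    rho = np.corrcoef(r_x, r_y)[0, 1]                         # estimated partial correlation
    t_stat = rho * np.sqrt(n - 2) / np.sqrt(1.0 - rho ** 2)   # studentized statistic T(rho_hat)
    p_val = 2.0 * stats.t.sf(abs(t_stat), df=n - 2 - k)       # two-sided t-test
    return p_val < alpha, p_val

# Toy data following SCM (2) with c = 0, i.e. X is independent of Y given Z
# under homoskedastic noise; the coefficients 0.8 and 0.5 are arbitrary.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 1))
X = 0.8 * Z[:, 0] + rng.normal(size=500)
Y = 0.5 * Z[:, 0] + rng.normal(size=500)
print(parcorr_ols(X, Y, Z))
```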
Intuitively, heteroskedasticity masks the relationship between \(X\) and \(Y\). ## 4 Weighted least squares partial correlation test and causal discovery We have seen that the standard partial correlation test is sensitive to heteroskedastic noise since it is based on an OLS regression step. Therefore, we propose to replace the OLS regression by the weighted least squares (WLS) approach which is known to be able to handle non constant error variance. We will refer to the resulting CI test as _ParCorr-WLS_. Figure 1: Data generated with SCM (2). (Left) Null distribution when \(X\) and \(Y\) are affected by the same kind of heteroskedasticity. (Middle) Null distribution when only \(X\) is affected by heteroskedasticity. (Right) Alternative distribution (assuming dependence with \(c\neq 0\)) when only \(X\) is affected by heteroskedasticity. In all plots the heteroskedasticity is a linear function of the value of \(Z\) and the solid black line depicts the \(t\)-distribution under the null hypothesis. ### Weighted least squares partial correlation test The idea of WLS is to perform a re-weighting of each data point depending on how far it is from the true regression line. It is reasonable to assume data points where the error has low variance to be more informative than those with high error variance. Therefore, ideally the weights are chosen as the inverse variance of the associated error. To formalize this idea, consider the linear model \(y=X\beta+\varepsilon\) with \(\mathrm{E}[\varepsilon\mid X]=0,\ \mathrm{Cov}[\varepsilon\mid X]=\mathrm{diag}( \sigma_{i}^{2})_{i=1,\ldots,n}\) for \(n\) observations. The variance of the error \(\varepsilon_{i}\) is denoted by \(\sigma_{i}^{2}\). Note, as opposed to the assumptions of OLS, the entries of the conditional variance matrix are allowed to differ from each other. Denote the weight matrix by \(W:=\mathrm{diag}(\frac{1}{\sigma_{i}^{2}})_{i=1,\ldots,n}\). The WLS method estimates \(\beta\) by solving the adjusted optimization problem \[\hat{\beta}=\underset{b}{\mathrm{argmin}}(y-Xb)^{\mathsf{T}}W(y-Xb).\] This objective is quadratic, thus we can write down the solution in closed form \[\hat{\beta}=\left(X^{\mathsf{T}}WX\right)^{-1}X^{\mathsf{T}}Wy.\] If the true weights are known, WLS is equivalent to applying OLS to a linearly transformed, homoskedastic version of the data as the weights have the effect of standardizing the scale of the errors. Thus, the following lemma holds. Proofs for these statement can be found, for instance, in Greene (2003). **Lemma 1**.: _If the weights are chosen as the reciprocal of the error variance per sample, the WLS estimator is consistent, efficient, and asymptotically normal, as well as BLUE._ Moreover, note that the weighted residuals \(W(y-X\hat{\beta})\) are homoskedastic. The goal now is to approximate the conditional variance function of both \(X\) and \(Y\) in SCM (2). Since it works analogously for both cases, we illustrate the approach for \(X_{t}=aZ+s(Z,t)\cdot N_{X}\), i.e. we approximate \(\sigma^{2}(z,t)=Var(X_{t}|Z=z)\). For that, we use a residual-based non-parametric estimator for the conditional variance, similar to the approach of Robinson (1987). Motivated by the identity \(Var(X|Z)=\mathbb{E}[(X-\mathbb{E}[X|Z])^{2}|Z]\), and noting that this is the regression of \((X-aZ)^{2}\) on \(Z\), the first step is using OLS regression to obtain the squared residuals \((X-\hat{a}Z)^{2}\). 
Afterwards, we use a non-parametric regression method to regress these residuals on \(Z\) and thereby predict the conditional mean by using a linear combination of the \(k\) residuals closest in \(Z\) value. For sampling index-dependent heteroskedasticity this turns into a windowing approach, which essentially smoothes the squared residuals. Algorithm 1 details the proposed partial correlation CI test based on the feasible WLS approach that employs our weight approximation method. For the test to perform well, it is crucial to know the type of heteroskedasticity, more precisely, which of the predictors the variance depends on. In practice, this kind of expert knowledge could be obtained by performing a test for heteroskedasticity, e.g. as suggested in Wooldridge (2009, p.277) or by investigating plots of the residuals. Here we make the limiting assumption that the heteroskedasticity only depends on one of the predictors or on the sampling index. Further extensions are considered in the section 6. Now, we formulate our main assumption under which it is possible to obtain a consistency result for the WLS method that uses our weight approximation method. **Assumption 1** (Heteroskedastic relationships).: _For each node \(X^{i}\), \(i=1,\ldots,d\) in SCM (1), the skedasticity or noise scaling function \(h_{i}\) only depends on one of the predictors \(H\) or on the sampling index \(t\), i.e. its domain is one dimensional, or it is constant. Furthermore, it is known what it depends on._ We also have to impose a rather technical assumption on the functions \(h_{i}\). **Assumption 2** (Weight approximation).: _Assumptions (3.3) - (3.5) from Robinson (1987)._ The proof of the following lemma can be found in Robinson (1987). **Lemma 2**.: _Under assumptions 1 and 2, the WLS estimator that uses the reciprocal of the approximated variance as weights is consistent and attains the correct covariance matrix asymptotically._ **Assumption 3** (Technical assumptions).: _See Supplement A.2._ We now state the first main theorem. 
**Theorem 1**.: _Under the assumptions 1 - 3 ParCorr-WLS CI test with estimated weights (Algorithm 1) is consistent for testing the conditional independence between two potentially heteroskedastic variables \(X\) and \(Y\) conditioned on a set of variables \(Z\)._ ``` Data: Expert knowledge \(\mathcal{E}\) as map with keys \(X\) and \(Y\) and values in {false,'sampling index', 'heteroskedastic parent \(H^{\prime}\)'}, where \(H\in V\), observational data with sample size \(n\) for nodes \(X\), \(Y\), \(Z_{1},\ldots,Z_{k}\), \(H\), window length \(\lambda\), significance level \(\alpha\) Result: boolean indicating whether there is a (conditional) dependence between \(X\) and \(Y\) given \(Z_{1},\ldots,Z_{k}\) \(resid:=[]\); for\(node\) in (\(X\), \(Y\))do if\(k==0\)then \(\tilde{r}=node\) else obtain residuals \(\tilde{r}=(\tilde{r}_{1},\ldots,\tilde{r}_{n})\) by regressing \(node\) on \(Z_{1},\ldots,Z_{k}\) using OLS; end if if\(\mathcal{E}[node]\)== 'false'then append \(\tilde{r}\) to \(resid\); else if\(\mathcal{E}[node]\)=='sampling index'then compute weights \(w_{i}=(\frac{1}{\lambda}\sum_{j=\max(1,i-\frac{\lambda}{2})}^{\min(n,i+\frac{ \lambda}{2})}\tilde{r}_{j}^{2})^{-1}\), for \(i=1,\ldots,n\); obtain residuals \(r\) by regressing \(node\) on \(Z_{1},\ldots,Z_{k}\) using WLS with the weights \(w\); append \(w\cdot r\) to \(resid\); else sort \(\tilde{r}\) such that their corresponding values of \(H\) increase; compute weights \(w_{i}=(\frac{1}{\lambda}\sum_{j=\max(1,i-\frac{\lambda}{2})}^{\min(n,i+\frac{ \lambda}{2})}\tilde{r}_{j}^{2})^{-1}\), for \(i=1,\ldots,n\); revert the sorting in the indices of \(w\); obtain residuals \(r\) by regressing \(node\) on \(Z_{1},\ldots,Z_{k}\) using WLS with the weights \(w\); append \(w\cdot r\) to \(resid\); end if calculate studentized Pearson correlation \(t\) between \(r_{X}=resid[0]\) and \(r_{Y}=resid[1]\); perform \(t\)-test with (two-sided) significance level \(\alpha\), i.e. reject if \(|t|>t(1-\frac{\alpha}{2},n-2-k)\) ``` **Algorithm 1**ParCorr-WLS ### Extension to the PC algorithm A well-known and widely used algorithm for discovering causal relationships in terms of the completed partially directed acyclic graph (CPDAG) from observational data is the PC algorithm as introduced in Spirtes and Glymour (1991). It consists of two phases: The first one is concerned with learning the skeleton of adjacencies based on iterative CI testing. Subsequently, a set of rules is applied to determine the orientation of the found links. To ensure consistency of this method, the following assumptions have to be fulfilled. Details can be found in Spirtes et al. (2000). Let \(G=(V,E)\) be a graph consisting of a set of vertices \(V\) and a set of edges \(E\subset V\times V\). Let \(\mathcal{P}\) denote the probability distribution of \(V\). **Assumption 4** (PC algorithm).: _The Causal Markov condition, Causal Faithfulness, and Causal Sufficiency are fulfilled for the SCM (1) with graph \(G\)._ Under these assumptions and if the utilized conditional independence test is consistent, it can be shown that the PC algorithm converges in probability to the correct causal structure, i.e. \[\lim_{n\rightarrow\infty}\mathcal{P}(\hat{G}_{n}\neq G)=0,\] where \(G\) denotes the ground truth CPDAG and \(\hat{G}_{n}\) is the finite sample output of the PC algorithm. See Kalisch and Buhlmann (2007) for a proof. In the following, we need to further restrict Assumption 1 to prove consistency of the PC algorithm under heteroskedasticity. 
**Assumption 5** (Heteroskedastic relationships regarding PC algorithm).: _Assumption 1 is further limited to the case where there is only sampling index dependent heteroskedasticity or homoskedasticity._ Given Assumption 5, we can apply our proposed method ParCorr-WLS in every CI test of the PC algorithm. Using Lemma 1, we thus can establish consistency of the PC algorithm with ParCorr-WLS if the true weights are known. Lemma 2 yields the same result for ParCorr-WLS with estimated weights. **Theorem 2**.: _Under Assumptions 2,3,4,5 the output of the PC algorithm, with ParCorr-WLS as a CI test, converges in probability to the correct causal graph._ Note that we prove consistency of the PC algorithm with ParCorr-WLS only in the case of sampling index dependent heteroskedasticity. We discuss the challenges to extending this to more general types of heteroskedasticity in section 6. ## 5 Experiments In the following, we conduct experiments evaluating our proposed CI test separately and in conjunction with the PC algorithm. Throughout the experiments, heteroskedasticity strength refers to the parameter \(s\) in the scaling functions \(h\) of linear and periodic type given by \[h(x) =1+se^{T}x\cdot\mathbbm{1}_{x\geq 0}\quad\text{ (linear)} \tag{3}\] \[h(x) =1+se^{T}\sin(x)+s\quad\text{ (periodic)}\,.\] In other words, in case of linear heteroskedasticity strength is the slope of the variance function, and for periodic heteroskedasticity it refers to the amplitude of the variance function. ### Conditional independence testing We generate the data from the SCM (2) where we consider various types of heteroskedasticity, i.e. functions \(h\), namely linear and periodic as given in Eq. (3). Plots of the simulated data are provided in the Supplement. Each of the types can either be \(Z\)- or sampling index-dependent. A visualization of the considered heteroskedasticity-types can be found in the Supplement A.5. We use the Kolmogorov-Smirnov (KS) statistic to quantify how uniform the distribution of p-values is, and therefore as a metric for type-I errors, as in Runge (2018). Type-II errors are measured by the area under the power curve (AUPC). The metrics were evaluated with a sample size of \(500\) from \(100\) realizations of the SCM. Error bars indicate the bootstrapped standard errors. Figure 2 shows that ParCorr-WLS is well calibrated in the presence of heteroskedasticity, regardless if it affects only one or both of the variables \(X\) and \(Y\). On the other hand, the ParCorr-OLS test becomes ill-calibrated in the case of heteroskedastic \(X\) and \(Y\) as expected by Effect 1. In particular, this means that if we can choose the weights reasonably well, we are able to overcome multiplicative confounding with our proposed CI test. Regarding power as measured by AUPC, we observe a rather rapid decrease for ParCorr-OLS as the heteroskedasticity increases (compare to Effect 2). Our proposed method ParCorr-WLS has higher power in the heteroskedastic scenario, and even for homoskedastic noise (heteroskedasticity strength equal zero) the power is comparable to that of ParCorr-OLS. The drop in power for ParCorr-WLS can be explained by an overall increase in noisiness of the data as heteroskedasticity strength is growing. Refer to the Supplement A.5 for a plot of AUPC of ParCorr-OLS and ParCorr-WLS on data with homoskedastic but increasing noise. Similar effects are present for all considered types of heteroskedasticity (see Supplement). 
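For readers who want to reproduce the flavour of these calibration and power results, the sketch below generates heteroskedastic data in the spirit of Eq. (3) and computes the two metrics from a collection of p-values, for example those returned by the `parcorr_wls` sketch above. The confounder structure and the coefficient of \(0.5\) are illustrative assumptions, not the paper's exact experimental configuration.

```python
import numpy as np
from scipy import stats

def h_linear(x, s):
    """Linear noise-scaling function in the spirit of Eq. (3): slope s for positive arguments."""
    return 1.0 + s * x * (x >= 0)

def h_periodic(x, s):
    """Periodic noise-scaling function in the spirit of Eq. (3): amplitude s."""
    return 1.0 + s * np.sin(x) + s

def simulate_pair(n, c, s, h=h_linear, seed=None):
    """Confounded pair Z -> X and Z -> Y, with an optional direct link X -> Y of strength c.
    The noise of X and Y is multiplicatively scaled by h(Z), i.e. Z-dependent heteroskedasticity."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=n)
    X = 0.5 * Z + h(Z, s) * rng.normal(size=n)
    Y = 0.5 * Z + c * X + h(Z, s) * rng.normal(size=n)
    return X, Y, Z.reshape(-1, 1)

def ks_calibration(pvals):
    """KS distance of the p-value distribution from Uniform(0, 1); small values mean good calibration."""
    return stats.kstest(pvals, "uniform").statistic

def aupc(pvals, grid=np.linspace(0.0, 1.0, 101)):
    """Area under the power curve: rejection rate averaged over significance thresholds."""
    return float(np.mean([(pvals <= a).mean() for a in grid]))

# Example wiring with the parcorr_wls sketch above (null case, c = 0):
#   x, y, z = simulate_pair(500, c=0.0, s=2.0, seed=i)
#   _, p = parcorr_wls(x, y, z, expert={'X': 'parent', 'Y': 'parent'}, H=z[:, 0])
```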
The results remain very similar if we do not use the ground truth weights but estimate them using the window approach with a reasonable window length, as detailed in Section 4.1.

### Causal discovery

To test our proposed CI test in a more realistic setting, we apply it within the PC algorithm to recover the causal graph from simulated observational data. We build upon the PC-stable algorithm implementation within the Tigramite software package (Runge et al., 2019b), which is published under the GNU General Public License. The data for these experiments was generated with SCM (1) in the following way. Given a random ground truth graph, we fix a percentage of heteroskedasticity-affected nodes, which are then selected uniformly at random. Throughout the experiments, this percentage is set to \(0.3\) to reduce the chance of parent and child being affected by the same kind of heteroskedasticity. For these affected nodes, we choose as a heteroskedasticity type linear or periodic with equal probability (Eq. (3)) and let the noise variance \(h\) depend either on one randomly selected parent or on the sampling index. We also set a fixed strength \(s\) per experiment and investigated the effect of increasing the strength on the performance of our method compared to the PC algorithm with ParCorr-OLS. All linear dependencies have a coefficient \(c=0.5\). We tested for various small to medium sized causal graphs (see also Supplement). In these experiments, we estimate the weights based on the expert knowledge as required by Assumption 1, namely which node is affected by heteroskedasticity, and whether the noise variance depends on the sampling index or another node. In the case of the heteroskedasticity depending on another node, we also need to know which node it is. We also compare with the PC algorithm which uses ParCorr-OLS based on the ground truth weights for all direct heteroskedastic relations. Note, however, that this does not take care of indirect heteroskedasticity due to heteroskedastic parents that are not part of the conditioning set. Note that this challenging setting is outside of the stricter Assumption 5 for which the PC algorithm is consistent. Figure 3 shows, similar to the experiments of the previous section, that the PC algorithm with ParCorr-WLS continues to have a rather small false positive rate (FPR) even though the heteroskedasticity strength increases. In contrast, ParCorr-OLS shows an increase in FPR as the heteroskedasticity strength increases. Additionally, even in this rather complicated setting, we see that the true positive rate (TPR) can be improved by using our method. The PC algorithm with ParCorr-WLS has an average runtime of \(0.29\) seconds on homoskedastic data compared to \(0.14\) seconds for ParCorr-OLS, evaluated on an AMD 7763.

Figure 2: Performance of partial correlation CI tests for dependence (\(c=0.5\)) and conditional independence (\(c=0\)) between \(X\) and \(Y\) given \(Z\). In all plots the heteroskedasticity is a function of the confounder \(Z\) and only affects \(X\) (left two columns) or both \(X\) and \(Y\) (right two columns). Shown are KS (top row) and AUPC (bottom row) for different strengths of heteroskedasticity for linear (left) and periodic (right) noise scaling functions. The ground truth weights are used for WLS, or the weights are estimated using the window approach with window length \(10\), as detailed in Section 4.1. A sample size of \(500\) is used and the experiments are repeated \(100\) times.

## 6 Discussion and Outlook

In this work, we relaxed the common assumption of a constant variance by explicitly allowing for heteroskedasticity as a multiplicative scaling of noise in an otherwise linear SCM. Our proposed partial correlation test based on weighted least squares regression is a linear, computationally fast, and easy to implement method and constitutes a useful standalone method for various data science tasks, but our focus here is on its use in causal discovery. The main **strengths** of our approach are that it is able to produce more reliable results both as a CI test and in causal discovery in the presence of heteroskedasticity than the standard partial correlation variant. More reliable results here refer to controlled false positives as well as higher detection power. Furthermore, the suggested adaptations do not compromise the calibration and power of the CI test on homoskedastic data. The main **weakness** of our method is that the CI test requires substantial expert knowledge in Assumption 1 and, for the consistency of the PC algorithm, the even stricter Assumption 5. Assumption 1 requires that the heteroskedasticity only depends on one of the predictors or on the sampling index, i.e. its domain is one dimensional. Furthermore, one needs to know which of these types is the case. Assumption 5 only allows for sampling-index heteroskedasticity. The reason for this is that there can be indirect heteroskedasticity at the node \(X\) or \(Y\) that is not induced by a parent of \(X\) or \(Y\), but by some other ancestor. This heteroskedasticity then propagates through the causal graph and essentially makes \(X\) or \(Y\) heteroskedastic whenever this path is not blocked by the conditioning set. In this case, the expert knowledge does not tell us about this heteroskedasticity and we would not be able to apply ParCorr-WLS to remove it. See also Section A.4 in the Supplement for a detailed treatment of cases in which the CI tests within the PC algorithm are consistent under more general heteroskedasticity forms. In the following, we discuss alternatives and further avenues of research. **In future work**, one may alter the required expert knowledge and weight approximation scheme to overcome the issues induced by indirect heteroskedasticity. Here, a possible remedy could be an iterative approach using information about causal relationships from earlier steps. An alternative would be to use the CMIknn CI test (Runge, 2018) that is able to fully treat heteroskedastic multiplicative confounding. However, this increased generality comes at the price of reduced detection power; see also Figure 10 in the Supplement. Another open question is how to further improve the weight approximation method, for instance by iteratively repeating the regression and smoothing steps. Another important consideration is the choice of the window length. Linear properties of the variance function allow us to use a larger window length. However, if the variance function shows a high variability, crucial information might be lost if the chosen window length is too large. Model selection criteria might be employed to alleviate this problem. Furthermore, one can consider extensions of ParCorr-WLS to multiple heteroskedastic influencing factors; e.g., Spokoiny (2002) extends the residual-based conditional variance function estimation to dimensions larger than one. Another multidimensional method based on differences is discussed in Cai et al. (2009).
It would also be interesting to explore using generalized least squares to be able to account for additional correlation of the residuals. Potentially, the weighted least squares partial correlation coefficient could also be combined with a permutation-based test to extend the method to problems with non-Gaussian noise.

Figure 3: Results for the PC algorithm with the standard ParCorr-OLS CI test compared to that with the proposed ParCorr-WLS test. Shown are adjacency TPR (left) and FPR (middle), as well as the adjacency precision (right) for increasing strengths of heteroskedasticity. The graph has \(10\) nodes and \(10\) edges, a sample size of \(500\) is used. The significance level \(\alpha\) is set to \(0.05\). The experiment is repeated \(500\) times. Errorbars show standard errors. Estimated weights with a window length of \(5\) or ground truth weights are used for WLS.

**Concluding**, our proposed ParCorr-WLS CI test makes constraint-based causal discovery methods better applicable to real world problems where the assumption of homoskedasticity is violated, such as in climate research or neuroscience. Ethically, we believe that our rather fundamental work has a low potential for misuse.

## Acknowledgments and Disclosure of Funding

WG was supported by the Helmholtz AI project _CausalFlood_. UN and JW were supported by grant no. 948112 _Causal Earth_ of the European Research Council (ERC). This work used resources of the Deutsches Klimarechenzentrum (DKRZ) granted by its Scientific Steering Committee (WLA) under project ID bd1083. We thank the anonymous reviewers for their helpful comments.
2310.14206
Manifold-Preserving Transformers are Effective for Short-Long Range Encoding
Multi-head self-attention-based Transformers have shown promise in different learning tasks. Albeit these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers usually project tokens onto sparse manifolds and fail to preserve mathematical equivalence among the token representations. In this work, we propose TransJect, an encoder model that guarantees a theoretical bound for layer-wise distance preservation between a pair of tokens. We propose a simple alternative to dot-product attention to ensure Lipschitz continuity. This allows TransJect to learn injective mappings to transform token representations to different manifolds with similar topology and preserve Euclidean distance between every pair of tokens in subsequent layers. Evaluations across multiple benchmark short- and long-sequence classification tasks show maximum improvements of 6.8% and 5.9%, respectively, over the variants of Transformers. Additionally, TransJect displays 79% better performance than Transformer on the language modeling task. We further highlight the shortcomings of multi-head self-attention from the statistical physics viewpoint. Although multi-head self-attention was incepted to learn different abstraction levels within the networks, our empirical analyses suggest that different attention heads learn randomly and unorderly. In contrast, TransJect adapts a mixture of experts for regularization; these experts are more orderly and balanced and learn different sparse representations from the input sequences. TransJect exhibits very low entropy and can be efficiently scaled to larger depths.
Ayan Sengupta, Md Shad Akhtar, Tanmoy Chakraborty
2023-10-22T06:58:28Z
http://arxiv.org/abs/2310.14206v1
# Manifold-Preserving Transformers are Effective for Short-Long Range Encoding

###### Abstract

Multi-head self-attention-based Transformers have shown promise in different learning tasks. Albeit these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers usually project tokens onto sparse manifolds and fail to preserve mathematical equivalence among the token representations. In this work, we propose TransJect, an encoder model that guarantees a theoretical bound for layer-wise distance preservation between a pair of tokens. We propose a simple alternative to dot-product attention to ensure Lipschitz continuity. This allows TransJect to learn injective mappings to transform token representations to different manifolds with similar topology and preserve Euclidean distance between every pair of tokens in subsequent layers. Evaluations across multiple benchmark short- and long-sequence classification tasks show maximum improvements of \(6.8\%\) and \(5.9\%\), respectively, over the variants of Transformers. Additionally, TransJect displays \(79\%\) better performance than Transformer on the language modeling task. We further highlight the shortcomings of multi-head self-attention from the statistical physics viewpoint. Although multi-head self-attention was incepted to learn different abstraction levels within the networks, our empirical analyses suggest that different attention heads learn randomly and unorderly. In contrast, TransJect adapts a mixture of experts for regularization; these experts are more orderly and balanced and learn different sparse representations from the input sequences. TransJect exhibits very low entropy and can be efficiently scaled to larger depths.

## 1 Introduction

Over the past few decades, Deep Neural Networks have greatly improved the performance of various downstream applications. Stacking multiple layers has been proven effective in extracting features at different levels of abstraction, thereby learning more complex patterns (Brightwell et al., 1996; Poole et al., 2016). Since then, tremendous efforts have been made in building larger depth models and in making them faster (Bachlechner et al., 2020; Xiao et al.). Self-attention-based Transformer model (Vaswani et al., 2017) was proposed to parallelize the computation of longer sequences; it has achieved state-of-the-art performance in various sequence modeling tasks. Following this, numerous efforts have been made to reduce computation and make the Transformer model suitable even for longer sequences (Katharopoulos et al., 2020; Peng et al., 2020; Kitaev et al., 2020; Beltagy et al., 2020; Press et al., 2021; Choromanski et al., 2021; Tay et al., 2021). However, very few of these studies discuss information propagation in large-depth models. A recent study (Voita et al., 2019) characterized how Transformer token representations change across layers for different training objectives. The dynamics of layer-wise learning are essential to understand different abstract levels a model could learn, preserve and forget to continue learning throughout the layers.

Figure 1: Layer-wise distances between a few selected tokens from the text “_This movie is terrible but it has some good effects._” We use a trained Transformer model to extract token representations from different layers.

However, recent
studies need to shed more light on how Transformers preserve contextual information across different layers Wu et al. (2020); Voita et al. (2019). To understand how Transformer encodes contextual similarities between tokens and preserves the similarities layer-wise and across different attention heads, we highlight an example in Figure 1. We select three pairs of semantically-related tokens. We observe the Euclidean and cosine distances of the representations learned at different layers of the Transformer trained on the IMDb sentiment classification task. Although the trained Transformer model preserves the semantic similarity among the same tokens across layers, the Euclidean distance between their representations increases in the upper layers. This indicates that Transformer projects the representations to different and sparse subspaces (a subspace with low density), albeit preserving the angle between them. Additionally, we observe that the distances between different token representations vary across different attention heads at different encoder layers in a haphazard manner. Preserving distance and semantic similarity across layers is vital to ensure the continual learning capabilities of deep neural models, which Transformer evidently fails to do. Neuroscientists have been working for years to understand how the human brain functions and simulates the behaviours of physical sciences Koch and Hepp (2006); Vertes et al. (2012). Arguably, the human brain is more capable of 'associative learning' and 'behavioural formation', which can be attributed to the number of neurons and their inter-connectedness (synapses) rather than the size of the brain, or the number of layers through which the information propagates Dicke and Roth (2016). Although a fully-grown human brain can have hundreds of billions of neurons, it has been found that at a time, only a tiny fraction of neurons fire Ahmed et al. (2020); Poo and Isaacson (2009). This sparse firing can help the brain sustain low entropy (energy). _Entropy_, a measure that quantifies a state's randomness, has thus become an essential tool to understand the factors behind human intelligence Saxe et al. (2018); Keshmiri (2020) and how the human brain operates. Unfortunately, Transformer and its variants are yet to be studied from these interdisciplinary viewpoints. Our study explores the connection between model sparsity and entropy. Towards this, we propose a complete redesign of self-attention-based **Trans**former with enforced **injectivity**, _aka_**TransJect**. With injectivity, our model imposes the constraint in which the representations of two distinct tokens always differ across all the layers. Unlike Transformer, which only injects regularization through multi-head self-attention and dropout, TransJect does not require explicit regularizers and can be regularized implicitly due to its inherent injectivity. The backbone of TransJect is a non-normalized linear orthogonal attention and an injective residual connection; both ensure Lipschitz continuity. By enforcing Lipschitz continuity in Euclidean and dot-product space, the model can preserve the manifold structure between tokens and perform well on the final predictive task. To validate our hypotheses and empirically justify the superiority of our model, we use two short and five long sequence classification tasks. 
TransJect outperforms Transformer and other benchmark variants with an average margin of \(3.4\%\) and \(2.2\%\) on the short- and long-sequence classification tasks, respectively. Our model performs best on the long sequence classification (LRA) benchmark, achieving \(0.2\%\) better accuracy than the best baseline, Skyformer Chen et al. (2021). We further demonstrate TransJect on language modeling task on the Penn TreeBank dataset, in which our model achieves \(79\%\) better test perplexity than the vanilla Transformer. Empirical analyses suggest a very low entropy of TransJect representations, indicating that TransJect captures sparse and more orderly representations compared to Transformer. Moreover, TransJect shows \(13\times\) lower inference runtime than Transformer, indicating its efficiency in encoding long input sequences.1 Footnote 1: The source codes of TransJect can be found at [https://github.com/victor7246/TransJect.git](https://github.com/victor7246/TransJect.git). ## 2 Related Works Despite being a ubiquitous topic of study across different disciplines of deep learning, Transformer Vaswani et al. (2017) models still require better mathematical formalization. On a recent development, Vuckovic et al. (2020) formalized the inner workings of self-attention maps through the lens of measure theory and established the Lipschitz continuity of self-attention under suitable assumptions. However, the Lipschitz condition depends on the boundedness of the representation space and the Lipschitz bound of the fully-connected feed-forward layer (FFN). A similar study Kim et al. (2021) also concluded that the dot-product self-attention is neither Lipschitz nor injective under the standard conditions. Injectivity of the transformation map is essential to ensure that the function is bijective and, therefore, reversible. Reversibility within deep neural networks has always been an active area of study Gomez et al. (2017); Arora et al. (2015); Chang et al. (2018). A reversible network ensures better scaling in large depths and is more efficient than non-reversible structures. Recently, Mangalam et al. (2022) designed a reversible Transformer and empirically highlighted its effectiveness in several image and video classification tasks. However, any similar development has yet to be made for developing scalable reversible sequential models. _Dynamical isometry_ is a property that mandates the singular values of the input-output Jacobian matrix to be closer to one. Pennington et al. (2017) showed that dynamical isometry could aid faster convergence and efficient learning. Following their idea, Bachlechner et al. (2020) showed that residual connections in Transformers do not often satisfy dynamical isometry, leading to poor signal propagation through the models. To overcome this, they proposed residual with zero initialization (ReZero) and claimed dynamical isometry for developing faster and more efficient large-depth Transformers. Previously, Qi et al. (2020) enforced a stronger condition of _isometry_ to develop deep convolution networks efficiently. However, isometric assumptions are not valid for sequential modeling due to different levels of abstraction within the input signals. Moreover, isometry may not hold between contextually-dissimilar tokens. Another essential aspect behind designing efficient large-depth models is ensuring _model sparsity_. Several notable contributions have been made Baykal et al. (2022); Jaszczur et al. (2021); Li et al. (2023); Tay et al. 
(2020) to enforce sparse activations within Transformers to make them more efficient and scalable. Li et al. (2023) argued that sparse networks often resemble the sparse activations by the human brain, bearing a similarity between artificial and biological networks. They empirically showed that the trained Transformers are inherently sparse, and the sparsity emerges from all the layers. As discussed in the previous section, Transformers project the representations sparsely onto sparse subspaces. Although sparse models are inherently regularized and display lower entropy, projecting the representations onto a sparse subspace pushes them further from being Lipschitz, preventing them from being reversible. This work proposes an injective and Lipschitz continuous alternative to the vanilla Transformer, namely TransJect. With the enforced injectivity, our model establishes a theoretical guarantee for reversibility and thus can be scaled to larger depths. Further, TransJect displays significantly lower entropy than Transformer, indicating a lower energy footprint and more efficiency.

## 3 Designing Injective Transformer

This section formally describes our proposed model, TransJect. It inherits the structure from the vanilla Transformer and achieves a smoother activation plane by utilizing injective maps for transforming token representations across layers. For an \(L\)-layered stacked encoder, we aim to learn the representation of a sequence \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}\}\) at each layer \(l\) that preserves the pairwise distance between every pair of words within a theoretical bound. We illustrate the components of TransJect in Figure 2. All the proofs presented in the paper are supplied in Appendix A.

### Background

**Activation bound.** For any function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), we define the activation bound \(K_{f}\) as \(\sup_{\mathbf{x}\neq\mathbf{0}}\frac{||f(\mathbf{x})||_{p}}{||\mathbf{x}||_{p}}\), for a suitable integer \(p\). For a linear map \(\mathbf{M}\), this equals the induced matrix norm \(||\mathbf{M}||_{p}\). Intuitively, this is the maximum scale factor by which a mapping expands a vector \(\mathbf{x}\). In Euclidean space, we usually choose \(p=2\).

**Lipschitz continuity.** A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) under the \(||.||_{p}\) norm is called _Lipschitz continuous_ if there exists a real number \(K\geq 0\) such that \[||f(\mathbf{x})-f(\mathbf{y})||_{p}\leq K||\mathbf{x}-\mathbf{y}||_{p} \tag{1}\] for any \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\). Lipschitz continuity can be defined over any metric space, but in this paper, we restrict its definition to only Euclidean space with \(p=2\). \(K\) is called the _Lipschitz bound_.

### Space-Preserving Orthogonal Attention

The backbone of TransJect is the space-preserving orthogonal attention.

**Theorem 1** (Space-Preserving Orthogonal Attention).: Replacing \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\) with real square orthogonal matrices in non-normalized linear self-attention reduces the activation bound to \(\sigma_{1}^{2}(\mathbf{X})\), where \(\sigma_{1}(\mathbf{X})\) is the largest singular value of \(\mathbf{X}\). This reduces the attention operation to \(Attn(\mathbf{X})=\mathbf{X}\mathbf{U}\mathbf{\Sigma}\mathbf{V}\), for learnable orthogonal matrices \(\mathbf{U}\) and \(\mathbf{V}\) and the singular values \(\mathbf{\Sigma}\).
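As an informal numerical illustration of Theorem 1 (not part of the original paper), the NumPy snippet below builds \(Attn(\mathbf{X})=\mathbf{X}\mathbf{U}\mathbf{\Sigma}\mathbf{V}\) with random orthogonal \(\mathbf{U}\), \(\mathbf{V}\) and a \(\mathbf{\Sigma}\) rescaled so that its largest entry equals one, and checks that the resulting scale factor stays below the bound \(\sigma_{1}^{2}(\mathbf{X})\); the rescaling is our stand-in for the stochasticity constraint discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 32, 16                                  # sequence length and hidden size

X = rng.normal(size=(N, d))
U, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal projection
V, _ = np.linalg.qr(rng.normal(size=(d, d)))

# Eigenvalues of X^T X, rescaled so that the largest one equals 1 ("stochastic" Sigma).
eigvals = np.linalg.eigvalsh(X.T @ X)
Sigma = np.diag(eigvals / eigvals.max())

attn = X @ U @ Sigma @ V                       # Attn(X) = X U Sigma V

ratio = np.linalg.norm(attn, 2) / np.linalg.norm(X, 2)   # spectral-norm scale factor
sigma1_sq = np.linalg.svd(X, compute_uv=False)[0] ** 2   # sigma_1(X)^2, the Theorem 1 bound

# With the rescaled Sigma the ratio is in fact at most 1, hence well below sigma_1(X)^2 here.
print(f"scale factor = {ratio:.3f}, bound sigma_1^2 = {sigma1_sq:.3f}")
assert ratio <= sigma1_sq + 1e-8
```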
Notice that the activation bound of the modified attention mechanism does not depend on any learnable parameters; instead, it can be bounded by the largest eigenvalue of \(\mathbf{X}^{T}\mathbf{X}\). Therefore, we assume a stochastic \(\mathbf{X}^{T}\mathbf{X}\) to ensure that the largest eigenvalue is always \(1\) (see Corollary 2 in Appendix A.1), and the attention operator preserves the pairwise distance between any two tokens. We learn orthogonal projection matrices in each layer, \(\mathbf{U}\) and \(\mathbf{V}\). In contrast, the diagonal matrix containing eigenvalues \(\mathbf{\Sigma}\) is learned on the initial embedding obtained from the initial embedding layer defined in Section 3.5, also denoted as \(l=0\). Therefore, in our proposed non-normalized orthogonal linear attention, we compute the attention matrix (encoded in \(\mathbf{\Sigma}\)) only once and learn different projections of it in different layers.

**Approximating eigenvalues.** Eigenvalue decomposition is computationally expensive with a runtime complexity of \(\mathcal{O}(Bd^{3})\), with \(B\) being the batch size and \(d\) being the hidden dimension. This work uses a simple approximation to compute \(\tilde{\mathbf{\Sigma}}\), the eigenvalues of \(\mathbf{X}^{T}\mathbf{X}\). Formally, we compute \(\tilde{\mathbf{U}}=\operatorname*{arg\,min}_{\mathbf{U}}||\mathbf{X}^{T}\mathbf{X}-\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{T}||\), and \(\tilde{\mathbf{\Sigma}}=\operatorname*{arg\,min}_{\mathbf{\Sigma}}||\mathbf{X}^{T}\mathbf{X}-\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{T}||\). To learn the approximate eigenvalues, we can minimize the reconstruction loss \(||\mathbf{X}^{T}\mathbf{X}-\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{T}||\) for a learnable orthogonal eigenvector matrix \(U\). We use standardization to enforce the stochasticity constraint on \(\tilde{\mathbf{\Sigma}}\). Further, instead of deriving the eigenvalues, we can also initialize a random diagonal matrix \(\tilde{\mathbf{\Sigma}}\) without any approximation, which optimizes only the task-specific training objective, without enforcing the reconstruction. We denote this version of the model as **Random-TransJect**. We compute \(\tilde{\mathbf{\Sigma}}\) once, only on the initial token embeddings.

Figure 2: Internals of TransJect with (a) an \(L\)-layered encoder containing a Mixture of Experts (MOE) with \(E\) experts, (b) approximated eigenvalue computation, and (c) an orthogonal attention with injective residual. (d) The outputs of the intermediate encoder are fed to the orthogonal residual FFN. The hidden dimension used for representing each token is denoted by \(d\).

### Injective Residual (IR)

We fine-tune the hidden representation for every layer \(l\) by learning a new attention projection on the hidden state learned in the previous layer. Formally, we define, \[\mathbf{X}^{(l)}=\mathbf{X}^{(l-1)}+\frac{\alpha_{l}}{L}F(\mathbf{X}^{(l-1)}). \tag{2}\] Here, \(F\) is the self-attention operator, followed by a suitable non-linear activation function, and \(\alpha_{l}\in(0,1)\) is the residual weight. We use a learnable residual weight and a sigmoid activation to scale it in \((0,1)\). In previous studies, ReLU and GELU (Hendrycks and Gimpel, 2016) have been popular choices for the activation function. In this work, we choose ELU (Clevert et al., 2015), a nonlinear \(\mathcal{C}^{1}\) (continuous and differentiable) activation function with a Lipschitz bound of \(1\). Although ReLU is a Lipschitz function with \(K=1\), it is neither everywhere differentiable nor injective.
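To make the update of Eq. (2) concrete, here is a small, purely illustrative NumPy sketch of one injective residual step; the linear map standing in for the attention sub-layer \(F\), the scalar parameterization of \(\alpha_{l}\), and all shapes are placeholder assumptions rather than the actual TransJect implementation.

```python
import numpy as np

def elu(x):
    """ELU activation: C^1, injective, and 1-Lipschitz."""
    return np.where(x > 0, x, np.expm1(np.minimum(x, 0.0)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def injective_residual(X, F, alpha_raw, L):
    """One injective residual step, Eq. (2): X <- X + (alpha_l / L) * ELU(F(X)).

    alpha_raw is a learnable scalar; the sigmoid keeps the effective residual weight
    alpha_l inside (0, 1), and dividing by the depth L keeps each layer's update a
    small perturbation of the identity, which preserves injectivity."""
    alpha_l = sigmoid(alpha_raw)
    return X + (alpha_l / L) * elu(F(X))

# Toy usage: a fixed random linear map stands in for the attention sub-layer F.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16)) / 4.0
X = rng.normal(size=(8, 16))
X_next = injective_residual(X, lambda Z: Z @ W, alpha_raw=0.0, L=6)
```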
Following Bachlechner et al. (2020), we adopt ReZero (residual with zero initialization) to enforce dynamical isometry and stable convergence. **Lemma 1** (**Residual contractions are injective).: \(f:\mathbf{X}\rightarrow\mathbf{X}+\frac{\alpha_{l}}{L}F(\mathbf{X})\) is injective for \(L\geq 3\). To maintain the dimensionality, Transformer projects the representations to a lower-dimensional space, which reduces the number of synapses among the neurons by a factor of \(H\), the number of heads. As opposed to this, we devise a **M**ixture **of**Expert (MOE) attention (motivated by Shazeer et al. (2017)). With this, we compute \(\mathbf{X}^{(l,e)}\) for each expert \(e\in\{1,2,\cdots,E\}\) in each layer \(l\) using Equation 2, learnable expert weights \(\lambda_{i}^{(l)}\)s, and use a convex combination of them to compute, \[\mathbf{X}^{(l)}=\sum_{e=1}^{E}\lambda_{e}^{(l)}\mathbf{X}^{(l,e)},\quad s.t.\sum_{e= 1}^{E}\lambda_{e}^{(l)}=1. \tag{3}\] Note that \(\lambda_{e}^{(l)}\) is computed for each sample, and the same expert weights are used for all tokens within a sample. **Corollary 1** (**Injectivity of MOE**).: The mapping function defined in Equation 3 is injective. ### Orthogonal Residual FFN (ORF) We reformulate the position-wise FFNs with orthogonal parameterization. FFN layers in Transformer emulate a key-value memory (Geva et al., 2021). We enforce Lipschitz continuity on the feed-forward sublayer to preserve the layer-wise memory. Formally, we define, \[ORF(\mathbf{X}^{(l)})=\\ \mathbf{X}^{(l)}\!+\!\frac{\alpha_{l}}{L}ELU\Big{(}ELU(\mathbf{X}^{(l)} \mathbf{W}_{1}\!+\!\mathbf{b}_{1})\mathbf{W}_{2}\!+\!\mathbf{b}_{2}\Big{)}.\] With both \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) being square orthogonal matrices. **Corollary 2** (**Injectivity of ORF**).: Orthogonal residual FFNs are injective. Proof.: Using the Lipschitz continuity of ELU, we can prove the corollary directly using Lemma 1. ### Injective Token Embedding Transformer introduced conditional encoding to inject tokens' relative and absolute positional information into the self-attention layer. It leverages sinusoidal representations of every position and adds to the original token embeddings to infuse the positional information. To ensure injectivity at each layer, we need to ensure that the initialization of the token embeddings is also injective, _i.e._, no two tokens should have the same embedding. Unfortunately, the addition operator is not injective. Therefore, we compute the initial embedding of token \(x_{i}\) as \(\mathbf{X}_{i}{}^{(0)}=Concat(Emb(x_{i}),PE_{i,})\). Here \(PE\) is defined similarly to the positional encoding proposed by Vaswani et al. (2017). The embedding matrix is orthogonally parameterized. Concatenation ensures the injectivity of embeddings. However, to maintain the dimensionality, we learn the initial embedding and positional encoding at a lower dimensional space, \(\mathbb{R}^{\frac{d}{2}}\), in which \(d\) is the hidden size in the encoder. We define the final encoder mapping for each sublayer \(l\) as a composite mapping defined by, \[SubLayer^{(l)}(\mathbf{X}^{(l-1)})=\\ ORF\circ MOE\circ IR(\mathbf{X}^{(l-1)}). \tag{4}\] **Theorem 2**.: The composite map defined in Equation 4 is an injective Lipschitz with a fixed (data-independent) upper bound. A bounded activation bound ensures that the incremental learning of our encoder model reduces with large depth, which makes our model scalable to larger depths. 
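The concatenation-based injective embedding is easy to illustrate; the hypothetical NumPy snippet below builds a \(d/2\)-dimensional sinusoidal positional encoding and concatenates it with a \(d/2\)-dimensional token embedding instead of adding the two. The random, non-orthogonal embedding table is a placeholder for the orthogonally parameterized one used in the paper.

```python
import numpy as np

def sinusoidal_pe(num_positions, dim):
    """Standard sinusoidal positional encoding of shape (num_positions, dim)."""
    pos = np.arange(num_positions)[:, None]
    i = np.arange(dim)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def injective_embedding(token_ids, emb_table, d):
    """X_i^(0) = Concat(Emb(x_i), PE_i): token embedding and positional encoding each live
    in R^(d/2), so the concatenation is d-dimensional and distinct (token, position) pairs
    cannot cancel each other out, unlike with addition."""
    emb = emb_table[token_ids]                      # (N, d/2) token embeddings
    pe = sinusoidal_pe(len(token_ids), d // 2)      # (N, d/2) positional encodings
    return np.concatenate([emb, pe], axis=-1)       # (N, d)

# Illustrative usage with a random (untrained) embedding table.
rng = np.random.default_rng(0)
vocab, d = 100, 32
emb_table = rng.normal(size=(vocab, d // 2))
X0 = injective_embedding(np.array([5, 17, 17, 3]), emb_table, d)
```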
It further enforces the importance of learning better embeddings in the initial embedding layer, which drives the entire encoding. The runtime complexity of our orthogonal non-normalized attention is \(\mathcal{O}(Nd^{2})\), whereas dot-product self-attention has a runtime complexity of \(\mathcal{O}(N^{2}d+Nd^{2})\). In a comparable setting where \(N>>d\), TransJect should have a lower runtime complexity than Transformer. ## 4 Experimental Setup ### Tasks and Datasets We evaluate TransJect and its variants on seven short- and long-sequence classification tasks and one language modeling task. We choose the IMDb movie review sentiment classification (Maas et al., 2011) and the AGnews topic classification (Zhang et al., 2015) datasets for short text classification - the former one is a binary classification task, whereas the latter one contains four classes. To further highlight the effectiveness of TransJect on longer sequences, we evaluate our model on the LRA benchmark [14]. LRA benchmark consists of five long sequence classification tasks - ListOps [20], Byte-level text classification on IMDb review (CharlMDb) dataset [11], Byte-level document retrieval on AAN dataset [15], Pathfinder [17], and Image classification on the CIFAR-10 dataset [16]. For language modeling, we use Penn TreeBank (PTB) [15] dataset, containing \(100\)M tokens. We provide the details of hyperparameter in Appendix B.1. ### Results **Short sequence classification.** Table 1 shows the performance of the competing models. On IMDb classification, TransJect outperforms Transformer with a \(6.8\%\) margin. TransJect achieves \(3.5\%\) better accuracy than Synthesizer, the best baseline. With zero residual initialization, Transformer can achieve \(2.1\%\) better accuracy in the IMDb classification task. We use an additional ablation of Transformer, in which all the learnable weight matrices are parameterized orthogonally. This improves accuracy on IMDb classification by \(3.8\%\). On the AGnews topic classification task, Random-TransJect achieves \(90.2\%\) accuracy, \(1.1\%\) better than the best baseline. Interestingly, Random-TransJect performs better than TransJect on the AGnews classification task; injecting randomness through randomly initialized eigenvalues aids in \(1.4\%\) performance improvement. Limited contextual information can create difficulty reconstructing \(\mathbf{X}^{T}\mathbf{X}\) from the approximate eigenvalues. Therefore, having randomly-initialized eigenvalues can aid in learning better context when the context itself is limited. To highlight the effectiveness of the MOE module, we evaluate an ablation of our model by dropping the MOE module (reducing the number of experts to 1). Dropping the MOE module reduces the validation accuracy on the IMDb classification task by \(4.7\%\). On the other hand, using only a single head in the original Transformer model reduces its performance on the IMDb classification task by \(2.1\%\). **Long sequence classification.** We evaluate TransJect against Transformer along with several of its recent variants and report the test accuracy in Table 2. Similar to short sequence classification tasks, TransJect is very effective for long sequences and consistently outperforms all the baselines. Out of five tasks, TransJect achieves the best performance in three and beats the best baseline, Skyformer by \(0.2\%\) on the leaderboard. 
TransJect achieves \(2.3\%\) and \(0.2\%\) better test accuracies on ListOps and byte-level text classification tasks than the corresponding best baselines, Big Bird and Skyformer, respectively. Interestingly, Random-TransJect achieves the best performance on the Pathfinder task, with a wide margin of \(1.9\%\). The ListOps task evaluates the ability to learn long-range hierarchical dependencies, whereas the Pathfinder task evaluates the ability to learn spatial dependencies. As argued by Chen et al. (2021), learning both these dimensions poses difficulty for self-attention-based methods. With a superior performance on both tasks, TransJect showcases its effectiveness in learning long-term patterns from both temporal sequences and sequences with different hierarchical dimensions. Moreover, Random-TransJect performs better than TransJect on both the Pathfinder and Image classification tasks, with margins of \(1.1\%\) and \(1.3\%\), respectively. We argue that the sparsity in the inputs of these two visual tasks is difficult to approximate with the eigenvalues, which costs TransJect in these tasks.

**Language modeling.** We report validation and test perplexity on PTB in Table 3 for TransJect and other baselines. Our model achieves \(79\%\) lower test perplexity than the vanilla Transformer. As argued by Bachlechner et al. (2020), loss-free information propagation is required for training large depth models. Due to this, Transformer with ReZero initialization achieves \(75\%\) better performance than the vanilla Transformer that uses uniform residual weights. However, it is worth noting that TransJect achieves \(14\%\) better performance than ReZero initialization, which could be attributed to the inherent injectivity that ReZero fails to ensure. On the other hand, TransJect with random eigenvalues performs poorly due to its inability to encode the inter-dependence between tokens.

\begin{table} \begin{tabular}{l|c|c} \hline \hline **Model** & **IMDb** & **AGnews** \\ \hline Transformer & 81.3 & 88.8 \\ Transformer+ReZero & 83.4 & 89.6 \\ Orthogonal Transformer & 85.1 & 86.3 \\ Linformer [20]\(\dagger\) & 82.8 & 86.5 \\ Synthesizer [14]\(\dagger\) & 84.6 & 89.1 \\ \hline TransJect & **88.1** & 88.8 \\ Random-TransJect & 86.5 & **90.2** \\ \hline \hline \end{tabular} \end{table} Table 1: Text classification accuracy on IMDb and AGnews (results highlighted with \(\dagger\) are taken from Tay et al. (2021)).

## 5 Analysis

To understand the connections between model depth, activation bounds, and entropy, we conduct detailed statistical analyses on our model and Transformer. We use the IMDb and CharIMDb classification tasks for these studies, as representative short- and long-range classification tasks, respectively. We use the outputs inferred by our models on a subsample of the test data for these analyses.

**Activation bounds and entropy.** Continuing our initial discussion on preserving layer-wise distances between tokens, we calculate the distribution of activation bounds at different encoding layers. As defined in Section 3.1, we compute the _activation factor_ for each encoding layer for TransJect and Transformer. Formally, for the \(l^{th}\) layer, we compute the activation factor \[\mathbb{A}^{(l)}=\mathbb{E}_{X}\mathbb{E}_{i\neq j}\Big{[}\frac{||\mathbf{X}_{i}^{(l)}-\mathbf{X}_{j}^{(l)}||}{||\mathbf{X}_{i}^{(0)}-\mathbf{X}_{j}^{(0)}||}\Big{]}. \tag{5}\] Here \(\mathbf{X}^{(l)}\in\mathbb{R}^{N\times d}\) is the hidden representation of the sequence \(\mathbf{X}\) at the \(l^{th}\) layer.
Similarly, we compare the _differential entropy_ (_aka_ entropy) of the hidden representations learned by TransJect at each layer to understand how the entropy state changes across the layers. We calculate the differential entropy as \[entropy^{(l)}\Big{(}\mathbf{X}^{(l)}\Big{)}=\mathbb{E}_{j,h}\Big{[}-\log P(\mathbf{X}_{j,h}^{(l)})\Big{]}=-\mathbb{E}_{j}\Big{[}\int_{\mathbb{H}}P(\mathbf{X}_{j,h}^{(l)})\log P(\mathbf{X}_{j,h}^{(l)})d\mathbf{h}\Big{]} \tag{6}\] where \(P\) is the empirical probability density function of \(\mathbf{X}\). Note that \(\mathbf{X}_{j,h_{i}}^{(l)}=\mathbf{X}_{j,h_{j}}^{(l)}\) for \(h_{i}\neq h_{j}\) leads the entropy to \(-\infty\). Therefore, sparser states always have lower entropy and are more deterministic. At the same time, unorderly, random and stochastic states have higher entropy. We highlight the distribution of activation factors and the entropy of token embeddings at every layer of TransJect and Transformer in Figure 2(a). Under both the Euclidean and dot-product measures, TransJect displays an empirical activation bound of \(\approx 1\). Unlike TransJect, Transformer has much higher empirical activation bounds. Although orthogonal parameterization and ReZero lead to much lower activation bounds, they are still higher than those of our model. Interestingly, Transformer aims to preserve the semantic similarity at the later layers at the expense of distance; however, TransJect can preserve both of them with a tighter bound, leading to a more robust representation for each token. We hypothesize that restricting the distance between a pair of tokens acts as a regularization, improving the encoder's final predictive performance.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline **Model** & **ListOps** & **Text** & **Retrieval** & **Pathfinder** & **Image** & **Avg.** \\ \hline Transformer & 38.4 & 61.9 & 80.7 & 65.2 & 40.6 & 57.4 \\ Transformer+ReZero & 38.3 & 59.1 & 79.2 & 68.3 & 38.6 & 56.7 \\ Orthogonal Transformer & 39.6 & 58.1 & 81.4 & 68.9 & 42.2 & 58.0 \\ Reformer Kitaev et al. (2020)\(\dagger\) & 37.7 & 62.9 & 79.0 & 66.5 & **48.9** & 59.0 \\ Big Bird Zaheer et al. (2020)\(\dagger\) & 39.3 & 63.9 & 80.3 & 68.7 & 43.2 & 59.1 \\ Linformer Wang et al. (2020)\(\dagger\) & 37.4 & 58.9 & 78.2 & 60.9 & 38.0 & 54.7 \\ Informer Zhou et al. (2021)\(\dagger\) & 32.5 & 62.6 & 77.6 & 57.8 & 38.1 & 53.7 \\ Nystromformer Xiong et al. (2021)\(\dagger\) & 38.5 & 64.8 & 80.5 & 69.5 & 41.3 & 58.9 \\ Performers Choromanski et al. (2021)\(\dagger\) & 38.0 & 64.2 & 80.0 & 66.3 & 41.4 & 58.0 \\ Skyformer Chen et al. (2021)\(\dagger\) & 38.7 & 64.7 & **82.1** & 70.7 & 40.8 & 59.4 \\ \hline TransJect & **42.2** & **64.9** & 80.3 & 71.5 & 38.9 & **59.6** \\ Random-TransJect & 40.1 & 64.6 & 80.2 & **72.6** & 40.2 & 59.5 \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracy on the LRA benchmark (results highlighted with \(\dagger\) are taken from Chen et al. (2021)).

\begin{table} \begin{tabular}{l|c|c} \hline \hline **Model** & **Val** & **Test** \\ \hline Transformer & 636.55 & 598.93 \\ Transformer+ReZero & 161.52 & 147.80 \\ Orthogonal Transformer & 179.98 & 165.78 \\ \hline TransJect & **134.82** & **127.59** \\ Random-TransJect & 215.68 & 199.72 \\ \hline \hline \end{tabular} \end{table} Table 3: Perplexity (lower value is better) calculated for language modeling on PTB dataset on validation (‘Val’) and testing (‘Test’) splits.
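Both diagnostics, the activation factor of Eq. (5) and the differential entropy of Eq. (6), can be computed directly from stored layer activations. The sketch below is one possible NumPy implementation; the Monte-Carlo pair sampling and the histogram-based density estimate are our own simplifications, not the exact procedure used for the reported numbers.

```python
import numpy as np

def activation_factor(X_l, X_0, num_pairs=2000, seed=0):
    """Empirical activation factor of Eq. (5): average ratio of pairwise token distances
    at layer l to the corresponding distances at the embedding layer (layer 0)."""
    rng = np.random.default_rng(seed)
    n = X_l.shape[0]
    i = rng.integers(0, n, size=num_pairs)
    j = rng.integers(0, n, size=num_pairs)
    keep = i != j
    num = np.linalg.norm(X_l[i[keep]] - X_l[j[keep]], axis=1)
    den = np.linalg.norm(X_0[i[keep]] - X_0[j[keep]], axis=1)
    return float(np.mean(num / den))

def differential_entropy(X_l, bins=50):
    """Histogram estimate in the spirit of Eq. (6): differential entropy of the hidden
    activations, estimated per token over the hidden dimensions and then averaged."""
    ents = []
    for token_repr in X_l:                       # token_repr has shape (d,)
        p, edges = np.histogram(token_repr, bins=bins, density=True)
        width = np.diff(edges)
        mask = p > 0
        ents.append(-np.sum(p[mask] * np.log(p[mask]) * width[mask]))
    return float(np.mean(ents))

# Toy usage on random "layer 0" and "layer l" activations of shape (N tokens, d dims).
rng = np.random.default_rng(0)
X0, Xl = rng.normal(size=(64, 128)), 1.5 * rng.normal(size=(64, 128))
print(activation_factor(Xl, X0), differential_entropy(Xl))
```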
We computed the Pearson's correlation between the activation bound of different encoder models (TransJect, Transformer, Orthogonal Transformer and Transformer+ReZero) and the final predictive performance and obtained a correlation score of \(-0.81\) with p-value \(0.09\). A similar correlation score of \(-0.68\) is obtained on the CharIMDb classification task. These results highlight the negative linear relationship between the final predictive performance and the average activation bound of the encoders. Therefore, we conclude that a tighter activation bound could lead to better predictive performances on short- and long-range encoding tasks. The average entropy obtained by TransJect on IMDb classification is \(-1.3\) with a standard deviation of \(0.12\). On the other hand, Transformer obtains an average entropy of \(5.80\) with a standard deviation of \(0.13\). With a higher activation factor, Transformer projects the tokens onto much sparser and random subspaces, increasing the system's overall entropy. On the other hand, representations learned by TransJect are more orderly and thus have low entropy. Moreover, the entropy of TransJect does not increase perpetually, which, according to the second law of thermodynamics, suggests a reversible process. We compute Spearman's rank and Pearson correlation to understand the relationship between the average activation bound and the average entropy. We observe a Spearman's rank correlation value of \(1.0\) on the IMDb classification task, with a p-value of \(0.001\). The Pearson's correlation also stands at \(0.69\). These analyses indicate the positive relationship between the two measures, i.e., having a lower activation bound lowers the overall entropy of the model representations. The same argument can be used to explain the higher entropy in later layers of the Transformer. Our analyses also confirm the positive correlation between model depth, activation and entropy for Transformer (see Figure 6 in Appendix C). It is noteworthy that dot-product self-attention computes the query-key similarity between different tokens. Contrarily, in our proposed attention mapping, we compute the similarities between neurons, _i.e._, similarities between different projection vectors, which, in turn, enforces our model to learn different projection dimensions and reduces the overall entropy of the system. Additionally, we report the entropy of the representations at each attention head and expert level in Figure 2(b). Although TransJect displays higher entropy at the expert level, it is still lower than that of the corresponding attention heads of the Transformer model. A sparse load-balancing capability of the expert model (see Figure 7 in Appendix C) ensures lower entropy throughout the model training. Further, the changes in inter-quartile ranges in the later layers of the Transformer show increasing randomness at larger depths, which can also be attributed to the higher activation factors. On the other hand, TransJect stabilises the entropy in the later layers.

Figure 3: We observe higher activation factors for Transformer, whereas the median empirical activation factor for TransJect is \(\approx 1\). Lower entropy for TransJect indicates that at the neuron level as well as at the expert level, the representations are more orderly.

**Preserving distances between tokens.** Figure 4 shows representations obtained on tokens of a sample text at different encoder layers, projected onto \(2\)-D. We use isometric mapping (Tenenbaum et al., 2000) for projecting the high dimensional vectors to the \(2\)-D space. TransJect maintains the initial embedding space throughout the layers, showing robustness in learning initial embeddings. On the other hand, Transformer expands the projection subspaces to more sparse subspaces, even though they project semantically similar tokens closer.

**Efficiency comparison.** We report the test-time speed on the CharIMDb classification task with different lengths of input sequences in Table 4. Despite having \(50\%\) more parameters (see Table 5 in Appendix C) than the vanilla Transformer, on average, we observe a \(13\times\) speedup for TransJect, which increases to \(26\times\) for longer sequences. From the definitions of thermodynamics, a higher entropy leads to an irreversible process. This means that a model with a high activation bound is more irreversible and, therefore, less efficient. On the other hand, TransJect exhibits lower entropy and has higher available energy (from principles of thermodynamics), leading to more efficiency.

## 6 Conclusion

In this work, we introduced TransJect, a new learning paradigm in language understanding by enforcing a distance-preserving criterion in the multi-layer encoder models. We derived that by enforcing orthogonal parameterization and utilizing smoother activation maps, TransJect can preserve layer-wise information propagation within a theoretical bound, allowing the models to regularize inherently. We further argued in favor of injectivity and Lipschitz continuity for better generalization and efficiency. Our empirical analyses suggested a superior performance of TransJect over other self-attention-based baselines. We observed lower entropy with TransJect with low variance, confirming the reversible process of statistical mechanics. These findings will encourage practitioners to explore the natural laws of science for building better, more intuitive, and cognitive AI.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**Speed \(\uparrow\)**} \\ \cline{2-5} & \(1k\) & \(2k\) & \(3k\) & \(4k\) \\ \hline Transformer & 1.0 & 1.0 & 1.0 & 1.0 \\ Transformer+ReZero & 1.0 & 1.0 & 1.0 & 1.1 \\ Orthogonal Transformer & 1.0 & 1.0 & 1.1 & 1.4 \\ \hline TransJect & **5.0** & **9.6** & **12.8** & **26.3** \\ \hline \end{tabular} \end{table} Table 4: Test-time inference speed on long-range text classification tasks on various text lengths. The reported numbers are speedups w.r.t. Transformers.

Figure 4: Isomap plot of layer-wise token embeddings learned by (a) TransJect and (b) Transformer on the text “_This movie is terrible, but it has some good effects._” We highlight four semantically-relevant words and visualize their positions to understand how different models project tokens onto different subspaces in different encoding layers. TransJect preserves the relative distances between the tokens and their interconnectedness over the layers, indicating that the projection manifolds are topologically similar.

## Ethical Considerations

**Limitations.** As discussed previously, \(\mathsf{TransJect}\) uses approximated eigenvalues to approximate the gram matrix \(\mathbf{X}^{T}\mathbf{X}\). Therefore, our proposed model could be ineffective for sequences with limited context. Additionally, our model usually exhibits higher forward pass time than non-parameterized models due to orthogonal parametrization.
**Intended Use and Broader Impact.** The empirical analyses and insights gathered in this work encourage us to explore the dependence between sparsity, entropy and model representations. Our study can be further utilized in developing more efficient larger-depth language models. **User Privacy.** There is no known user privacy concern with this work. **Potential Misuse.** There is no known potential misuse of this work. **Reproducibility.** We furnish the experimental details in Appendix B.1 for reproducing the results reported in this paper. We have open-sourced the source code at [https://github.com/victor7246/TransJect.git](https://github.com/victor7246/TransJect.git). **Environmental Impact.** On the language modeling task, each training iteration takes \(\sim 0.6\) seconds on a single Tesla T4 GPU. Total CO2 emissions are estimated to be 0.06 kg. Estimations were conducted using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
2301.02325
Electronic and structural properties of Möbius boron-nitride and carbon nanobelts
Using the semiempirical tight binding method as implemented in the xTB program, we characterized M\"obius boron-nitride and carbon-based nanobelts with different sizes and compared them with normal nanobelts. The calculated properties include the infrared spectra, the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), the energy gap, the chemical potential, and the molecular hardness. The agreement between the peaks positions from theoretical infrared spectra compared with experimental ones for all systems, validate the used methodology. Our findings show that for the boron-nitride based nanobelts, the calculated properties have opposite monotonic relationship with the size of the systems whereas, for the carbon-based, the properties show the same monotonic relationship for both types of nanobelts. Also, the torsion presented on the M\"obius nanobelts, in the case of boron-nitride, induced an inhomogeneous surface distribution for the HOMO orbitals. In all cases, the properties vary with the increase in the size of the nanobelts indicating that it is possible to choose the desired values by changing the size and type of the systems.
C. Aguiar, N. Dattani, I. Camps
2023-01-05T23:08:33Z
http://arxiv.org/abs/2301.02325v1
# Electronic and structural properties of Mobius boron-nitride and carbon nanobelts ###### Abstract Using the semiempirical tight binding method as implemented in the xTB program, we characterized Mobius boron-nitride and carbon-based nanobelts with different sizes and compared them with normal nanobelts. The calculated properties include the infrared spectra, the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), the energy gap, the chemical potential, and the molecular hardness. The agreement between the peaks positions from theoretical infrared spectra compared with experimental ones for all systems, validate the used methodology. Our findings show that for the boron-nitride based nanobelts, the calculated properties have opposite monotonic relationship with the size of the systems whereas, for the carbon-based, the properties show the same monotonic relationship for both types of nanobelts. Also, the torsion presented on the Mobius nanobelts, in the case of boron-nitride, induced an inhomogeneous surface distribution for the HOMO orbitals. In all cases, the properties vary with the increase in the size of the nanobelts indicating that it is possible to choose the desired values by changing the size and type of the systems. keywords: nanotechnology, nanobelts, boron nitride, carbon + Footnote †: journal: ## 1 Introduction Carbon (C) can present three different forms of hybridization (\(sp\), \(sp^{2}\) and \(sp^{3}\)) and the way in which they can combine with other elements is the basis of countless researches [1; 2]. The combination of this element on a nanometric scale gave rise to structures called nanocarbons [3]. They are characterized by showing different geometric and dimensional configurations, such as fullerenes, carbon nanotubes, graphene nanoribbons, graphene oxides and nanodiamonds [1; 4]. Such structures have attracted interest for promising applications in nanobiomedicine [5; 6] and optoelectronics [7; 8]. According to the size and/or different topologies these nanomaterials cause different reactivity in contact with other materials [2; 9], favored or not, according to the geometry of the nanomaterial, which varies according to the increase in the pyramidalization angle and misalignment of the \(\pi\) orbitals between the C atoms [2]. Thus, different syntheses and functionalization have been carried out to obtain different nanostructures to achieve greater compatibility with other materials, seeking to improve and expand nanotechnological applications [5; 9; 10; 11]. Povic et al., made a breakthrough with the synthesis of carbon nanobelts (CNBs), whose simple structure in the form of a ring or belt generates two faces, one internal and one external, not convertible to each other [12]. Carbon nanobelts represent segments of single walled carbon nanotubes containing a benzene ring cycle with \(p\) orbitals aligned in a plane [13]. Such behavior allows them to be classified as armchair, zigzag or chiral nanoribbons according to the chirality index [13; 14]. In addition to the ring shape, carbon nanobelts are attractive molecules due to their synthetic challenges and differentiated properties [15; 16]. Its formulation covers concepts of conjugation, aromaticity and strain, also providing important information on chirality and bottom-up synthesis of C nanotubes, which continues to be a challenge and has caused limitations in its applications [17]. 
For the synthesis of nanobelts, three steps must be considered: the first consists of macrocyclization from carbon sources; the second is the formation of the belt, creating the double-stranded structure; and the last is the stress-induction step required to bend the \(sp^{2}\)-hybridized carbon skeleton into a cylindrical topology [12; 18]. Recently, however, Segawa et al. obtained a new, structurally uniform structure that can be prepared by a bottom-up method in 14 synthesis steps [17]. These are Möbius carbon nanobelts (MCNBs), in which the CNB structure, when subjected to torsion, should manifest different properties and molecular motions compared with nanocarbons having a common belt topology [12]. Density functional theory (DFT) calculations show that MCNBs have a higher strain energy than CNBs of the same size [17]. However, producing torsion in CNBs can be difficult to control, as strain energy is the major obstacle in the synthesis of MCNBs [12]. For this reason, saturated ligands (-CH\({}_{2}\)O-) or chalcogen-atom ligands (-S-) are used to reduce and control the stress caused by the Möbius shape [19; 20]. Nonetheless, calculations of strain energies showed that MCNBs are synthetically accessible and that the strain decreases with increasing MCNB size [21]. Nuclear magnetic resonance spectroscopy data and theoretical calculations show that the twisted part of the Möbius band moves rapidly in solution [17]. Furthermore, the chirality arising from the Möbius structure has been demonstrated experimentally using chiral separation by high-performance liquid chromatography (HPLC) and circular dichroism (CD) spectroscopy [17]. In addition, spectroscopy data upon excitation at 380 nm showed blue-green fluorescence with a quantum yield above 10% [17]. Given the above, new synthetic-route strategies, different topologies and varied functionalizations, combined with strain calculations, may contribute to the development of new nanocarbon materials from CNBs, including Möbius ones. This would make it possible to properly relate structure and function for applications in several areas. Thus, the objective of this work is to determine the electronic and structural properties of carbon and boron-nitride nanobelts and Möbius nanobelts of different sizes. The knowledge of such properties can help in developing sensors and filters for heavy metals, greenhouse gases and hazardous gases. ## 2 Materials and Methods In this work, two types of nanobelts were used for the C and B-N systems. The first type consists of normal nanobelts, whereas the second consists of Möbius (twisted) nanobelts. The structures were generated starting from a cell with 2 units of a (10,0) nanosheet repeated N times (10, 15, 20, 25 and 30) in the z direction and then wrapped 360\({}^{o}\). After that, the periodicity was removed and the border atoms were passivated with hydrogen. In the case of the Möbius nanobelts, after the repetition the nanobelts were twisted 180\({}^{o}\) and then wrapped. All the structures were generated using the Virtual NanoLab Atomistix ToolKit software [22]. The nomenclature used to identify the systems is as follows: BNNB\({}_{\rm N}\) (CNB\({}_{\rm N}\)) for boron-nitride (carbon) nanobelts and MBNNB\({}_{\rm N}\) (MCNB\({}_{\rm N}\)) for Möbius boron-nitride (carbon) nanobelts. In all cases, N indicates the number of repetitions. Using the semiempirical tight-binding method as implemented in the xTB program [23], the electronic and structural parameters were calculated.
The calculations were done using the GFN2-xTB method, an accurate self-consistent method that includes multipole electrostatics and density-dependent dispersion contributions [24], with the extreme optimization level, which ensures an energy convergence of \(5\times 10^{-8}\) E\({}_{\rm h}\) and a gradient norm convergence of \(5\times 10^{-5}\) E\({}_{\rm h}\)/a\({}_{0}\) (a\({}_{0}\) is the Bohr radius). For each optimized structure, the highest occupied molecular orbital, HOMO (\(\varepsilon_{H}\)); the lowest unoccupied molecular orbital, LUMO (\(\varepsilon_{L}\)); the energy gap (\(\Delta\varepsilon\)) between the HOMO and LUMO orbitals (\(\Delta\varepsilon=\varepsilon_{L}-\varepsilon_{H}\)); the chemical potential (\(\mu\)); the molecular hardness (\(\eta\)); and the infrared spectra were determined. Within the approximation that ignores orbital relaxation after an electron is removed from the system (Koopmans' theorem [25, 26, 27, 28] together with Janak's theorem [29]), it is possible to estimate the chemical potential (\(\mu\)), the molecular hardness (\(\eta\)) [30], and the electrophilicity index (\(\omega\)) [31] from the HOMO and LUMO energies \(\varepsilon_{H}\) and \(\varepsilon_{L}\) as follows: \[\mu\cong\frac{\varepsilon_{L}+\varepsilon_{H}}{2} \tag{1}\] \[\eta\cong\frac{\varepsilon_{L}-\varepsilon_{H}}{2} \tag{2}\] \[\omega=\frac{\mu^{2}}{2\eta} \tag{3}\] ## 3 Results and discussion Figures 1 and 2 show the top view of the optimized structures. Panel A shows the nanobelts and panel B shows the Möbius nanobelts. With the increase in the number of repetitions (n), the diameter increases as well, going from 13.81 Å (13.77 Å) to 40.79 Å (40.77 Å) for the boron-nitride (carbon) nanobelts. The minimum number of repetitions used was 10, to avoid overly strained structures. The calculated infrared spectra for each system are shown in figures 3 and 4. At first sight (figure 3), the spectra for both systems, BNNB and MBNNB, are very similar, indicating that the torsion of the Möbius nanobelts did not appreciably change the principal oscillation modes. In both cases, the B-N stretching modes (in-plane and out-of-plane) and the radial R mode (out-of-plane buckling) are observed. The oscillations around 800 cm\({}^{-1}\) correspond to the out-of-plane buckling mode. All the resonances in the high-frequency regime above 1200 cm\({}^{-1}\) consist of transverse optical (T) and longitudinal optical (L) phonon modes. Oscillations around 1200 cm\({}^{-1}\) and 1380 cm\({}^{-1}\) correspond to bond-bending or T modes, and those around 1340 cm\({}^{-1}\) and 1420 cm\({}^{-1}\) correspond to bond-stretching or L modes [33; 34; 35; 36]. Figure 1: Top view of: A) Boron–nitride nanobelts (BNNB) with minimum/maximum diameter and repetition. B) Möbius boron–nitride nanobelts (MBNNB). Image rendered with the VMD [32] software. The case of the carbon nanobelt systems, CNB and MCNB, is different. For both systems, the fingerprint region (\(600-1500\) cm\({}^{-1}\)) is visible, but the peaks have very dissimilar intensities, being greater for the MCNB structures. For carbon-based organic systems, the resonances around \(900-675\) cm\({}^{-1}\) consist of out-of-plane C-H oscillations, those around \(1500-1400\) cm\({}^{-1}\) were identified as C-C stretches (in-ring modes), and those around \(3100-3000\) cm\({}^{-1}\) as C-H bond stretches [37]. These results indicate that the MCNB structures oscillate more freely than the CNB ones. The electronic properties of the boron-nitride and carbon-based systems are shown in figure 5. Figures 5(a) and 5(b) show the energies of the HOMO and LUMO frontier orbitals.
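Because Eqs. (1)-(3) involve only the two frontier-orbital energies, the descriptors can be evaluated with a few lines of arithmetic once \(\varepsilon_{H}\) and \(\varepsilon_{L}\) are known. The short Python sketch below illustrates this; the orbital energies used are illustrative placeholders, not values computed in this work.

```python
# Frontier-orbital reactivity descriptors of Eqs. (1)-(3).
# The HOMO/LUMO energies below are illustrative placeholders, not values from this work.

def descriptors(e_homo, e_lumo):
    """Return gap, chemical potential, hardness and electrophilicity (same units as input)."""
    gap = e_lumo - e_homo              # energy gap
    mu = (e_lumo + e_homo) / 2.0       # chemical potential, Eq. (1)
    eta = (e_lumo - e_homo) / 2.0      # molecular hardness, Eq. (2)
    omega = mu ** 2 / (2.0 * eta)      # electrophilicity index, Eq. (3)
    return gap, mu, eta, omega

gap, mu, eta, omega = descriptors(e_homo=-7.0, e_lumo=-1.0)  # placeholder energies in eV
print(f"gap={gap:.2f} eV  mu={mu:.2f} eV  eta={eta:.2f} eV  omega={omega:.2f} eV")
```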
To some extent, from the energies of these orbitals, it is possible to know how reactive the system is. The electron-donor character (electron-donor capacity) is measured by the HOMO energy whereas the electron-acceptor character (resistance to accepting electrons) is measured by the LUMO energy. From these figures, we can see that, in case of HOMO energy, the BNNB and MBNNB have opposite behavior. With the increase of the number of repeat units, the capacity to donate electrons is decreased for the MBNNB system and increased for the BNNB. On the other hand, the behavior of LUMO energy with the increase Figure 2: Top view of: A) Carbon nanobelts (CNB) with minimum/maximum diameter and repetition. B) Möbius carbon nanobelts (MCNB). Image rendered with VMD [32] software. Figure 4: Calculated infrared spectra for carbon nanobelts. Figure 3: Calculated infrared spectra for boron–nitride nanobelts. of systems size is similar, increasing the resistance to accept electrons. Top view of the 3-D diagrams of the HOMO and LUMO surfaces for the BNNB\({}_{20}\) and MBNNB\({}_{20}\) are shown in figures 6 (the 3-D diagrams for all the structures are shown in figures S1 and S2 of Supplementary material). The distribution of HOMO and LUMO surfaces for the BNNB are, as expected, in accordance with the symmetry of the system being distributed homogeneously over all the structure. In case of the MBNNB, the torsion on the nanobelt induced an strain on the structure that in turn, modify how the orbitals are distributed. This modification is stronger for the HOMO where the orbital volume is redistributed having regions with smaller/greater volumes. As the volume of the orbitals are proportional to the probability to find the electrons, the electron-donor regions changed too, with low/high localized HOMO zones. On the other hand, the LUMO surface shows very little inhomogeneities. This behavior is the same for all the MBNNB as shown in figure S2. The gap (\(\Delta\varepsilon\)), chemical potential (\(\mu\)), molecular hardness (\(\eta\)), and electrophilicity index (\(\omega\)) can be used to estimate the chemical reactivity and hardness of the system together with Figure 5: Calculated electronic properties for boron–nitride nanobelts. Figure 6: Structures, HOMO and LUMO surfaces for BNNB\({}_{20}\) and MBNNB\({}_{20}\) systems. Red (green) color represents negative (positive) values. Orbital surfaces rendered with with isovalue equal to 0.001 and with VMD [32] software. its molecular stability. Molecules with high (low) gap are those with high (low) molecular stability [38]. Positive values for \(\eta\) indicates that the redistribution of electrons in the molecule is energetically unfavorable. Also, the higher \(\eta\) value is, the more chemically stable the molecule is and, therefore, harder the rearrangement of its electrons [39]. The electrophilicity index (\(\omega\)), can be used as a measure of the molecule energy lowering due to the maximal electron flow between the environment and the molecule [31]. Comparing the values of \(\Delta\varepsilon\), \(\mu\), \(\eta\) and \(\omega\) for BNNB and MBNNB systems, we can see that the values are on the same order of magnitude. The main difference is in how these properties change with the increase of the number of repeated units used to build the nanobelts. These can be used to design structures with a fine control of these properties. Figure 7 shows the electronic properties for the carbon nanobelts. 
All the graphs shown a different behavior when compare to boron-nitride nanobelts. In this case, the properties for both systems, CNB and MCNB, behave with the same monotonic variation. The 3-D top view diagrams of the HOMO and LUMO surfaces for the CNB\({}_{20}\) and Figure 7: Calculated electronic properties for carbon nanobelts. Figure 8: Structures, HOMO and LUMO surfaces for CNB\({}_{20}\) and MCNB\({}_{20}\) systems. Red (green) color represents negative (positive) values. Orbital surfaces rendered with with isovalue equal to 0.001 and with VMD [32] software. MCNB\({}_{20}\) are shown in figures 8 (the 3-D diagrams for all the structures are shown in figures S3 and S4 of Supplementary material). In this case, the torsion on the nanobelt did not modify substantially either orbitals surfaces for any system. The values of \(\Delta\varepsilon\), \(\mu\), \(\eta\) and \(\omega\) for CNB and MCNB systems, are similar among them having small variations with the increase of the number of repeat units used to build the nanobelts. As the properties also vary with the increase of repeat units, the change in n can tailor the electronic properties of the carbon nanobelts. The inhomogeneous distribution of the HOMO surfaces can be correlated with the molecular hardness, \(\eta\). As \(\eta\) is related with how energetically unfavorable is to redistribute the electrons in a molecule, higher values implies a harder rearrangement of them. Comparing the values of \(\eta\) for the MBNNB (5(e)) and MCNB (7(e)) systems, we can see that \(\eta\) is almost eight times bigger for boron-nitride nanobelts than for carbon based nanobelts. A lower value of \(\eta\) imply that the electrons can redistribute easily through the whole structure resulting in a more homogeneous HOMO surface. ## 4 Conclusions In this work we studied the structural and electronic properties of boron-nitride and carbon based Mobius nanobelts and compare their properties with simple nanobelts. For all systems, the main peaks in the infrared calculated spectra are in accordance with the experimental one, indicating that the theoretical methodology used here is suitable to determine other properties. The electronic properties shown differences between both boron-nitride nanobelts. Whereas the LUMO energy for both systems, BNNB and MBNNB, have similar monotony, the other properties shown opposite behavior (different monotony). All the properties for the carbon based nanobelts have the same monotonic behavior for all the properties. Finally, the inhomogeneous distribution of the HOMO surface for the MBNNB was related with the high values of the molecular hardness. In all cases, the properties vary with the increase of the number of repeat units indicating that it is possible to choose the desired values changing the size and type of the systems. ## Acknowledgements We would like to acknowledge financial support from the Brazilian agencies CNPq, CAPES and FAPEMIG. Part of the results presented here were developed with the help of a CENAPAD-SP (Centro Nacional de Processamento de Alto Desempenho em Sao Paulo) grant UNICAMP/FINEP-MCT, CENAPAD-UFC (Centro Nacional de Processamento de Alto Desempenho, at Universidade Federal do Ceara), and Digital Research Alliance of Canada (via project bmh-491-09 belonging to Dr. Nike Dattani), for the computational support.
2308.01327
Careful Whisper -- leveraging advances in automatic speech recognition for robust and interpretable aphasia subtype classification
This paper presents a fully automated approach for identifying speech anomalies from voice recordings to aid in the assessment of speech impairments. By combining Connectionist Temporal Classification (CTC) and encoder-decoder-based automatic speech recognition models, we generate rich acoustic and clean transcripts. We then apply several natural language processing methods to extract features from these transcripts to produce prototypes of healthy speech. Basic distance measures from these prototypes serve as input features for standard machine learning classifiers, yielding human-level accuracy for the distinction between recordings of people with aphasia and a healthy control group. Furthermore, the most frequently occurring aphasia types can be distinguished with 90% accuracy. The pipeline is directly applicable to other diseases and languages, showing promise for robustly extracting diagnostic speech biomarkers.
Laurin Wagner, Mario Zusag, Theresa Bloder
2023-08-02T15:53:59Z
http://arxiv.org/abs/2308.01327v1
Careful Whisper - leveraging advances in automatic speech recognition for robust and interpretable aphasia subtype classification ###### Abstract This paper presents a fully automated approach for identifying speech anomalies from voice recordings to aid in the assessment of speech impairments. By combining Connectionist Temporal Classification (CTC) and encoder-decoder-based automatic speech recognition models, we generate rich acoustic and clean transcripts. We then apply several natural language processing methods to extract features from these transcripts to produce prototypes of healthy speech. Basic distance measures from these prototypes serve as input features for standard machine learning classifiers, yielding human-level accuracy for the distinction between recordings of people with aphasia and a healthy control group. Furthermore, the most frequently occurring aphasia types can be distinguished with \(90\%\) accuracy. The pipeline is directly applicable to other diseases and languages, showing promise for robustly extracting diagnostic speech biomarkers. Laurin Wagner\({}^{1}\), Mario Zusag\({}^{1}\), Theresa Bloder\({}^{1}\)\({}^{1}\)nyra health GmbH, Austria [email protected], [email protected], [email protected] **Index Terms**: speech recognition, aphasia classification, interpretable speech biomarkers ## 1 Introduction Aphasia is defined as an acquired language disorder following brain injury [1], for instance, after stroke or traumatic head injury. Following a stroke, the most frequently occurring aphasia types are global, anomic, Wernicke's, and Broca's aphasia [2]. Kirshner and Wilson [3] summarized the characteristics according to these syndromes. In Broca's aphasia, the speech pattern is nonfluent, often referred to as "agrammatic" or "telegraphic". Patients with Broca's aphasia often make phonemic errors that are inconsistent from utterance to utterance. In contrast, the speech pattern of patients with Wernicke's aphasia is effortless, sometimes even overly fluent, containing verbal paraphasiasi, neologisms, and jargon productions. Auditory comprehension is impaired, which often renders patients unconscious to their nonsense speech productions. Global aphasia might be considered a summation of the symptoms observed in Broca's and Wernicke's aphasia, characterized by nonfluent speech paired with poor comprehension skills. Finally, in anomic aphasia, the main deficit concerns word access or retrieval from the mental lexicon which is characterized by pauses and circumlocutions. The distinction between these four aphasia types is, however, not always that clear cut and, several mixed aphasia types exist. In clinical practice, therapeutic strategies to approach any language-related deficits have to be aligned with the particular symptoms to maximize rehabilitative potential [4]. The accurate classification of aphasia is therefore highly relevant. However, the process of correctly evaluating these subtleties of speech and language can be quite time-consuming. Employing automated approaches on speech recordings with interpretable insights would enable clinicians to initiate speech therapy more quickly, thus expediting rehabilitation. Several attempts have been made to classify different aphasia types and other speech pathologies from manual transcripts [5, 6]. Fraser et al. 
[5] conducted a study with the objective of discarning between individuals diagnosed with semantic dementia, progressive nonfluent aphasia and healthy controls, utilizing features extracted from manual transcripts for syntactic complexity and measures of fluency. Fromm et al. [6] employed the CHAT format manual transcriptions [7] to extract features on subtasks of AphasiaBank [8] data and study k-means clusters of these features. Other works have developed approaches to detect aphasia automatically from spontaneous speech [9][10]. Notably, Chatzoudis et al. [11] use the XLSR-53 model from Conneau et al. [12], which was pre-trained on multiple languages using the contrastive self-supervised task and wav2vec2.0 architecture from Baewski et al. [13] to extract language-invariant linguistic features for classifying aphasia. These models learn high-quality audio representations using Transformer encoder blocks [14] and are successively fine-tuned for downstream tasks. To fine-tune these models, Conneau et al. [12] have added a classifier representing the output vocabulary of the respective downstream task on top of the XLSR-53 model by training on labeled datasets using a Connectionist Temporal Classification (CTC) loss [15]. Even though the underlying wav2vec2.0 architecture contains multiple Transformer layers, which can encode contextual information, fine-tuned XLSR-53 models without an additional language model tend to produce more literal, acoustic outputs than the encoder-decoder Whisper model from Radford et al. [16], therefore leaving filler words and recurring utterances as artifacts. Whisper was trained in a supervised fashion on weakly-supervised large-scale raw text labels without significant text standardization, such that the decoder part of Whisper works as an audio-conditioned language model and eliminates many filler words, recurring utterances and other artifacts. We propose to combine the output of these two models and apply carefully selected feature extraction methods on aligned and annotated transcripts to build prototypes of healthy speech. These prototypes are used to normalize features for inference in order to create interpretable insights for anomalous speech recordings. The contributions of our work are as follows: * With a fully automated approach, we achieve human-level accuracy on distinguishing speech recordings of aphasic patients from a healthy control group. * To the best of our knowledge, it is the first approach exploiting the different decoding mechanisms of CTC and encoder-decoder based methods to create rich annotations for automatic feature extraction. * With a fully automated approach, we achieve over 90% accuracy on classifying speech between a healthy control group, anomic, Wernicke's and Broca's aphasia. * We provide several unseen scores for improving the accuracy of automatically assessing speech anomalies. ## 2 Proposed method In this section, we describe the proposed pipeline for creating robust speech characteristics for assessing speech impairments. The pipeline consists of a prototype construction and classification phase. Both phases use feature extraction methods in three successive steps with the difference that the prototype phase uses extracted features from a population of healthy speech samples to estimate healthy speech distributions, while the classification phase uses distance measures from these distributions to generate classifiable features. 
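In essence, each healthy-speech prototype is a per-score distribution estimated from healthy recordings, and a new recording is then described by how far its scores fall from those distributions. The minimal Python sketch below illustrates this two-phase idea; the score names and values are made-up placeholders, and the exact normalization used in the experiments is the one given in Section 3.1.

```python
import numpy as np

# Phase 1 (prototype construction): estimate a normal distribution per score
# from healthy speech samples. Score names and values are made-up placeholders.
healthy_scores = {
    "words_per_second": np.array([2.1, 2.4, 1.9, 2.2, 2.3]),
    "pause_per_word":   np.array([0.05, 0.04, 0.06, 0.05, 0.07]),
}
prototypes = {name: (vals.mean(), vals.std(ddof=1)) for name, vals in healthy_scores.items()}

# Phase 2 (classification): describe a new recording by how far each of its
# scores falls from the corresponding healthy prototype (simplified distance).
def distance_features(scores, prototypes):
    return {name: abs(value - prototypes[name][0]) / prototypes[name][1]
            for name, value in scores.items()}

print(distance_features({"words_per_second": 1.1, "pause_per_word": 0.20}, prototypes))
```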
We have used the 89 hours of free speech samples from the Europarl speech translation corpus [17] to construct the healthy prototypes. The three successive steps consist of first constructing acoustic and clean transcripts leveraging the speech recognition models XLSR-53 and Whisper on speech chunks as described in Section 2.1. We have used the large version of XLSR-53 with 24 Transformer encoder blocks and the large version of Whisper. Second, we enrich the aligned transcripts with fully-automated annotations for pauses, filler words, and part-of-speech (POS) tags. Third, we are computing scores for coherence, fluency, syntax, lexical richness, and pronunciation as described in Section 2.2. For classification, speech recordings are fed into the alignment, annotation and score pipeline to compute the distance between each score and the score prototypes. These distances are then used for training simple machine learning classifiers and are used for the experiments in Section 3. The steps for computing prototypes are portrayed in Figure 1. ### Alignment and annotation The whole pipeline operates directly on speech waves, which are sampled at \(16\,\mathrm{kHz}\). The speech wave is fed into Whisper to produce syntactically correct transcripts containing punctuations, removing single utterances, repetitions and other artifacts, and into XLSR-53 to produce the more literal and standardized outputs. In the following, we will call the decoded XLSR-53 output _acoustic_ and the Whisper output _clean_ transcripts. Due to the configuration of wav2vec2.0, tokens are decoded with a \(20\,\mathrm{ms}\) stride resulting in precise timing information on token level and by extension on word level [18]. Whisper provides timings on a sentence level only. We combine the strengths of these two transcripts by aligning their standardized output via the minimization of the Levenshtein distance [19] as portrayed in Figure 1. This enables us to identify unmatched words or utterances and tokens as potential filler words or recurring utterances and allows us to augment the clean transcripts with accurate timings for the matched tokens from the acoustic transcripts in a fully-automated fashion. The recordings were segmented into contiguous speech chunks, each containing a minimum of 200 words, based on clean transcriptions. To ensure accurate syntax scores, we confirmed that the final word in each chunk corresponded to the end of a sentence, with Whisper providing punctuation marks. The aligned acoustic, accurate timings, and clean text data were used in our annotation process, illustrated in Figure 1. In this step, we are augmenting the aligned transcripts with POS tags using RoBERTa from Liu et al. [20], detect pauses from silence tokens with more than \(300\,\mathrm{ms}\) of duration and filler words from timings and unmatched utterances. These augmented transcripts are then fed into our score pipeline, as described in Section 2.2. ### Scores for language abilities The goal of the proposed scores is to capture multiple aspects of language abilities as shown in Table 1. We accumulate the distribution of each score and approximate it with a normal distribution, since they fit the empiric distributions well and serve as prototypes for healthy speech. #### 2.2.1 Fluency The **words per second** score is computed on clean transcripts, while **percentage time spoken** uses the aggregated time of the acoustic transcripts divided by the total time of the provided audio chunk. 
For measuring the **productive time ratio**, we divide the strict spoken time (discounting pauses) of the acoustic transcript by the strict spoken time of the clean transcript. We count every successive set of silence tokens from acoustic transcripts that are longer than \(300\,\mathrm{ms}\) as a pause to compute **pause length** scores. Sampling all pause lengths as a distribution gives insights into the frequency and duration of speech breaks. We use different quantiles (10, 25, 50, 75, 95) of this distribution as features. **Pause distance** by contrast computes the time between successive pauses and again, samples the pause distances as a distribution and uses quantiles (10, 25, 50, 75, 95) as features. The **pause per word** score is defined as the ratio of pauses longer than \(300\,\mathrm{ms}\) and the total number of words from the clean transcript. We also included two phoneme scores, for which we use the automatic phonemizer from Bernard and Titieux [21] to transform the clean and acoustic text into phonemes. For **mean phoneme length nouns**, we aggregate and average the lengths of all words tagged as nouns from clean transcripts. Finally, the **phonemes per second** score is computed on the phonemized version of clean transcripts. \begin{table} \begin{tabular}{l l} \hline \hline **Language ability** & **Feature** \\ \hline Fluency & Words and phonemes per second \\ & Percentage time spoken \\ & Productive time ratio \\ & Pause length and distance \\ & Pause per word \\ & Mean phoneme length nouns \\ \hline Lexical richness & TTR, MATTR \\ & gzip ratio \\ & HD-D, MTLD \\ & Word information \\ \hline Syntax & Noun, verb and adjective ratio \\ & Grammar acceptance \\ & Mean sentence length \\ \hline Pronunciation & Word error rate acoustic \\ & Character error rate acoustic \\ \hline Coherence & CTRLEval \\ & Word vector coherence \\ & GPT2 perplexity \\ \hline \hline \end{tabular} \end{table} Table 1: Scores for identifying speech characteristics. #### 2.2.2 Lexical richness All lexical richness scores are computed on clean transcripts. We have computed **type token ratio** (TTR) and **MATTR**[22] with window sizes of 10, 25 and 50 words. Using the intuition that redundant text is easier to compress, we have used the gzip algorithm, creating **gzip ratio** as a measurement for redundancy, dividing the length of compressed by uncompressed byte string representations. Further we use **HD-D** and **MTLD** as described in [23], which were proven to capture lexical diversity well, even under varying text length. **Word information** is defined as the entropy of the normalized token count distribution of the lemmatized tokens [24]. #### 2.2.3 Syntax All syntax scores are computed on clean transcripts. **Noun, verb and adjective ratio** are simply the ratio of the specified POS tags to the number of words in the clean transcript. The **grammar acceptance** feature is defined as the softmax probability of a RoBERTa-large model [20] fine-tuned for grammar acceptability on the CoLA Corpus [25]. Finally, we use the **mean sentence length** as an indicator of noticeable short or sprawling phrases. #### 2.2.4 Pronunciation The **word/character error rate acoustic** scores are the word/character error rates between the acoustic and clean transcripts, intuitively capturing speech intelligibility proportional to the diverging outputs of Whisper and XLSR-53. #### 2.2.5 Coherence All coherence scores are evaluated on Whisper transcripts. 
**CTRLEval** is a reference-free metric to assess the coherence of controlled text generation tasks, as detailed by Ke et al. [26]. For **word vector coherence**, we use the cosine similarity to assess the local coherence between averaged word vectors of neighboring sentences. Word vectors are taken from spaCy's [27] en_core_web_sm corpus. Finally, with the **GPT2 perplexity** score, we evaluate the perplexity of GPT2 by Radford et al. [28] with a sliding window size of 512 tokens using the standard GPT2 tokenizer. ## 3 Experiments and results ### 3.1 Experimental setup To validate our approach, we use speech samples extracted from the AphasiaBank database [8] for English. AphasiaBank comprises a collection of interviews between clinicians and subjects afflicted with aphasia, as well as healthy control subjects. These interviews are transcribed in the CHAT transcription format [7] and include timestamps for interviewer and patient speech segments. Most interviews contain a label indicating which form of aphasia the subject is suffering from. There are \(272\) individuals from the control group, \(136\) with anomic, \(80\) with Broca's, \(60\) with conduction, \(29\) with Wernicke's, \(9\) with transcortical motor, \(3\) individuals with global aphasia, and \(35\) individuals who do not exhibit symptoms of aphasia as determined by the Western Aphasia Battery (WAB-R) [29]. For \(348\) subjects, the WAB-R Aphasia Quotient (AQ) is included. We remove all speech uttered by the interviewer, as given by the timestamps from the CHAT protocols, and concatenate the resulting audio clips, which are input to our pipeline. For each speech chunk, we compute the scores from Section 2.2 as outlined in Figure 1 and average multiple score vectors from the same interview. The distance of each averaged score \(s_{i}\) to the healthy prototype of this score, estimated by \(X\sim\mathcal{N}(\mu_{s},\,\sigma_{s}^{2})\), is then used as a feature for classification; it is computed as \(|s_{i}-\mu_{s}|/\sigma_{s}\) if \(|s_{i}-\mu_{s}|>\sigma_{s}\) and set to 1 otherwise. Figure 1: Alignment, annotation and score procedure To ensure comparability with the results from Chatzoudis et al. [11], we adopt a Leave-One-Subject-Out (LOSO) cross-validation strategy for all classification experiments. We use a standard linear Support Vector Classifier (SVC) [30] with regularization parameter \(C=0.1\) and \(\text{max\_iter}=50000\) for classification. We do not use task-specific feature selection and train on all of our features. All experiments were run on a virtual machine using an NVIDIA A100 GPU. ### 3.2 Results #### 3.2.1 Aphasia versus Control To facilitate a direct comparison with Chatzoudis et al. [11], we have included the classification results of our approach for aphasic (including all subtypes) versus healthy control patients in Table 2. The outcome of this analysis reveals that our fully automatic method exhibits superior performance relative to that of [11], even when comparing ourselves to features extracted directly from the manually annotated CHAT transcriptions. Furthermore, since we classify on averaged chunks of similar length, instead of the full transcript, we are not benefiting from the discriminatory information provided by total word output. These findings provide compelling evidence of the robustness and efficacy of our fully automatic approach for real-world applications.
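For readers who want to reproduce the evaluation protocol, the snippet below sketches the classification step with scikit-learn, using the classifier settings quoted above (linear SVC, \(C=0.1\), max_iter \(=50000\)) and Leave-One-Subject-Out cross-validation. The feature matrix, labels, and subject identifiers are random placeholders rather than AphasiaBank data, and whether a LinearSVC or an SVC with a linear kernel was used is not specified in the text, so that choice is an assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import accuracy_score

# Placeholder data: one row of prototype-distance features per interview,
# a class label (e.g. control / anomic / Broca / Wernicke), and a subject id.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))     # prototype-distance features (placeholders)
y = rng.integers(0, 4, size=200)   # class labels (placeholders)
subjects = np.arange(200)          # one subject per interview in this toy example

# Leave-One-Subject-Out cross-validation with a linear SVC, as described in the text.
clf = LinearSVC(C=0.1, max_iter=50000)
pred = cross_val_predict(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print("LOSO accuracy:", accuracy_score(y, pred))
```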
#### 3.2.2 Aphasia subtype classification To validate the effectiveness and robustness of the feature extraction pipeline, we use a linear SVC to differentiate three types of aphasia and the healthy control group as a multi-class classification problem. We have used three of the four main aphasia subtypes, i.e. anomic, Wernicke's, and Broca's aphasia. We exclude global aphasia since there are only 3 people with this diagnosis in AphasiaBank. Further details on the classification performance are summarized in Table 3. #### 3.2.3 Classification results per score category Each score category should contribute meaningful information to the multi-class subtype classification problem from section 3.2.2. To this end, we trained a linear SVC that only utilizes the subset of scores from the respective score category to differentiate the three aphasia subtypes and the healthy control group. The results for the weighted average precision, recall and \(F_{1}\) scores for the aphasia subtypes and control group per score category are condensed in Table 4. #### 3.2.4 WAB-R AQ Regression The automatic estimation of the aphasia quotient (AQ) from spontaneous speech has numerous potential benefits, including progress monitoring without the need for frequent repetition of the WAB-R assessment procedures. We follow the evaluation procedure from Le et al. [32] and frame the WAB-R AQ prediction as a regression problem, with our proposed score distances as features and the AQ scores as the target labels for the \(348\) individuals, for which the AQ was included. The results are portrayed in Table 5. ## 4 Conclusions The proposed approach showed high level of robustness for identifying aphasia and aphasic subtypes from free speech recordings and presents a pipeline that can be adapted to other diseases and languages with minimal effort. We used the same pipeline to evaluate the proposed set of features for a dementia classification challenge, beating the baseline by a large margin. In the near future, we want to investigate the distance measures from healthy prototypes to build accurate and interpretable speech biomarkers for multiple diseases. Also, the optimal number of words per chunk for practical applications, the transferability of features across multiple languages and the correlation between multiple features need further investigation. \begin{table} \begin{tabular}{l l l l} \hline \hline **Score Category** & **Precision** & **Recall** & \(\mathbf{F_{1}}\) \\ \hline Fluency & \(81.8\) & \(82.0\) & \(81.6\) \\ Coherence & \(68.9\) & \(71.0\) & \(69.1\) \\ Lexical Richness & \(76.5\) & \(77.9\) & \(76.5\) \\ Syntax & \(76.9\) & \(78.3\) & \(77.4\) \\ Pronunciation & \(61.7\) & \(67.3\) & \(63.9\) \\ \hline \hline \end{tabular} \end{table} Table 4: Weighted average precision, recall and \(F_{1}\) scores for a 4-class classification problem differentiating between aphasia subtypes and the control group of the AphasiaBank dataset using only features from a single score category. \begin{table} \begin{tabular}{l l l l} \hline \hline **Method** & **Manual annotation** & **Accuracy** & \(\mathbf{F_{1}}\) \\ \hline SVC [11] & ✗ & \(93.9\) & - \\ SVC [11] & ✓ & \(97.4\) & - \\ SVC [31] & ✓ & - & \(85.9\) \\ \hline **Ours (SVC)** & ✗ & \(\mathbf{98.6}\) & \(\mathbf{98.5}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Classification metrics for aphasia versus healthy control group using linguistic features on AphasiaBank. 
\begin{table} \begin{tabular}{l l l l} \hline \hline **Class** & **Precision** & **Recall** & \(\mathbf{F_{1}}\) \\ \hline Control & \(97.5\) & \(98.5\) & \(98.0\) \\ Anomic & \(83.1\) & \(86.8\) & \(84.9\) \\ Broca & \(80.0\) & \(75.0\) & \(77.5\) \\ Wernicke & \(91.7\) & \(68.8\) & \(78.6\) \\ \hline Weighted average & \(90.6\) & \(90.6\) & \(90.6\) \\ \hline \hline \end{tabular} \end{table} Table 3: Classification metrics for a 4-class classification problem differentiating between aphasia subtypes and the control group of the AphasiaBank dataset. \begin{table} \begin{tabular}{l l l l} \hline \hline **Method** & **Manual annotation** & **PC** & **MAE** \\ \hline SVR [32] & ✗ & \(0.776\) & \(9.90\) \\ SVR [32] & ✓ & \(0.787\) & \(9.54\) \\ \hline **Ours (SVR)** & ✗ & \(\mathbf{0.815}\) & \(\mathbf{8.19}\) \\ \hline \hline \end{tabular} \end{table} Table 5: WAB-R AQ Prediction. MAE=Mean Absolute Error, PC = Pearsons’s Correlation, SVR = Support Vector Regression
2307.00864
Ground state EIT cooling of $^{171}$Yb$^+$ ion
This work proposes a scheme for deep laser cooling of $^{171}$Yb$^{+}$. The cooling is based on the effect of electromagnetically induced transparency (EIT) in a polychromatic field whose three frequency components are resonant with optical transitions of the $^2S_{1/2} \to \, ^2P_{1/2}$ line. Deep cooling down to the ground motional state in a trap allows for a significant suppression of the second-order Doppler shift in frequency standards. Moreover, there is no need to use a magnetic field, which is required for Doppler cooling of $^{171}$Yb$^{+}$ in a field with two frequency components. Cooling without a magnetic field is important for deep suppression of the quadratic Zeeman shifts of the clock transitions caused by uncontrolled residual magnetic fields.
D. S. Krysenko, O. N. Prudnikov, A. V. Taichenachev, V. I. Yudin, S. V. Chepurov, S. N. Bagaev
2023-07-03T09:06:28Z
http://arxiv.org/abs/2307.00864v2
# Ground state EIT cooling of \({}^{171}\)Yb\({}^{+}\) ion in polychromatic field ###### Abstract The work propose a scheme of deep laser cooling of \({}^{171}\)Yb\({}^{+}\). The cooling is based on the effect of electromagnetically induced transparency (EIT) in a polychromatic field with three frequency components are resonant to optical transitions of the \({}^{2}S_{1/2}\rightarrow\,^{2}P_{1/2}\) line. The deep cooling down to the ground motional state in a trap allows for a significant suppression of the second order Doppler shift in frequency standards. Moreover, there is no need to use a magnetic field, which is required for Doppler cooling of \({}^{171}\)Yb\({}^{+}\) in a field with two-frequency component. The cooling without use of magnetic field is important for deep suppression of quadratic Zeeman shifts of clock transitions from uncontrolled residual magnetic fields. ## I Introduction Laser cooling is a necessary step for modern experiments with quantum systems based on neutral atoms and ions that have wide applications, including the study of fundamental properties of cold atomic Bose and Fermi condensates [1; 2; 3; 4], implementation of quantum logic elements and quantum computing [5]. The development of modern frequency standards using cold atoms [6; 7; 8] and ions [9; 10; 11] has become highly relevant. The achieved level of accuracy in optical frequency standards, \(\Delta\nu/\nu<10^{-18}\), opens up new horizons for developments of modern fundamental researches, such as the study of the effects of Earth's gravitation on the space-time continuum [7; 12; 13], the test of fundamental constants [14; 15], verification of the general theory of relativity, Lorentz invariance of space [16; 17; 18], the search for dark matter [19; 20], etc. To achieve high precision level of frequency standards, it is necessary to take into account systematic shifts of atomic levels of different nature. Therefore, works are aimed at suppression of these shifts are highly relevant. For example, in the context of the \({}^{171}\)Yb\({}^{+}\) ion-based frequency standard, further progress can be linked to the control and suppression of systematic shifts caused by residual magnetic field, black-body radiation (BBR) shifts, and shifts related to the quadratic Doppler effect [10; 17]. However, the main challenge here is that the transition \({}^{2}S_{1/2}\rightarrow^{2}P_{1/2}\) is used for laser cooling is not closed, that require to use a laser field with at least two frequency components [21; 22; 23] (see Fig.1). In this case, a relatively large magnetic field of of \(\sim\)1-10 G is required to destroy coherent population trapping (CPT) effect appeared at the \({}^{2}S_{1/2}(F=1)\) state for the cooling stage. The laser cooling here can reach minimum temperature that corresponds to the Doppler limit \(k_{B}\,T_{D}\simeq\hbar\gamma/2\), where \(\gamma\) is the natural linewidth of the optical transition \({}^{2}S_{1/2}\rightarrow\,^{2}P_{1/2}\). The hysteresis effects during the switching off of the magnetic field is required for cooling create certain difficulties in minimizing the residual magnetic field and keeping it constant in various cycles of cooling and clock operation. The similar difficulties arise when implementing quantum logic and quantum computing elements based on \({}^{171}\)Yb\({}^{+}\) ions [24]. 
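For orientation, the Doppler limit mentioned above can be evaluated directly for the \({}^{2}S_{1/2}\rightarrow\,^{2}P_{1/2}\) cooling transition. The short sketch below uses the natural linewidth \(\gamma/2\pi=23\) MHz quoted below in the text; the resulting fraction of a millikelvin is an order-of-magnitude illustration only.

```python
import math

hbar = 1.054_571_8e-34    # J*s
k_B = 1.380_649e-23       # J/K

gamma = 2 * math.pi * 23e6            # natural linewidth of 2S1/2 -> 2P1/2 (rad/s), from the text
T_doppler = hbar * gamma / (2 * k_B)  # Doppler limit, k_B*T_D ~ hbar*gamma/2
print(f"Doppler limit: {T_doppler * 1e3:.2f} mK")  # roughly 0.55 mK
```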
In this work, we propose an alternative method of laser cooling that makes it possible to eliminate the use of a magnetic field and, in contrast to the standard [21; 22; 23], allows atoms to be cooled significantly below the Doppler limit \(T_{D}\), which allows to significantly suppress the second-order Doppler shift in frequency standard. ## II Deep laser cooling of \({}^{171}\)Yb\({}^{+}\) For laser cooling of \({}^{171}\)Yb\({}^{+}\) ion the light field with at least two frequency components have to be used [21; 22; 23] (see Fig. 1). Here, one of the frequency components is close to the resonance of \({}^{2}S_{1/2}(F=0)\rightarrow\,^{2}P_{1/2}(F=1)\) transition, and the other one to the \({}^{2}S_{1/2}(F=1)\rightarrow\,^{2}P_{1/2}(F=0)\). In this case, an additional magnetic Figure 1: The energy level structure of the hyperfine sublevels \({}^{2}S_{1/2}\) and \({}^{2}P_{1/2}\) of ion \({}^{171}\)Yb\({}^{+}\) are used for laser cooling. The solid lines represent the transitions induced by two-frequency components of the field. The wavy lines represent the main channels of spontaneous decay. The magnetic field is required here to destroy the CPT effect on the \({}^{2}S_{1/2}(F=1)\) state. field is required to destroy the CPT effect arising at the \({}^{2}S_{1/2}(F=1)\) level. Let us note that in this scheme, the laser cooling arises as a result of the action of the dissipative Doppler force on a moving ion, which leads to cooling down to the temperature of the Doppler limit [22; 23]. Deeper laser cooling, down to the ground motional state, can be achieved under conditions of resolved sideband cooling [25; 26; 27], when the ion is localized in a trap on scales smaller than the wavelength (the Lamb-Dicke parameter \(\eta=\sqrt{E_{R}/\hbar\,\omega_{osc}}\ll 1\), where \(E_{R}=\hbar^{2}k^{2}/2M\) is the recoil energy, \(M\) is the ion mass), and \(\omega_{osc}\) is the ion oscillations frequency in the trap is enough large \(\omega_{osc}\gg\gamma\) (i.e., transitions between different motional states of the ion have to be spectrally resolvable). Let us note, that these conditions are hard to fulfill for \({}^{171}\)Yb\({}^{+}\) ions \({}^{2}S_{1/2}\rightarrow\,^{2}P_{1/2}\) cooling transition, where the natural line width \(\gamma/2\pi=23\,MHz\) and the typical oscillation frequency in the trap \(\omega_{osc}/2\pi\simeq 400-600\,kHz\). The laser cooling by using the Electromagnetically Induced Transparency (EIT) technique [28] is a further development of deep laser cooling methods that do not require \(\omega_{osc}\gg\gamma\). To implement it, a three-level \(\Lambda\) system is required, in which transitions are induced by a pair of light waves. Under conditions when the detunings of light waves are equal, the atoms are pumped into a dark state that does not interact with the field, which allows to substantially suppress the heating processes associated with the emission of spontaneous field photons. Furthermore, the presence of a narrow EIT resonance, with a width much smaller than linewidth \(\gamma\) of excited state, enables cooling via two-photon transitions between different motional states of the ground levels in the \(\Lambda\)-scheme, similar to the Raman cooling technique [28; 29; 30]. The choice of the interaction scheme for the implementation of the EIT cooling of the \({}^{171}\)Yb\({}^{+}\) ion is a non-trivial task. 
In the work by [31], it was proposed to use three frequency components near the resonance of the optical transition \({}^{2}S_{1/2}(F=1)\rightarrow^{2}P_{1/2}(F=0)\), known as the double EIT scheme [32]. In this case, each frequency component determines transitions between different Zeeman levels of the ground state \(|^{2}S_{1/2},F=1,\mu=0,\pm 1\rangle\) and the excited state \(|^{2}P_{1/2},F=0,\mu=0\rangle\). In addition, to pump the \({}^{2}S_{1/2},(F=1)\) state, as well as to return to a closed interaction cycle, it is necessary to use an additional laser field resonant to the \({}^{2}S_{1/2}(F=0)\)\(to^{2}P_{1/2}(F=1)\), which significantly complicates the overall scheme of laser cooling. In this paper, to implement deep EIT cooling, as well as the preliminary Doppler cooling preceding it, we propose to use a polychromatic field of running waves with three frequencies \[\mathbf{E}(\mathbf{r},t)=Re\left\{\sum_{n=1,2,3}\mathbf{E}_{n}e^{i\mathbf{k} _{n}\mathbf{r}}e^{-i\omega_{n}t}\right\}, \tag{1}\] where \(\mathbf{E}_{n}\) are complex vectors defining the polarization and amplitude of each frequency component of the field \(n=1,2,3\). For definiteness, let us assume that the field components have linear polarizations Fig. 2(a). The wave vectors of the \(\mathbf{E}_{1}\) and \(\mathbf{E}_{3}\) components are directed along the \(x\)-axis, their polarization vectors are along the \(z\)-axis. The wave vector of the \(\mathbf{E}_{2}\) component is in the \(xz\)-plane at some angle \(\theta\) to the \(x\)-axis. The orientation of the polarization vector \(\mathbf{E}_{2}\) may vary. The frequencies of the field components are chosen so that they provide light-induced transitions between different hyperfine components of the \({}^{2}S_{1/2}\) and \({}^{2}P_{1/2}\) levels according to the scheme shown in Fig. 2 (b). ### Doppler cooling As the EIT ground-state cooling technique is applicable to ions already prepared at low temperatures [28; 29; 30], the preliminary Doppler cooling is required. For the considered field configuration (1), the effective Doppler cooling can be realized with linear codirectional polarizations of the light field components \(\mathbf{E}_{1}\,||\,\mathbf{E}_{2}\,||\,\mathbf{E}_{3}\). In [33] we carried out a detailed analysis of laser cooling in such a field. It was shown that the minimum temperature corresponds to Doppler cooling limit \[k_{B}T=\hbar\gamma/3\,, \tag{2}\] and is achieved for the low intensities, under the condition that the Rabi frequencies of each frequency component are equal \(\Omega_{1}=\Omega_{2}=\Omega_{3}\) (\(\Omega_{n}=|\mathbf{E}_{n}|d/\hbar\), \(d\) is the transition dipole moment \({}^{2}S_{1/2}\rightarrow\,^{2}P_{1/2}\)), and the detunings of each frequency component have to be chosen \[\delta_{1}=\delta_{2}=\delta_{3}=-\gamma/2\,. \tag{3}\] Here, the detunings are defined as \(\delta_{n}=\omega_{n}-\omega_{0n}\) the difference between the frequency of the field component \(\mathbf{E}_{n}\) and the frequency of the corresponding resonant transition \(\omega_{0n}\) (see Fig. 2(b)). Figure 2: Configuration of a polychromatic field formed by three running waves (a). The transitions induced by each field frequency components (b). ### Ground state EIT cooling To implement the second stage of deep laser cooling in proposed field configuration Fig. 2, we direct the polarization vector \(\mathbf{E}_{2}\) along the \(y\) axis so that \(\mathbf{E}_{2}\perp\mathbf{E}_{1,3}\). 
In this case, the interaction with light components forms a double \(\Lambda\) scheme for the Zeeman sublevels of hyperfine states \({}^{2}S_{1/2}\) and \({}^{2}P_{1/2}\) (see Fig. 3) that allows to implement EIT ground state cooling technique. For EIT cooling in the considered scheme, it is necessary to choose the detuning of the driven field component \(\mathbf{E}_{1}\) to be blue-detuned \(\delta_{1}>0\). The intensity of this component have to be chosen so that ac Stark shift of dressed states that corresponds to bare states Zeeman sublevels \(|^{2}S_{1/2},F=1,m=\pm 1\rangle\) are light-shifted upwards by amount is equal to the trap frequency \(\omega_{osc}\). The field components \(\mathbf{E}_{2}\) and \(\mathbf{E}_{1}\) drive two-photon transitions between the ground states \(|^{2}S_{1/2},F=1,m=\pm 1\rangle\) and \(|^{2}S_{1/2},F=1,m=0\rangle\), and for the detuning \(\delta_{2}=\delta_{1}\), the efficient EIT cooling down to the ground motional state can be achieved, similar to thee-level \(\Lambda\) atomic system [28; 29; 30]. The third field frequency component \(\mathbf{E}_{3}\) here plays the role of optical pumping for depopulating the state \(|^{2}S_{1/2},F=1,m=0\rangle\). The dynamics of the average vibrational quantum number \(\bar{N}=\sum_{n=0}^{\infty}nP_{n}\) (where \(P_{n}\) is the population of the ion's n-th motional state in the trap) is determined by the rate balance equation [30; 27] \[\frac{d}{dt}\bar{N}=-\left(A_{-}-A_{+}\right)\bar{N}+A_{+}\,, \tag{4}\] where the rate coefficients \[A_{\pm}=\eta^{2}\left(W_{0}+W_{\mp}\right)\,, \tag{5}\] are given by the spontaneous decays rate, and similar to [27]\(W=\gamma\,\rho^{ee}\), where \(\rho^{ee}\) is the total population of excited state \({}^{2}P_{1/2}\): \[W_{0}=W|_{\delta_{2}=\delta_{1}}\,, \tag{6}\] is spontaneous decay rate at \(\delta_{2}=\delta_{1}\), and \[W_{\pm}=W|_{\delta_{2}=\delta_{1}\pm\omega_{osc}} \tag{7}\] are spontaneous decay rates at \(\delta_{2}=\delta_{1}\pm\omega_{osc}\). Ion cooling is achieved under conditions \(A_{-}>A_{+}\). In this case, the stationary solution of the equation (4) has the form: \[\bar{N}_{f}=\frac{A_{+}}{A_{-}-A_{+}}=\frac{W_{0}+W_{-}}{W_{+}-W_{-}}\,, \tag{8}\] and determines the minimum laser cooling temperature of the ion in the trap. The Lamb-Dicke parameter for two-photon transitions is determined by the difference between the wave vectors \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\) \[\eta=|(\mathbf{k}_{1}-\mathbf{k}_{2})\cdot\mathbf{e}_{m}|\,\sqrt{\frac{\hbar }{2M\omega_{osc}}}\,, \tag{9}\] where \(\mathbf{e}_{m}\) denotes the unit vector describing the oscillation direction of the mode to be cooled. Thus, by varying the angle \(\theta\) between the wave vectors \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\) (see Fig.2), as well as the overall direction of the waves relative to the principal axes of the trap, it is possible to control the laser cooling rate for different motional modes. The relations for the spontaneous decay rates \(W\), can be obtained by solving the density matrix (Bloch equations) for Yb ion [33]. For the field configuration Fig. 2, where \(\mathbf{E}_{2}\perp\mathbf{E}_{1,3}\), we get the following expression for the total population of the excited state \({}^{2}P_{1/2}\) and respectively \(W\): \[W=\gamma\frac{108\,\Delta^{2}\,\Omega_{1}^{2}\,\Omega_{2}^{2}}{D}\,, \tag{10}\] where \(\Delta=\delta_{2}-\delta_{1}\), Figure 3: The Zeeman sublevels of hyperfine states \({}^{2}S_{1/2}\) and \({}^{2}P_{1/2}\) of \({}^{171}\)Yb\({}^{+}\). 
The transitions induced by the frequency components of the field are indicated by double arrows (green - transitions driven by the \(\mathbf{E}_{1}\) component, blue - by the \(\mathbf{E}_{2}\) component, and red - by the \(\mathbf{E}_{3}\) component of the light field). Wavy arrows indicate transitions caused by spontaneous decay. Here \(\delta_{1}\), \(\delta_{2}\), \(\delta_{3}\) are the corresponding detunings of the frequency components of the field. Here \(S_{3}=\Omega_{3}^{2}/(\gamma^{2}+4\,\delta_{2}^{2})\) is the saturation parameter of the \({\bf E}_{3}\) component. For \(\Omega_{1}\gg\Omega_{2}\) and \(\delta_{1}\gg\gamma\) (\(\delta_{1}>0\)), the scattering rate has the form shown in Fig. 4, which is typical of EIT laser cooling [27]. As can be seen from Fig. 4, the condition \(\delta_{2}=\delta_{1}\) corresponds to CPT. The narrow peak at detuning \(\delta_{2}=\left(\sqrt{\Omega_{1}^{2}/3+\delta_{1}^{2}}+\delta_{1}\right)/2\) corresponds to the two-photon resonance. Thus, the condition under which the peak position corresponds to \(\delta_{2}=\delta_{1}+\omega_{osc}\) leads to the highest laser cooling rate and the lowest temperature [27; 28]. This condition determines the intensity of the \({\bf E}_{1}\) component for a given \(\delta_{1}\) and trap frequency \(\omega_{osc}\): \[\Omega_{1}=2\,\sqrt{3}\sqrt{\omega_{osc}(\delta_{1}+\omega_{osc})}\,. \tag{12}\] The obtained analytical expression (10) allows us to estimate the laser cooling limit of the \({}^{171}\)Yb\({}^{+}\) ion in the proposed EIT scheme. Thus, the average population of vibrational states (4) in the limit \(\Omega_{1}\gg\Omega_{2}\) takes the form \[\bar{N}_{f}=\frac{6\,\Omega_{2}^{2}+S_{3}\gamma^{2}}{16\,\delta_{1}^{2}\,S_{3}} \tag{13}\] which, in the limit of a low-intensity \({\bf E}_{2}\) component, specifically \(\Omega_{2}\ll\gamma\sqrt{S_{3}/6}\), leads to \[\bar{N}_{f}=\frac{\gamma^{2}}{16\,\delta_{1}^{2}}\,. \tag{14}\] The obtained expression (14) corresponds to the EIT cooling limit in the standard \(\Lambda\) scheme [27]. Therefore, deep laser cooling of the Yb ion, \(\bar{N}_{f}\ll 1\), is achieved under the condition \(\delta_{1}\gg\gamma\). ## III Conclusion This work proposes a scheme for ground state laser cooling of \({}^{171}\)Yb\({}^{+}\) ions that does not require the use of a magnetic field. For laser cooling, a polychromatic configuration of the light field is used, consisting of three monochromatic running waves resonant with optical transitions of the \({}^{2}S_{1/2}\rightarrow{}^{2}P_{1/2}\) line. For the first stage of Doppler cooling, the light frequency components are running waves with codirectional linear polarizations. In this case, each of the frequency components of the field exerts a mechanical action on the ion, which finally leads to cooling down to the Doppler-limit temperature. In our case, for a trap with \(\omega_{osc}/2\pi\simeq 600\) kHz, this temperature corresponds to the average vibrational quantum number \(\bar{N}\simeq 20\). To implement the second stage of deep laser cooling to the motional ground state, i.e. \(\bar{N}\ll 1\), the polarization of one of the frequency components has to be oriented at an angle of \(90^{o}\) relative to the others. Excluding the magnetic field from the laser cooling-clock operation cycle, on the one hand, makes it possible to reduce the cycle time by eliminating the time interval required to turn off and attenuate the magnetic field used in the standard two-frequency-component cooling scheme.
Reducing the cycle time contributes to faster accumulation of measurement statistics in optical frequency standards. On the other hand, the absence of the need to use a magnetic field allows for more accurate control of the residual magnetic field and eliminates its fluctuations between measurement cycles, which is important for further increasing the accuracy of optical standards based on \({}^{171}\)Yb\({}^{+}\). In addition, deep ground-state cooling of the ion to \(\bar{N}<1\) significantly suppresses the second-order Doppler shift, to a level below \(\Delta\nu/\nu<10^{-19}\), which makes it possible to remove it from the uncertainty budget of frequency standards based on \({}^{171}\)Yb\({}^{+}\). Note that the suggested method of ground-state cooling is also important for the development of quantum logic elements based on cold \({}^{171}\)Yb\({}^{+}\) ions.
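As a numerical illustration of the conditions derived above, the sketch below evaluates the drive Rabi frequency of Eq. (12) and the cooling limit of Eq. (14) for the trap frequency quoted in the text (\(\omega_{osc}/2\pi\simeq 600\) kHz) and an arbitrarily chosen detuning \(\delta_{1}=10\gamma\); the numbers are illustrative only, not results of this work.

```python
import math

two_pi = 2 * math.pi
gamma = two_pi * 23e6       # linewidth of the cooling transition (rad/s), from the text
w_osc = two_pi * 600e3      # trap frequency (rad/s), from the text
delta1 = 10 * gamma         # illustrative choice of blue detuning, delta1 >> gamma

# Eq. (12): Rabi frequency of the driving component E1
omega1 = 2 * math.sqrt(3) * math.sqrt(w_osc * (delta1 + w_osc))

# Eq. (14): EIT cooling limit for the mean vibrational quantum number
n_final = gamma**2 / (16 * delta1**2)

print(f"Omega_1 / 2pi = {omega1 / two_pi / 1e6:.1f} MHz")  # about 41 MHz
print(f"N_final       = {n_final:.1e}")                    # well below 1: ground-state cooling
```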
2310.04015
Anonymous Learning via Look-Alike Clustering: A Precise Analysis of Model Generalization
While personalized recommendation systems have become increasingly popular, ensuring user data protection remains a top concern in the development of these learning systems. A common approach to enhancing privacy involves training models using anonymous data rather than individual data. In this paper, we explore a natural technique called \emph{look-alike clustering}, which involves replacing sensitive features of individuals with the cluster's average values. We provide a precise analysis of how training models using anonymous cluster centers affects their generalization capabilities. We focus on an asymptotic regime where the size of the training set grows in proportion to the feature dimension. Our analysis is based on the Convex Gaussian Minimax Theorem (CGMT) and allows us to theoretically understand the role of different model components in the generalization error. In addition, we demonstrate that in certain high-dimensional regimes, training over anonymous cluster centers acts as a regularization and improves the generalization error of the trained models. Finally, we corroborate our asymptotic theory with finite-sample numerical experiments, where we observe a perfect match when the sample size is only of the order of a few hundred.
Adel Javanmard, Vahab Mirrokni
2023-10-06T04:52:46Z
http://arxiv.org/abs/2310.04015v3
# Anonymous Learning via Look-Alike Clustering: A Precise Analysis of Model Generalization ###### Abstract While personalized recommendations systems have become increasingly popular, ensuring user data protection remains a top concern in the development of these learning systems. A common approach to enhancing privacy involves training models using anonymous data rather than individual data. In this paper, we explore a natural technique called _look-alike clustering_, which involves replacing sensitive features of individuals with the cluster's average values. We provide a precise analysis of how training models using anonymous cluster centers affects their generalization capabilities. We focus on an asymptotic regime where the size of the training set grows in proportion to the features dimension. Our analysis is based on the Convex Gaussian Minimax Theorem (CGMT) and allows us to theoretically understand the role of different model components on the generalization error. In addition, we demonstrate that in certain high-dimensional regimes, training over anonymous cluster centers acts as a regularization and improves generalization error of the trained models. Finally, we corroborate our asymptotic theory with finite-sample numerical experiments where we observe a perfect match when the sample size is only of order of a few hundreds. ## 1 Introduction Look-alike modeling in machine learning encompasses a range of techniques that focus on identifying users who possess similar characteristics, behaviors, or preferences to a specific target individual. This approach primarily relies on the principle that individuals with shared attributes are likely to exhibit comparable interests and behaviors. By analyzing the behavior of these look-alike users, look-alike modeling enables accurate predictions for the target user. This technique has been widely used in various domains, including targeted marketing and personalized recommendations, where it plays a crucial role in enhancing user experiences and driving tailored outcomes [1, 2, 3, 4]. In this paper, we use look-alike clustering for a different purpose, namely to anonymize sensitive information of users. Consider a supervised regression setup where the training set contains \(n\) pairs \((\mathbf{x}_{i},\mathbf{y}_{i})\), for \(i\in[n]\), with \(y_{i}\in\mathbb{R}\) denoting the response and \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) representing a high-dimensional vector of features. We consider two groups of features: sensitive features, which contain some personal information about users and should be protected from the leaner, and the non-sensitive features. We assume that that the learner has access to a clustering structure on users, which is non-private information (e.g. based on non-sensitive features or other non-sensitive data set on users). We propose a look-alike clustering approach, where we anonymize the individuals' sensitive features by replacing them with the cluster's average values. Only the anonymized dataset will be shared with the learner who then uses it to train a model. We refer to Figure 1 for an illustration of this approach. Note that the learner never gets access to the individuals' sensitive features and so this approach is safe from re-identification attacks where the learner is given access to the pool of individuals' sensitive information (up to permutation) and may use the non-sensitive features to re-identify the users. 
Also note that since a common representation (average sensitive features) is used for all the users in a cluster, this approach offers \(m\)-anonymity provided that each cluster is of size at least \(m\) (minimum size clustering) [10]. Minimum size clustering has received an increased attention mainly as a tool for anonymization and when privacy considerations are in place [1, 1, 2]. A particular application is for providing anonymity for user targeting in online advertising with the goal of replacing the use of third-party cookies with a more privacy-respecting entity [1]. There are a variety of approximation algorithms for clustering with minimum size constraint [14, 1, 1, 2], as well as parallel and dynamic implementation [1]. In this paper, we focus on linear regression and derive a precise characterization of model generalization1 using the look-alike clustering approach, in the so-called _proportional regime_ where the size of training set grows in proportion to the number of parameters (which for the linear regression is equal to the number of features). The proportional regime has attracted a significant attention as overparametrized models have become greatly prevalent. It allows to understand the effect under/overparametrization in feature-rich models, providing insights to several intriguing phenomena, including double-descent behavior in the generalization error [15, 16, 17]. Footnote 1: the ability of the model to generalize to new, unseen data from the same distribution as the training data Our precise asymptotic theory allows us to demystify the effect of different factors on the model generalization under look-alike clustering, such as the role of cluster size, number of clusters, signal-to Figure 1: Schematic illustration of look-alike clustering on features data. Within each cluster, the sensitive features of users are replaced by a common look-alike representation (center of the cluster). In this example, \(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\) represent the average of the sensitive features vectors for users in cluster 1, 2, 3. noise ratio of the model as well as the strength of sensitive and non-sensitive features. A key tool in our analysis is a powerful extension of Gordon's Gaussian process inequality [14] known as the Convex Gaussian Minimax Theorem (CGMT), which was developed in [13] and has been used for studying different learning problems; see e.g, [12, 11, 13, 14, 15]. Initially, it might be presumed that look-alike clustering would hinder model generalization by suppressing sensitive features of individuals, suggesting a possible tradeoff between anonymity (privacy) and model performance. However, our analysis uncovers scenarios in which look-alike clustering actually enhances model generalization! We will develop further insights on these results by arguing that the proposed look-alike clustering can serve as a form of regularization, mitigating model overfitting and consequently improving the model generalization. Before summarizing our key contributions in this paper, we conclude this section by discussing some of the recent work on the tradeoff between privacy and model generalization at large. An approach to study such potential tradeoff is via the lens of memorization. Modern deep neural networks, with remarkable generalization property, operate in the overparametrized regime where there are more tunable parameters than the number of training samples. 
Such overparametrized models tend to interpolate the training data and are known to fit well even random labels [13, 14]. Similar phenomenon has been observed in other models, such as random forest [12], Adaboost [15, 16], and kernel methods [1, 1]. Beyond label memorization, [13] studies setting where learning algorithms with near-optimal generalization must encode most of the information about the entire high-dimensional (and high-entropy) covariates of the training examples. Clearly, memorization of training data imposes significant privacy risks when this data contains sensitive personal information, and therefore these results hint to a potential trade-off between privacy protection and model generalization [17, 18, 19]. Lastly, [10] studies settings where data is sampled from a mixture of subpopulations, and shows that label memorization is _necessary_ for achieving near-optimal generalization error, whenever the distribution of subpopulation frequencies is long-tailed. Intuitively, this corresponds to datasets with many small distinct subpopulations. In order to predict more accurately on a subpopulation from which only a very few examples are observed, the learning algorithm needs to memorize the labels of those examples. ### Summary of contributions We consider a linear regression setting for response variable \(y\) given feature \(\mathbf{x}\), and posit a Gaussian Mixture Model on the features to model the clustering structure on the samples. We focus on the high-dimensional asymptotic regime where the number of training samples \(n\), the dimension of sensitive features (\(p\)), and the dimension of non-sensitive features (\(d-p\)) grow in proportion (\(p/n\rightarrow\psi_{p}\) and \(d/n\rightarrow\psi_{d}\), for some constants \(0<\psi_{p}\leq\psi_{d}\)). Asymptotic analysis in this particular regime, characterized by a fixed sample size to feature size ratio, has recently garnered significant attention due to its relevance to the regime where modern neural networks operate. This analysis allows for the study of various intriguing phenomena related to both statistical properties (such as double-descent) and the tractability of optimizing the learning process in such networks [17, 18, 19], where the population analysis \(n/d\rightarrow\infty\) fails to capture. Let \(\mathcal{T}^{n}=\{(\mathbf{x}_{i},y_{i}),i\in[n]\}\) denote the (unanonymized) training set and \(\mathcal{T}^{n}_{L}\) be the set obtained after replacing the sensitive features with the look-alike representations of clusters. We denote by \(\mathbf{\widehat{\theta}}\) and \(\mathbf{\widehat{\theta}}_{L}\) the min-norm estimators fit to \(\mathcal{T}^{n}\) and \(\mathcal{T}^{n}_{L}\), respectively. Under this asymptotic setting: * We provide a precise characterization of the generalization error of \(\mathbf{\widehat{\theta}}\) and \(\mathbf{\widehat{\theta}}_{L}\). Despite the randomness in data generating model, we show that in the high-dimensional asymptotic, the generalization errors of these estimators converge in probability to deterministic limits for which we provide explicit expressions. * Our characterizations reveal several interesting facts about the generalization of the estimators: * For the min-norm estimator \(\widehat{\mathbf{\theta}}\) we observe significantly different behavior in the underparametrized regime (\(\psi_{d}\leq 1\)) than in the overparametrized regime (\(\psi_{d}>1\)). 
Note that in the underparametrized regime, the min-norm estimator coincides with the standard least squares estimator. For the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) our analysis identifies the underparametrized regime as \(\psi_{d}-\psi_{p}\leq 1\) and the overparametrized regime as \(\psi_{d}-\psi_{p}>1\). * In the underparametrized regime, our analysis shows that, somewhat surprisingly, the generalization error (for both estimators) does not depend on the number or size of the clusters, nor the scaling of the cluster centers. * In the overparametrized regime, our analysis provides a precise understanding of the role of different factors, including the number of clusters, energy of cluster centers, and the alignment of the model with the constellation of cluster centers, on the generalization error. * Using our characterizations, we discuss settings where the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) has better generalization than its non-private counterpart \(\widehat{\mathbf{\theta}}\). A relevant quantity that shows up in our analysis is the ratio of the norm of the model component on the sensitive features over the noise in the response, which we refer to as signal-to-noise ratio (SNR). Using our theory, we show that if SNR is below a certain threshold, then look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) has lower generalization error than \(\widehat{\mathbf{\theta}}\). This demonstrates scenarios where anonymizing sensitive features via look-alike clustering does 'not' hinder model generalization. We give an interpretation for this result, after Theorem 5.1, by arguing that at low-SNR, look-alike clustering acts as a regularization and mitigates overfitting, which consequently improves model generalization. * In our analysis in the previous parts, we assume that the learner has access to the exact underlying clustering structure on the users, to disentangle the clustering estimation error from look-alike modeling. However, in practice the learner needs to estimate the clustering structure from data. In Section 3.5, we combine our analysis with a perturbation analysis to extend our results to the case of imperfect clustering estimation. Lastly, we refer to Section 7 for an overview of our proof techniques. ## 2 Model We consider a linear regression setting, where we are given \(n\) i.i.d pairs \((\mathbf{x}_{i},y_{i})\), where the response \(y_{i}\) is given by \[y_{i}=\langle\mathbf{x}_{i},\mathbf{\theta}_{0}\rangle+\varepsilon_{i}, \quad\varepsilon_{i}\sim\mathsf{N}(0,\sigma^{2}). \tag{2.1}\] We assume that there is a clustering structure on features \(\mathbf{x}_{i}\), \(i\in[n]\), independent from the responses. We model this structure via Gaussian-Mixture model. **Gaussian-Mixture Model (GMM) on features.** Each example \(\mathbf{x}\) belong to cluster \(\ell\in[\![k]\!]\), with probability \(\pi_{\ell}\). We let \(\mathbf{\pi}=[\pi,\pi_{2},\ldots,\pi_{k}]\in\mathbb{R}^{k}\) with \(\mathbf{\pi}\geq 0\) and \(\mathbf{1}^{\mathsf{T}}\mathbf{\pi}=1\). The cluster conditional distribution of an example \(\mathbf{x}\) in cluster \(\ell\) follows an isotropic Gaussian with mean \(\mathbf{\mu}_{\ell}\in\mathbb{R}^{d}\), namely \[\mathbf{x}=\mathbf{\mu}_{\ell}+\mathbf{z},\quad\mathbf{z}\sim\mathsf{N}(\mathbf{0}, \tau^{2}\mathbf{I}). \tag{2.2}\] By scaling the model (2.1), without loss of generality we assume \(\tau=1\). 
Writing in the matrix form, we let \[\mathbf{X}=[\mathbf{x}_{1}|\mathbf{x}_{2}|\ldots|\mathbf{x}_{n}]\in\mathbb{R}^{ d\times n},\quad\mathbf{y}=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n},\quad\mathbf{M}=[ \mathbf{\mu}_{1}|\mathbf{\mu}_{2}|\ldots|\mathbf{\mu}_{k}]\in\mathbb{R}^{d\times k}\,. \tag{2.3}\] It is also convenient to encode the cluster membership as one-hot encoded vectors \(\mathbf{\lambda}_{i}\in\mathbb{R}^{k}\), where \(\mathbf{\lambda}_{i}\) is one at entry \(\ell\) (with \(\ell\) being the cluster of example \(\mathbf{x}_{i}\)) and zero everywhere else. The GMM can then be written as \[\mathbf{X}=\mathbf{M}\mathbf{\Lambda}+\mathbf{Z}\,, \tag{2.4}\] with \(\mathbf{Z}\in\mathbb{R}^{d\times n}\) is a Gaussian matrix with i.i.d \(\mathsf{N}(0,1)\) entries, and \(\mathbf{\Lambda}\in\mathbb{R}^{k\times n}\) is the matrix obtained by stacking vectors \(\mathbf{\lambda}_{i}\) as its column. **Sensitive and non-sensitive features.** We assume that some of the features are sensitive for which we have some reservation to share with the learner and some non-sensitive features. Without loss of generality, we write it as \(\mathbf{x}=(\mathbf{x}_{\mathrm{s}},\mathbf{x}_{\mathrm{ns}})\), where \(\mathbf{x}_{\mathrm{s}}\in\mathbb{R}^{p}\) representing the sensitive features and \(\mathbf{x}_{\mathrm{ns}}\in\mathbb{R}^{d-p}\) representing the non-sensitive features. We also decompose the model \(\mathbf{\theta}_{0}\) (2.1) as \(\mathbf{\theta}_{0}=(\mathbf{\theta}_{0,\mathrm{s}},\mathbf{\theta}_{0,\mathrm{ns}})\) with \(\mathbf{\theta}_{0,\mathrm{s}}\in\mathbb{R}^{p}\) and \(\mathbf{\theta}_{0,\mathrm{ns}}\in\mathbb{R}^{d-p}\). Likewise, the cluster mean vector \(\mathbf{\mu}\) is decomposed as \(\mathbf{\mu}=(\mathbf{\mu}_{\mathrm{s}},\mathbf{\mu}_{\mathrm{ns}})\). The idea of look-alike clustering is to replace the sensitive features of an example \(\mathbf{x}_{\mathrm{s}}\) with the center of its cluster \(\mathbf{\mu}_{\mathrm{s}}\). This way, if each cluster is of size at least \(m\), then look-alike clustering offers \(m\)-anonymity. Our goal in this paper is to precisely characterize the effect of look-alike clustering on model generalization. We focus on the high-dimensional asymptotic regime, where the number of training data \(n\), and features sizes \(d,p\) grow in proportion. We formalize the high-dimensional asymptotic setting in the assumption below: **Assumption 1**.: _We assume that the number of clusters \(k\) is fixed and focus on the asymptotic regime where \(n,d,p\to\infty\) at a fixed ratio \(d/n\to\psi_{d}\) and \(p/n\to\psi_{p}\)._ To study the generalization of a model \(\mathbf{\theta}\) (performance on unseen data) via the _out-of-sample prediction risk_ defined as \(\operatorname{Risk}(\mathbf{\theta}):=\mathbb{E}[(y-\mathbf{x}^{\mathsf{T}}\mathbf{\theta}) ^{2}]\), where \((y,\mathbf{x})\) is generated according to (2.1). Our next lemma characterizes the risk when the feature \(\mathbf{x}\) is drawn from GMM. **Lemma 2.1**.: _Under the linear response model (2.1) and a GMM for features \(\mathbf{x}\), the out-of-sample prediction risk of a model \(\mathbf{\theta}\) is given by_ \[\operatorname{Risk}(\mathbf{\theta})=\sigma^{2}+\left\lVert\mathbf{\theta}_{0}-\mathbf{ \theta}\right\rVert_{\ell_{2}}^{2}+(\mathbf{\theta}_{0}-\mathbf{\theta})^{\mathsf{T}} \mathbf{M}\text{diag}(\mathbf{\pi})\mathbf{M}^{\mathsf{T}}(\mathbf{\theta}_{0}-\mathbf{\theta})\,.\] The proof of Lemma 2.1 is deferred to the supplementary. 
## 3 Main results Consider the minimum \(\ell_{2}\) norm (min-norm) least squares regression estimator of \(\mathbf{y}\) on \(\mathbf{X}\) defined by \[\widehat{\mathbf{\theta}}=(\mathbf{X}\mathbf{X}^{\mathsf{T}})^{\dagger}\mathbf{X}\mathbf{y}\,, \tag{3.1}\] where \((\mathbf{X}\mathbf{X}^{\mathsf{T}})^{\dagger}\) denotes the Moore-Penrose pseudoinverse of \(\mathbf{X}\mathbf{X}^{\mathsf{T}}\). This estimator can also be formulated as \[\widehat{\mathbf{\theta}}\coloneqq\arg\min\left\{\left\|\mathbf{\theta}\right\|_{\ell _{2}}:\,\mathbf{\theta}\text{ minimizes }\left\|\mathbf{y}-\mathbf{X}^{\mathsf{T}}\mathbf{\theta} \right\|_{\ell_{2}}\right\}\,.\] We also define the "look-alike estimator" denoted by \(\widehat{\mathbf{\theta}}_{L}\), where the sensitive features are first anonymized via look-like modeling, and then the min-norm estimator is computed based on the resulting features. Specifically the sensitive feature \(\mathbf{x}_{\mathrm{s}}\) of each sample is replaced by the center of its cluster. In our notation, writing \(\mathbf{X}^{\mathsf{T}}=[\mathbf{X}^{\mathsf{T}}_{\mathrm{s}},\mathbf{X}^{\mathsf{T}}_{ \mathrm{ns}}]\), we define \(\mathbf{X}^{\mathsf{T}}_{L}=[(\mathbf{M}_{\mathrm{s}}\mathbf{\Lambda})^{\mathsf{T}},\mathbf{X} ^{\mathsf{T}}_{\mathrm{ns}}]\) the features matrix obtained after look-alike modeling on the sensitive features. The look-alike estimator is then given by \[\widehat{\mathbf{\theta}}_{L}=(\mathbf{X}_{L}\mathbf{X}^{\mathsf{T}}_{L})^{\dagger}\mathbf{X}_ {L}\mathbf{y}\,, \tag{3.2}\] Our main result is to provide a precise characterization of the risk of look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) as well as \(\widehat{\mathbf{\theta}}\) (non-look-alike) in the asymptotic regime, as described in Assumption 1. We then discuss regimes where look-alike clustering offers better generalization. As our analysis shows there are two majorly different setting in the behavior of the look-alike estimator: * \(\psi_{d}-\psi_{p}\leq 1\). In words, the sample size \(n\) is asymptotically larger than \(d-p\), the number of non-sensitive features. This regime is called _underparametrized asymptotics_. * \(\psi_{d}-\psi_{p}\geq 1\), which is referred to as overparametrized asymptotics. Our first theorem is on the risk of look-alike estimator in the underparametrized setting. To present our result, we consider the following singular value decomposition for \(\mathbf{M}_{\mathrm{s}}\), the matrix of cluster centers restricted to sensitive features: \[\mathbf{M}_{\mathrm{s}}=\mathbf{U}_{\mathrm{s}}\mathbf{\Sigma}_{\mathrm{s}}\mathbf{V}^{ \mathsf{T}}_{\mathrm{s}},\quad\mathbf{U}_{\mathrm{s}}\in\mathbb{R}^{p\times r}, \mathbf{\Sigma}_{\mathrm{s}}\in\mathbb{R}^{r\times r},\mathbf{V}_{\mathrm{s}}\in \mathbb{R}^{k\times r}\,,\] where \(r=\mathrm{rank}(\mathbf{M}_{\mathrm{s}})\leq k\). **Theorem 3.1**.: _(**Look-alike estimator, underparametrized regime**) Consider the linear response model (2.1), where the features are coming from the GMM (2.4). Also assume that \(\|\mathbf{\theta}_{0,\mathrm{s}}\|=r_{\mathrm{s}}\) and \(\|\mathbf{U}^{\mathsf{T}}_{\mathrm{s}}\mathbf{\theta}_{0,\mathrm{s}}\|=\sqrt{\rho}r_{ \mathrm{s}}\), for all \(n,p\). 
Under Assumption 1 with \(\psi_{d}-\psi_{p}\leq 1\), the out-of-sample prediction risk of look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\), defined by (3.2), converges in probability,_ \[\mathrm{Risk}(\widehat{\mathbf{\theta}}_{L})\overset{\mathcal{P}}{\to}\frac{ \sigma^{2}+r_{\mathrm{s}}^{2}}{1-(\psi_{d}-\psi_{p})}-\rho r_{\mathrm{s}}^{2}\,.\] There are several intriguing observations about this result. In the underparametrized regime: 1. The risk depends on \(\mathbf{\theta}_{0,\mathrm{s}}\) (model component on the sensitive features), only through the norms \(\|\mathbf{\theta}_{0,\mathrm{s}}\|=r_{\mathrm{s}}\) and \(\|\mathbf{U}^{\mathsf{T}}_{\mathrm{s}}\mathbf{\theta}_{0,\mathrm{s}}\|=\sqrt{\rho}r_{ \mathrm{s}}\). Note that \(\|\mathbf{U}^{\mathsf{T}}_{\mathrm{s}}\mathbf{\theta}_{0,\mathrm{s}}\|\) measures the alignment of the model with the left singular vectors of the cluster centers. 2. The cluster structure on the non-sensitive features plays no role in the risk, nor does \(\mathbf{\theta}_{0,\mathrm{ns}}\) the model component corresponding to the non-sensitive features. 3. The cluster prior probabilities \(\mathbf{\pi}\) does not impact the risk. **Remark 3.1**.: _Specializing the result of Theorem 3.1 to the case of \(\mathbf{M}_{\mathrm{s}}=0\) (no cluster structure on the sensitive feature), we obtain that the risk of \(\widehat{\mathbf{\theta}}_{L}\) converges in probability to_ \[\mathrm{Risk}(\widehat{\mathbf{\theta}}_{L})\stackrel{{\mathcal{P}}} {{\rightarrow}}\frac{\sigma^{2}+r_{\mathrm{s}}^{2}}{1-(\psi_{d}-\psi_{p})}\,,\] _in the underparametrized regime \(\psi_{d}-\psi_{p}\leq 1\). Note that in this case, the look-alike modeling zeros the sensitive features and only regresses the response on the non-sensitive features. Therefore, \(\widehat{\mathbf{\theta}}_{L}\) is a misspecified model (using the terminology of [12]). It is worth noting that in this case, we recover the result of [12](Theorem 4) in the underparametrized regime._ We next proceed to the overparametrized setting. For technical convenience, we make some simplifying assumption, however, we believe a similar derivation can be obtained for the general case, albeit with a more involved analysis. **Assumption 2**.: _Suppose that there is no cluster structure on the non-sensitive features (\(\mathbf{M}_{\mathrm{ns}}=\mathbf{0}\)). Also, assume orthogonal, equal energy centers for the clusters on the sensitive features (\(\mathbf{M}_{\mathrm{s}}=\mu\mathbf{U}_{\mathrm{s}}\) with \(\mathbf{U}_{\mathrm{s}}^{\mathsf{T}}\mathbf{U}_{\mathrm{s}}=\mathbf{I}_{k}\))._ Our next theorem characterizes the risk of look-alike estimator in the underparametrized regime. **Theorem 3.2**.: _(**Look-alike estimator, overparametrized regime**) Consider the linear response model (2.1), where the features are coming from the GMM (2.4). Also assume that \(\|\mathbf{\theta}_{0,\mathrm{s}}\|=r_{\mathrm{s}}\), \(\|\mathbf{\theta}_{0,\mathrm{ns}}\|=r_{\mathrm{ns}}\) and \(\|\mathbf{U}_{\mathrm{s}}^{\mathsf{T}}\mathbf{\theta}_{0,\mathrm{s}}\|=\sqrt{\rho}r_{ \mathrm{s}}\), for all \(n,p,d\). 
Under Assumption 1 with \(\psi_{d}-\psi_{p}\geq 1\), and Assumption 2, the out-of-sample prediction risk of look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\), defined by (3.2), converges in probability,_ \[\mathrm{Risk}(\widehat{\mathbf{\theta}}_{L})\stackrel{{\mathcal{P}}} {{\rightarrow}}\sigma^{2}+(1-\rho)r_{\mathrm{s}}^{2}+\gamma_{0}^{2}+\mathbf{ \alpha}^{\mathsf{T}}(\mathbf{I}+\mu^{2}\text{diag}(\mathbf{\pi}))\mathbf{\alpha}\,, \tag{3.3}\] _where \(\mathbf{\pi}=(\pi_{1},\ldots,\pi_{k})\) encodes the cluster priors and \(\gamma_{0}\) and \(\mathbf{\alpha}\in\mathbb{R}^{k}\) are given by the following relations:_ \[\mathbf{\alpha} = \left(\mathbf{I}+\frac{\mu^{2}\text{diag}(\mathbf{\pi})}{\psi_{d}-\psi_{p }-1}\right)^{-1}\mathbf{U}_{\mathrm{s}}^{\mathsf{T}}\mathbf{\theta}_{0,\mathrm{s}}\,,\] \[\gamma_{0}^{2} = \frac{1}{\psi_{d}-\psi_{p}-1}\left(\sigma^{2}+r_{\mathrm{s}}^{2} +\mu^{2}\mathbf{\alpha}^{\mathsf{T}}\text{diag}(\mathbf{\pi})\mathbf{\alpha}\right)+\left( 1-\frac{1}{\psi_{d}-\psi_{p}}\right)r_{\mathrm{ns}}^{2}\,.\] **Remark 3.2**.: _When there is no cluster structure on the features (\(\mu=0\)), the look-alike modeling zeros the sensitive features and only regress the response on the non-sensitive features. Therefore, \(\widehat{\mathbf{\theta}}_{L}\) is a misspecified model (using the terminology of [12]). In this case, the risk of \(\widehat{\mathbf{\theta}}_{L}\) converges to_ \[\mathrm{Risk}(\widehat{\mathbf{\theta}}_{L})\stackrel{{\mathcal{P}}} {{\rightarrow}}\left(1+\frac{1}{\psi_{d}-\psi_{p}-1}\right)(\sigma^{2}+r_{ \mathrm{s}}^{2})+\left(1-\frac{1}{\psi_{d}-\psi_{p}}\right)r_{\mathrm{ns}}^{2}\,,\] _in the overparametrized regime \(\psi_{d}-\psi_{p}\geq 1\). This complements the characterization of misspecified model provided in Remark 3.1, and recovers the result of [12](Theorem 4) in the overparametrized regime._ As discussed in the introduction, one of the focal interest in this work is to understand cases where look-alike modeling improves generalization. In Section 5 we discuss this by comparing the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) with the min-norm estimator \(\widehat{\mathbf{\theta}}\), given by (3.1) which utilizes the full information on the sensitive features. In order to do that, we next derive a precise characterization of the risk of \(\widehat{\mathbf{\theta}}\) in the asymptotic setting. **Theorem 3.3**.: _(**min-norm estimator with no look-alike clustering**) Consider the linear response model (2.1), where the features are coming from the GMM (2.4). Under Assumption 1, the followings hold for the min-norm estimator \(\widehat{\mathbf{\theta}}\) given by (3.1):_ 1. _(underparametrized setting) If_ \(\psi_{d}\leq 1\)_, we have_ \[\mathrm{Risk}(\widehat{\mathbf{\theta}})\stackrel{{ \mathcal{P}}}{{\rightarrow}}\frac{\sigma^{2}}{1-\psi_{d}}\,.\] 2. 
_(overparametrized setting) If_ \(\psi_{d}\geq 1\)_, under Assumption_ 2_, the prediction risk of_ \(\widehat{\mathbf{\theta}}\) _converges in probability_ \[\mathrm{Risk}(\widehat{\mathbf{\theta}})\stackrel{{ \mathcal{P}}}{{\rightarrow}}\sigma^{2}+\tilde{\gamma}_{0}^{2}+ \tilde{\mathbf{\alpha}}^{\mathsf{T}}(\mathbf{I}+\mu^{2}\text{diag}(\mathbf{\pi}))\tilde{ \mathbf{\alpha}}\,,\] (3.4) _where_ \(\tilde{\gamma}_{0}\) _and_ \(\tilde{\mathbf{\alpha}}\) _are given by the following relations:_ \[\tilde{\mathbf{\alpha}} = \left(\mathbf{I}+\frac{\mathbf{I}+\mu^{2}\text{diag}(\mathbf{\pi})}{\psi_{d} -1}\right)^{-1}\mathbf{U}_{\mathrm{s}}^{\mathsf{T}}\mathbf{\theta}_{0,\mathrm{s}}\,,\] \[\tilde{\gamma}_{0}^{2} = \frac{1}{\psi_{d}-1}\left(\sigma^{2}+\tilde{\mathbf{\alpha}}^{ \mathsf{T}}(\mathbf{I}+\mu^{2}\text{diag}(\mathbf{\pi}))\tilde{\mathbf{\alpha}}\right)+ \left(1-\frac{1}{\psi_{d}}\right)\left((1-\rho)r_{\mathrm{s}}^{2}+r_{\mathrm{ ns}}^{2}\right).\] **Remark 3.3**.: _In the special case that there is no cluster structure on the features (\(\mu=0\)), Theorem 3.3 characterizes the risk of min-norm estimator under Gaussian designs, which is analyzed in [13](Theorem 1) under a more general setting (when the features have i.i.d entries with zero mean, unit variance and finite \(4+\eta\) moment for some \(\eta>0\))._ **Example 3.4**.: _(Balanced clusters) In the case of equal cluster prior (\(\pi_{1}=\ldots=\pi_{k}=1/k\)), the risk characterization (3.3) depends on \(\mathbf{\alpha}\) only through \(\left\|\mathbf{\alpha}\right\|_{\ell_{2}}\) (and likewise, the risk (3.4) depends on \(\tilde{\mathbf{\alpha}}\) only through its norm). This significantly simplifies these characterizations._ ### Extension to imperfect clustering estimation In our previous results, we assumed that the underlying cluster memberships of users are known to the learner, so we could concentrate our analysis on the impact of training using anonymous cluster centers. However, in practice, clusters should be estimated from the features and thus includes an estimation error. In our next result, we combine our previous result with a perturbation analysis to bound the risk of the look-alike estimator based on estimated clusters. Recall matrix \(\mathbf{M}\in\mathbb{R}^{d\times k}\) from (2.3), whose columns are the cluster centers. Also, recall the matrix \(\mathbf{\Lambda}\in\mathbb{R}^{k\times n}\) whose columns are the one-hot encoding of the cluster memberships. We let \(\widehat{\mathbf{M}}\) and \(\widehat{\mathbf{\Lambda}}\) indicate the estimated matrices, with the cluster estimation error rate \(\delta_{n}:=\frac{1}{\sqrt{n}}\|\mathbf{M}_{s}\mathbf{\Lambda}-\widehat{\mathbf{M}}_{s} \widehat{\mathbf{\Lambda}}\|_{2}\), where \(\|\cdot\|_{2}\) indicates spectral norm. Note that only the cluster estimation error with respect to the sensitive features matters because in the look-alike modeling only those features are anonymized (replaced by the cluster centers). **Proposition 3.4**.: _Let \(\widetilde{\boldsymbol{X}}^{\mathsf{T}}:=[(\widetilde{\boldsymbol{M}}_{s} \widetilde{\boldsymbol{\Lambda}})^{\mathsf{T}},\boldsymbol{X}_{\mathrm{ns}}^{ \mathsf{T}}]\) be the feature matrix after replacing the sensitive features with the estimated cluster centers of users. We also let \(\widetilde{\boldsymbol{\theta}}_{L}=(\widetilde{\boldsymbol{X}}_{L} \widetilde{\boldsymbol{X}}_{L}^{\mathsf{T}})^{\prime}\widetilde{\boldsymbol{X }}_{L}\boldsymbol{y}\) be the look-alike estimator based on \(\widetilde{\boldsymbol{X}}_{L}\). 
Note that \(\widetilde{\boldsymbol{\theta}}_{L}\) is the counterpart of \(\widetilde{\boldsymbol{\theta}}_{L}\) given by (3.2). Define the cluster estimation error rate \(\delta_{n}:=\frac{1}{\sqrt{n}}\|\boldsymbol{M}_{s}\boldsymbol{\Lambda}- \widetilde{\boldsymbol{M}}_{s}\widetilde{\boldsymbol{\Lambda}}\|_{2}\), and suppose that either of the following conditions hold:_ * _(i)_ \(\psi_{d}-\psi_{p}<0.5\) _and_ \(\delta<\sqrt{1-(\psi_{d}-\psi_{p})}-\sqrt{\psi_{d}-\psi_{p}}\)_._ * _(ii)_ \(\psi_{d}-\psi_{p}>2\) _and_ \(\delta<\sqrt{\psi_{d}-\psi_{p}-1}-1\)_._ _Then,_ \[\mathrm{Risk}(\widetilde{\boldsymbol{\theta}}_{L})\leq\mathrm{Risk}( \widehat{\boldsymbol{\theta}}_{L})+C\delta\,,\] _for some constant \(C\) depending on the problem parameters._ We refer to the supplementary for the proof of Proposition 3.4 and for the explicit constant \(C\). ## 4 Numerical experiments In this section, we validate our theory with numerical experiments. We consider GMM with \(k\) clusters, where the centers of clusters are given by \(\mu\boldsymbol{u}_{\ell}\), for \(\ell\in[k]\), where \(\boldsymbol{u}_{\ell}\in\mathbb{R}^{d}\) are of unit \(\ell_{2}\)-norm. Also the vectors \(\boldsymbol{u}_{\ell}\) are non-zero only on the first \(p\) entries, and their restriction to these entries form a random orthogonal constellation. Therefore, defining \(\boldsymbol{U}=[\boldsymbol{u}_{1},\ldots,\boldsymbol{u}_{k}]\), we have \(\boldsymbol{U}=\begin{bmatrix}\boldsymbol{U}_{\mathrm{s}}\\ \boldsymbol{0}\end{bmatrix}\), with \(\boldsymbol{U}_{\mathrm{s}}^{\mathsf{T}}\boldsymbol{U}_{\mathrm{s}}= \boldsymbol{I}_{k}\). In this setting there is no cluster structure on the non-sensitive features and the cluster centers on the sensitive features are orthogonal and of same norm. Recall the decomposition of the model \(\boldsymbol{\theta}_{0}=(\boldsymbol{\theta}_{0,\mathrm{s}},\boldsymbol{ \theta}_{0,\mathrm{ns}})\), with \(\boldsymbol{\theta}_{0}\) the true underlying model (2.1) and \(\boldsymbol{\theta}_{0,\mathrm{s}}\), \(\boldsymbol{\theta}_{0,\mathrm{ns}}\) the components corresponding to sensitive and non-sensitive features. We generate \(\boldsymbol{\theta}_{0,\mathrm{ns}}\in\mathbb{R}^{d-p}\) to have i.i.d standard normal entries and then normalize it to have \(\left\|\boldsymbol{\theta}_{0,\mathrm{ns}}\right\|_{\ell_{2}}=r_{\mathrm{ns}}\). For \(\boldsymbol{\theta}_{0,\mathrm{s}}\), we generate \(\boldsymbol{Z}_{1},\boldsymbol{Z}_{2}\sim\mathsf{N}(\boldsymbol{0}_{p}, \boldsymbol{I}_{p})\), independently and let \[\boldsymbol{\theta}_{0,\mathrm{s}}=r_{\mathrm{s}}\sqrt{\rho}\;\frac{\mathsf{P} \boldsymbol{U}_{\mathrm{s}}\boldsymbol{z}_{1}}{\left\|\mathsf{P}\boldsymbol{U }_{\mathrm{s}}\boldsymbol{z}_{1}\right\|_{\ell_{2}}}+r_{\mathrm{s}}\sqrt{1- \rho}\;\frac{(\boldsymbol{I}-\mathsf{P}\boldsymbol{U}_{\mathrm{s}})\boldsymbol{ z}_{2}}{\left\|(\boldsymbol{I}-\mathsf{P}\boldsymbol{U}_{\mathrm{s}})\boldsymbol{z}_{2} \right\|_{\ell_{2}}}\,,\] where \(\mathsf{P}\boldsymbol{U}_{\mathrm{s}}:=\boldsymbol{U}_{\mathrm{s}}\boldsymbol {U}_{\mathrm{s}}^{\mathsf{T}}\) is the projection onto column space of \(\boldsymbol{U}_{\mathrm{s}}\). Therefore, \(\left\|\boldsymbol{\theta}_{0,\mathrm{s}}\right\|_{\ell_{2}}=r_{\mathrm{s}}\) and \(\left\|\boldsymbol{U}_{\mathrm{s}}^{\mathsf{T}}\boldsymbol{\theta}_{0,\mathrm{ s}}\right\|_{\ell_{2}}=\sqrt{\rho}r_{\mathrm{s}}\). Note that \(\rho\) quantifies the alignment of the model with the cluster centers, confined to the sensitive features.) 
We will vary the values of \(r_{\mathrm{s}}\) and \(r_{\mathrm{ns}}\) in our experiments. We also consider the case of balanced clusters, so the cluster prior probabilities are all equal, \(\pi_{\ell}=1/k\), for \(\ell\in[k]\). We set the number of cluster \(k=3\), dimension of sensitive features \(p=200\) and the dimension of entire features vector \(d=500\). We also set \(\mu=5\) and \(\rho=0.3\). In our experiments, we vary the sample size \(n\) and plot the risk of \(\widehat{\boldsymbol{\theta}}_{L}\) and \(\widehat{\boldsymbol{\theta}}\) versus \(\psi_{d}-\psi_{p}=(d-p)/n\). We consider different settings, where we vary \(r_{\mathrm{s}}\), \(r_{\mathrm{ns}}\) and \(\sigma\) (noise variance in model (2.1)). In Figure 2, we report the results. Curves correspond to our asymptotic theory and does to the numerical simulations. (Each dot is obtained by averaging over \(20\) realizations of that configuration.) As we observe, in all scenarios our theoretical predictions are a perfect match to the empirical performance. Figure 2: Validation of theoretical characterizations of the risks. Curves correspond to (asymptotic) analytical predictions, and dots to numerical simulations (averaged over 20 realizations). In all the plots, \(d=500\), \(p=200\), \(\mu=5\), \(k=3\), \(\rho=0.3\). Left panel corresponds to the risk of \(\widehat{\mathbf{\theta}}_{L}\) and right panel corresponds to the risk of \(\widehat{\mathbf{\theta}}\). When does look-alike clustering improve generalization? In Section 3, we provided a precise characterization of the risk of look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) and its counterpart, the min-norm estimator \(\widehat{\mathbf{\theta}}\) which utilizes the full information on the sensitive features. By virtue of these characterizations, we would like to understand regimes where the look-alike clustering helps with the model generalization, and the role of different problem parameters in achieving this improvement. Notably, since the look-alike estimator offers \(m\)-anonymity on the sensitive features (with \(m\) the minimum size of clusters), our discussion here points out instances where data anonymization and model generalization are not in-conflict. We define the gain of look-alike estimator \(\Delta\) as follows: \[\Delta:=\frac{\mathrm{Risk}(\widehat{\mathbf{\theta}})}{\mathrm{Risk}(\widehat{ \mathbf{\theta}}_{L})}\] which indicates the gain obtained in generalization via look-alike clustering. For ease in presentation, we focus on the case of balanced clusters (equal priors \(\pi_{1}=\ldots=\pi_{k}=1/k\)), and consider three cases: \(\bullet\)**Case 1 (\(\psi_{d}\leq 1\)):** In this case, both \(\widehat{\mathbf{\theta}}_{L}\) and \(\widehat{\mathbf{\theta}}\) are in the underparametrized regime and Theorems 3.1 and 3.3 (a) provide simple closed-form characterization of the risks of \(\widehat{\mathbf{\theta}}_{L}\) and \(\widehat{\mathbf{\theta}}\), by which we obtain \[\Delta\stackrel{{\mathcal{P}}}{{\rightarrow}}\frac{(1-\psi_{d}) ^{-1}}{(1+r_{\mathrm{s}}^{2}/\sigma^{2})(1-\psi_{d}+\psi_{p})^{-1}-\rho r_{ \mathrm{s}}^{2}/\sigma^{2}}\,.\] Define the signal-to-noise ratio \(\mathrm{SNR}=r_{\mathrm{s}}/\sigma\). Since \(\rho\leq 1\), it is easy to see that \(\Delta\) is decreasing in the SNR. In particular, as \(\mathrm{SNR}{\rightarrow}\) 0, we have \(\Delta\rightarrow(1-\psi_{d}+\psi_{p})/(1-\psi_{d})>1\), which means the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) achieves lower risk compared to \(\widehat{\mathbf{\theta}}\). 
In Figure (a)a we plot \(\log(\Delta)\) versus \(\mathrm{SNR}\), for several values of \(\psi_{p}\). Here we set \(\psi_{d}=0.9\) and \(\rho=0.3\). As we observe in low SNR, the look-alike estimator has lower risk. Specifically, for each curve there is a threshold for the SNR, below which \(\log(\Delta)>0\). Furthermore, this threshold increases with \(\psi_{p}\), covering a larger range of \(\mathrm{SNR}\) where \(\widehat{\mathbf{\theta}}_{L}\) has better generalization. In Figure (b)b we report similar curves, where this time \(\psi_{p}=0.5\) and we consider several values of \(\rho\). As we observe, at fixed SNR the gain \(\Delta\) is increasing in \(\rho\). This is expected since \(\rho\) measures the alignment of the underlying model \(\mathbf{\theta}_{0}\) with the (left eigenvectors of) cluster centers and so higher \(\rho\) is to advantage of the look-alike estimator which uses the cluster centers instead of individuals' sensitive features. \(\bullet\)**Case 2 (\(\psi_{d}\geq 1,\psi_{d}-\psi_{p}\leq 1\)):** In this case, the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) is in the underparametrized regime, while the min-norm \(\widehat{\mathbf{\theta}}\) is in the overparametrized regime. The following theorem uses the characterizations in Theorem 3.1 and and 3.3 (b), and shows that in the low \(\mathrm{SNR}{=r_{\mathrm{s}}/\sigma}\), the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) has a positive gain. It further shows the monotonicity of the gain with respect to different problem parameters. **Theorem 5.1**.: _Suppose that \(\psi_{d}\geq 1\) and \(\psi_{d}-\psi_{p}\leq 1\), and consider the case of equal cluster priors. The gain \(\Delta\) is increasing in \(r_{\mathrm{ns}}\) and \(\rho\), and is decreasing in \(\mu^{2}/k\). Furthermore, under the following condition_ \[\mathrm{SNR}^{2}:=\left(\frac{r_{\mathrm{s}}}{\sigma}\right)^{2} \leq\frac{1+(\psi_{d}-1)^{-1}-(1-\psi_{d}+\psi_{p})^{-1}}{(1-\psi_{d}+\psi_{ p})^{-1}+\psi_{d}^{-1}-1}\,, \tag{5.1}\] _we have \(\Delta\geq 1\), for all values of other parameters (\(\mu,k,\rho,r_{\rm ns}\))._ We refer to the appendix for the proof of Theorem 5.1. **An interpretation based on regularization:** We next provide an argument to build further insight on the result of Theorem 5.1. Recall the data model (2.1), where substituting from (2.2) and decomposing over sensitive and non-sensitive features we arrive at \[y =\langle\mathbf{x}_{\rm s},\mathbf{\theta}_{\rm s}\rangle+\langle\mathbf{x}_{ \rm ns},\mathbf{\theta}_{\rm ns}\rangle+\varepsilon\] \[=\langle\mathbf{\mu}_{\rm s},\mathbf{\theta}_{\rm s}\rangle+\langle\mathbf{z} _{\rm s},\mathbf{\theta}_{\rm s}\rangle+\langle\mathbf{x}_{\rm ns},\mathbf{\theta}_{\rm ns }\rangle+\varepsilon\,.\] Note that \(\langle\mathbf{z}_{\rm s},\mathbf{\theta}_{\rm s}\rangle\sim\mathsf{N}(0,\|\mathbf{\theta }_{\rm s}\|^{2})\). At low SNR, this term is of order of the noise term \(\varepsilon\sim\mathsf{N}(0,\sigma^{2})\). Recall that the look-alike clustering approach replaces the sensitive feature \(\mathbf{x}_{\rm s}\) by the cluster center \(\mathbf{\mu}_{\rm s}\), and therefore drops the term \(\langle\mathbf{z}_{\rm s},\mathbf{\theta}_{\rm s}\rangle\) from the model during the training process. In other words, look-alike clustering acts as a form of regularization which prevents overfitting to the noisy component \(\langle\mathbf{z}_{\rm s},\mathbf{\theta}_{\rm s}\rangle\), and this will help with the model generalization, together with anonymizing the sensitive features. 
In Figure 3(a) we plot \(\log(\Delta)\) versus \(\mu\) for several values of \(r_{\rm ns}\). Here, \(\psi_{d}=2\), \(\psi_{p}=1.7\), \(\sigma=1\), \(r_{\rm s}=0.5\) and so condition (5.1) holds. As we observe \(\log(\Delta)\) is positive, decreasing in \(\mu\) and also at any fixed \(\mu\), it is increasing in \(r_{\rm ns}\), all of which are consistent with the Theorem 5.1. In Figure 3(b), we plot similar curves where this time \(r_{\rm ns}=0.2\) and we try several values of \(\rho\). As we see the look-alike estimator has larger gain \(\Delta\) at larger values of \(\rho\). \(\bullet\)**Case 3 (\(\psi_{d}-\psi_{p}\geq 1\)):** In this case, both \(\widehat{\mathbf{\theta}}_{L}\) and \(\widehat{\mathbf{\theta}}\) are in the overparametrized regime. We use Theorems 3.1 and 3.3 (b) to obtain an analytical expression for the gain \(\Delta\). Although the form is more complicated in this case, it gives non-trivial insights on the role of different parameters on the gain. Let us first focus on \(r_{\rm ns}\), the energy of the model on the non-sensitive features. Invoking the equations (3.3) and (3.4) and hiding the terms that do not depend on \(r_{\rm ns}\) in constants \(C_{1}\), \(C_{2}\) we Figure 4: Behavior of gain \(\Delta\) in the generalization of the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) over min-norm estimator \(\widehat{\mathbf{\theta}}\) as we vary \(\mu\) the energy of cluster centers. Here, \(\psi_{d}=2\), \(\psi_{p}=1.7\), \(\sigma=1\), \(k=5\), \(r_{\mathrm{s}}=0.5\). We are in the underparametrized regime for \(\widehat{\mathbf{\theta}}_{L}\) (since \(\psi_{d}-\psi_{p}\leq 1\)) and the overparametrized regime for \(\widehat{\mathbf{\theta}}\) (since \(\psi_{d}\geq 1\)). arrive at \[\Delta\stackrel{{(\mathcal{P})}}{{\rightarrow}}\frac{C_{1}+(1-\frac{1}{ \psi_{d}})r_{\text{ns}}^{2}}{C_{2}+(1-\frac{1}{\psi_{d}-\psi_{p}})r_{\text{ns}} ^{2}}\,.\] Therefore, \(\lim_{r_{\text{ns}}\rightarrow\infty}\Delta=(1-\psi_{d}^{-1})/(1-(\psi_{d}- \psi_{p})^{-1})>1\), indicating a gain for the look-alike estimator over \(\widehat{\mathbf{\theta}}\). In Figure 4(a), we plot \(\log(\Delta)\) versus \(r_{\text{ns}}\) for several values of \(\psi_{p}\). As we observe, when \(r_{\text{ns}}\) is large enough we always have a gain, which is increasing in \(\psi_{p}\). We next consider the effect of SNR = \(r_{\text{s}}/\sigma\). In Figure 4(b) we plot \(\log(\Delta)\) versus SNR, for several values of \(\psi_{p}\). Similar to the underparametrized regime, we observe that in low SNR, the look-alike estimator has better generalization (\(\log(\Delta)>0\)). ## 6 Beyond linear models In previous section, we used our theory for linear models to show that at low SNR, look-alike modeling improves model generalization. We also provided an insight for this phenomenon by arguing that look-alike modeling acts as a form of regularization and avoids over-fitting at low SNR regime. In this section we show empirically that this phenomenon also extends to non-linear models. Consider the following data generative model: \[y\sim\text{Binomial}(N,p_{\mathbf{x}}),\quad p_{\mathbf{x}}=\frac{1}{1+\exp(-(\mathbf{x}, \mathbf{\theta}_{0})+\varepsilon)}\,,\] where \(\varepsilon\sim\mathsf{N}(0,\sigma^{2})\). We construct the model \(\mathbf{\theta}_{0}=(\mathbf{\theta}_{0,\text{s}},\mathbf{\theta}_{0,\text{ns}})\) similar to the setup in Section 4. We set \(n=200\), \(d=180\), \(k=3\), \(\mu=5\), \(\sigma=1\), \(\rho=0.3\), \(r_{\text{ns}}=2\) and \(N=1000\) (number of trials in Binomial distribution). 
We vary SNR by changing \(r_{\text{s}}\) in the set \(\{0.1,0.3,\ldots,1.9\}\). The estimators \(\widehat{\mathbf{\theta}}\) and \(\widehat{\mathbf{\theta}}_{L}\) are obtained by fitting a GLM with logit link function and binomial distribution. We compute the risks of \(\widehat{\mathbf{\theta}}\) and \(\widehat{\mathbf{\theta}}_{L}\) by averaging over a test set of size \(50K\). In Figure 6, we plot the gain \(\log(\Delta)\) versus \(r_{\text{s}}\) where each data point is by averaging over 50 different realizations of data. As we observe at low SNR, \(\log(\Delta)>0\) indicating that the look-alike estimator \(\widehat{\mathbf{\theta}}_{L}\) obtains a lower risk than the min-norm estimator. ## 7 Overview of proof techniques Our analysis of the generalization error is based on an extension of Gordon's Gaussian process inequality [10], called Convex-Gaussian Minimax Theorem (CGMT) [13]. Here, we outline the general steps of this framework and refer to the supplementary for complete details and derivations. Consider the following two Gaussian processes: \[\mathbf{X}_{\mathbf{u},\mathbf{v}} :=\mathbf{u}^{\mathsf{T}}\mathbf{G}\mathbf{v}+\psi(\mathbf{u},\mathbf{v})\,,\] \[\mathbf{Y}_{\mathbf{u},\mathbf{v}} :=\left\|\mathbf{u}\right\|_{\ell_{2}}\mathbf{g}^{\mathsf{T}}\mathbf{v}+\left\| \mathbf{v}\right\|_{\ell_{2}}\mathbf{h}^{\mathsf{T}}\mathbf{u}+\psi(\mathbf{u},\mathbf{v})\,,\] where \(\mathbf{G}\in\mathbb{R}^{n\times d}\), \(\mathbf{g}\in\mathbb{R}^{n}\) and \(\mathbf{h}\in\mathbb{R}^{d}\), all have i.i.d standard normal entries. Further, \(\psi:\mathbb{R}^{d}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a continuous function, which is convex in the first argument and concave in the second argument. Given the above two processes, consider the following min-max optimization problems, which are respectively referred to as the _Primary Optimization (PO)_ and the _Auxiliary Optimization (AO)_ problems: \[\Phi_{\mathrm{PO}}(\mathbf{G}) := \min_{\mathbf{u}\in\mathbf{S}_{\mathbf{u}}}\max_{\mathbf{v}\in\mathbf{S}_{\mathbf{v}}}\mathbf{X }_{\mathbf{u},\mathbf{v}}\,, \tag{7.1}\] \[\Phi_{\mathrm{AO}}(\mathbf{g},\mathbf{h}) := \min_{\mathbf{u}\in\mathbf{S}_{\mathbf{u}}}\max_{\mathbf{v}\in\mathbf{S}_{\mathbf{v}}}\mathbf{ Y}_{\mathbf{u},\mathbf{v}}\,. \tag{7.2}\] The main result of CGMT is to connect the above two random optimization problems. As shown in [16](Theorem 3), if \(\mathbf{S}_{\mathbf{u}}\) and \(\mathbf{S}_{\mathbf{v}}\) are compact and convex then, for any \(\lambda\in\mathbb{R}\) and \(t>0\), \[\mathbb{P}\left(|\Phi_{\mathrm{PO}}(\mathbf{G})-\lambda|>t\right)\leq 2\mathbb{P} \left(|\Phi_{\mathrm{AO}}(\mathbf{g},\mathbf{h})-\lambda|>t\right)\,.\] An immediate corollary of this result (by choosing \(\lambda=\mathbb{E}[\Phi_{\mathrm{AO}}(\mathbf{g},\mathbf{h})]\)) is that if the optimal cost of the AO problem concentrates in probability, then the optimal cost of the corresponding PO problem also concentrates, in probability, around the same value. In addition, as shown in part (iii) of [16](Theorem 3), concentration of the optimal solution of the AO problem implies concentration of the optimal solution of the PO around the same value. Therefore, the two optimization are intimately connected and by analyzing the AO problem, which is substantially simpler, one can derive corresponding properties of the PO problem. The CGMT framework has been used to infer statistical properties of estimators in certain high-dimensional asymptotic regime. 
The intermediate steps in the CGMT framework can be summarized as follows: First form an PO problem in the form of (7.1) and construct the corresponding AO problem. Second, derive the point-wise limit of the AO objective in terms of a convex-concave optimization problem, over only few scalar variables. This step is called'scalarization'. Next, it is possible to establish uniform convergence of the scalarized AO to the (deterministic) min-max optimization problem using convexity conditions. Finally, by analyzing the latter deterministic problem, one can derive the desired asymptotic characterizations. Of course implementing the above steps involved problem-specific intricate calculations. Our proofs of Theorems 3.1, 3.2, 3.3 in the supplementary follow this general strategy. Figure 6: Behavior of gain \(\Delta\) versus SNR for the nonlinear model described in Section 6. At small SNR, we observe a positive gain (lower risk of look-alike estimator \(\mathbf{\widetilde{\theta}}_{L}\) compared to \(\mathbf{\widetilde{\theta}}\)). ## Acknowledgement We would like to thank Sugato Basu, Kedar Dhamdhere, Alessandro Epasto, Rezsa Farahani, Asif Islam, Omkar Muralidharan, Dustin Zelle and Peilin Zhong for helpful discussion about this work. We also thank the anonymous reviewers of NeurIPS for their thoughtful comments. This work is supported in part by the NSF CAREER Award DMS-1844481 and the NSF Award DMS-2311024.
2304.06945
Study on Soft Robotic Pinniped Locomotion
Legged locomotion is a highly promising but under-researched subfield within the field of soft robotics. The compliant limbs of soft-limbed robots offer numerous benefits, including the ability to regulate impacts, tolerate falls, and navigate through tight spaces. These robots have the potential to be used for various applications, such as search and rescue, inspection, surveillance, and more. The state-of-the-art still faces many challenges, including limited degrees of freedom, a lack of diversity in gait trajectories, insufficient limb dexterity, and limited payload capabilities. To address these challenges, we develop a modular soft-limbed robot that can mimic the locomotion of pinnipeds. By using a modular design approach, we aim to create a robot that has improved degrees of freedom, gait trajectory diversity, limb dexterity, and payload capabilities. We derive a complete floating-base kinematic model of the proposed robot and use it to generate and experimentally validate a variety of locomotion gaits. Results show that the proposed robot is capable of replicating these gaits effectively. We compare the locomotion trajectories under different gait parameters against our modeling results to demonstrate the validity of our proposed gait models.
Dimuthu D. K. Arachchige, Tanmay Varshney, Umer Huzaifa, Iyad Kanj, Thrishantha Nanayakkara, Yue Chen, Hunter B. Gilbert, Isuru S. Godage
2023-04-14T06:24:09Z
http://arxiv.org/abs/2304.06945v1
# Study on Soft Robotic Pinniped Locomotion ###### Abstract Legged locomotion is a highly promising but under-researched subfield within the field of soft robotics. The compliant limbs of soft-limbed robots offer numerous benefits, including the ability to regulate impacts, tolerate falls, and navigate through tight spaces. These robots have the potential to be used for various applications, such as search and rescue, inspection, surveillance, and more. The state-of-the-art still faces many challenges, including limited degrees of freedom, a lack of diversity in gait trajectories, insufficient limb dexterity, and limited payload capabilities. To address these challenges, we develop a modular soft-limbed robot that can mimic the locomotion of pinnipeds. By using a modular design approach, we aim to create a robot that has improved degrees of freedom, gait trajectory diversity, limb dexterity, and payload capabilities. We derive a complete floating-base kinematic model of the proposed robot and use it to generate and experimentally validate a variety of locomotion gaits. Results show that the proposed robot is capable of replicating these gaits effectively. We compare the locomotion trajectories under different gait parameters against our modeling results to demonstrate the validity of our proposed gait models. + Footnote †: 1}\)School of Computing, Jarvis College of Computing and Digital Media, DePaul University, Chicago, IL 60604, USA.Corresponding author: [email protected]\({}^{2}\)College of Engineering, Ohio State University, Columbus, OH 43210, USA. \({}^{3}\)Dyson School of Design Engineering, Faculty of Engineering, Imperial College London SW7 2BX, UK. \({}^{4}\)Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA.\({}^{5}\)Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA 70803, USA. \({}^{6}\)Department of Engineering Technology and Industrial Distribution, Texas A&M University, College Station, TX 77843, USA. ## I Introduction Soft robots are manufactured with flexible materials (e.g., elastomers, fabrics, polymers, shape memory alloy) and are mostly actuated through pneumatic and hydraulic pressure, tendons, and smart materials [1]. Soft mobile robots - a branch of the soft robot family - use compliant structures (e.g., body, limbs, etc.) to achieve locomotion. They are mostly designed to mimic the behavior (typically locomotive patterns) of biological creatures [2]. Compared to rigid mobile robots, the inherently compliant elements in soft mobile robots enable them to absorb ground impact forces without active impedance control [3]. Furthermore, their ability to deform actively and passively allows them to gain access to confined areas [4]. As a result, soft mobile robots have great potential to replace humans in performing dangerous tasks, such as nuclear site inspection [5], search and rescue operations [6], and planetary exploration [7]. Many soft-limbed robotic prototypes have been proposed to date [8]. For instance, the pneumatically actuated multi-gait robot reported in [9] uses five actuators to generate crawling and undulation gaits. However, it was only capable of preprogrammed straight locomotion without turning. The autonomous untethered quadrupped in [10] is capable of carrying the subsystems (i.e., miniature air compressors, a battery, valves, and a controller). The robot can operate under adverse environmental conditions but only supports limited gaits. 
The quadrupped in [11] can achieve various dynamic locomotion gaits such as crawling and trotting but without turning. The quadrupped in [12] presents a new approach for controlling the gaits of soft-legged robots using simple pneumatic circuits without electronic components. The need for preprocessing the gaits offer limited gait diversity. The soft robot prototypes reported in [13, 14] have stiffness-tunable limbs and are inspired by starfish, including its locomotion and water-vascular systems. But the low speed and low efficiency due to shape memory alloy actuators limit their utility. The amphibious soft robot in [15] uses highly compliant limbs with no stiffness tunability and resulting in unstable and slow locomotion. The soft-limbed hexapod proposed in [16] showed the ability to derive a variety of gaits. The hexapod appeared in [17] showed the ability to traverse challenging terrains. The large number of limbs however increases the robots' complexity at the cost of limb dexterity. The soft-limbed robot proposed in [18] uses only four limbs in spatially symmetric tetrahedral topology. But due to the use of solenoid valves - binary actuation, it Fig. 1: (A) Bioinspiration from pinnipeds (i.e., seals, sea ions, and walruses) that use fore flippers and body (or hind flipper) for terrestrial locomotion. (B) Pinniped robot in an unactuated pose. has limited control of the locomotion gaits. In addition, no analytical gait derivation approach was reported and only demonstrated preprogrammed locomotions. We propose a new soft-limbed pinniped robot to address the above limitations. We adopt a modular design approach to increase the robustness and utilize hybrid soft limbs with improved payload and dexterous capabilities to fabricate the robot. In addition, we present a systematic approach to derive novel locomotion gaits. Further, we adopt a proportional limb bending mechanism to achieve improved workspace and control. The robot validates locomotion at a 38-fold speed increase than that of the state-of-the-art robot in [18]. Our main technical contributions are: i) designing and fabricating a novel pinniped robot using hybrid soft limbs; ii) deriving a complete floating-base kinematic model; iii) employing the kinematic model to derive fundamental limb movements; iv) parameterizing limb movements to derive new, sophisticated locomotion gaits; and v) experimentally validating the locomotion gaits under varying gait parameters. ## II System Model ### _Prototype Description_ The proposed soft-limbed pinniped robot is shown in Fig. 1B. It consists of 4 identical soft limbs: Head (H), Back limb (B), Front Right limb (FR), and Front Left limb (FL). A soft limb (Fig. 2A) is actuated by three pneumatic muscle actuators (PMAs) and structurally supported by a backbone and outer shell (Fig. 2B). PMAs are fabricated using silicone tubes (Fig. 2C). PMAs are inserted into radially symmetric grooves (or channels) of the backbone structure (Figs. 2B and 2C) and further supported by 3D-printed parts shown in Figs. 2D, 2E, and 2F. We use Nylon threads to wrap PMAs in parallel to the backbone - in a way that the wrapping does not affect the bending performance - to prevent buckling upon extension during operation. This PMA and backbone arrangement results in an antagonistic actuator configuration since the backbone constrains the length change of PMAs during operation without constraining the omnidirectional bending. Further, the protective shell protects PMAs from potentially damaging environmental contacts. 
A soft limb has an effective length of 240 \(mm\), a diameter of 40 \(mm\), and a weight of 0.15 \(kg\). As shown in our previous work [19, 20, 21], this hybrid design (i.e., combining both soft and hard elements) increases the achievable stiffness range (hence payload) and provides decoupled stiffness and pose control. We connect four soft limbs using a 3D-printed tetrahedral joint (Fig. 2G) to obtain the pinniped topology (Fig. 1). Thus, the robot has 12-DoF (3-DoF per limb) and weighs 0.65 \(kg\). In pinnipeds, the bulk of the mass is distributed toward the body (i.e., back end). However, we adopt this topology with symmetric mass distribution to optimize movements in all directions. Further, its spatial symmetry enables reorientation and thus better stability. Fig. 2: (A) Assembled soft limb. (B) Rigid kinematic chain. (C) PMAs. (D) Edge cap. (E) Middle joint. (F) Upper joint. (G) Tetrahedral joint. ### _Kinematics of Soft Limbs_ Figures 3A and 3B show the schematic of a soft limb. Fig. 3: (A) Schematic of a soft limb design with actuator arrangement. (B) A view from an angle normal to the bending plane showing curve parameters. (C) Schematic of the pinniped robot. Consider any \(j\)-th limb at a given time \(t\). Let the length change of PMAs due to actuation be \(l_{ji}(t)\in\mathbb{R}\), for \(i\in\{1,2,3\}\) and \(j\in\{1,2,3,4\}\), where \(i\) and \(j\) stand for the PMA number and limb index, respectively. Thus, the joint variable vector of the \(j\)-th limb is expressed as \(\boldsymbol{q}_{j}=\left[l_{j1},l_{j2},l_{j3}\right]^{\mathsf{T}}\). The time dependency is omitted for brevity. The body coordinate system of any \(j\)-th soft limb, \(\{O_{j}\}\), is defined at the geometric center of the cross-section on one end (termed the base), with the anchor point of the first PMA - associated with the \(l_{j1}\) joint variable - coinciding with the \(+X_{j}\) axis (Fig. 3A). The remaining PMAs, with jointspace parameters \(l_{j2}\) and \(l_{j3}\), are indexed in the counterclockwise direction at \(\frac{2\pi}{3}\) angle offsets about \(+Z_{j}\) from each other, at a distance \(r_{j}\) from the origin of \(\{O_{j}\}\). When PMAs are actuated with a resulting differential pressure, the torque imbalance at either end of the soft limb causes it to bend. As is the case with prior work on similarly-arranged soft robots, due to the uniform and symmetric construction, we can approximate that the limb's neutral axis bends in a circular arc. Hence, the spatial pose of a soft limb can be parameterized by the angle subtended by the circular arc, \(\phi_{j}\), and the angle of the bending plane w.r.t. the \(+X_{j}\) axis, \(\theta_{j}\). The radius of the circular arc can be derived as \(\frac{L}{\phi_{j}}\), where \(L\in\mathbb{R}\) is the unactuated length of a PMA (Fig. 3B). Using basic arc geometry [22], the PMA length changes are related to the curve parameters as \[L+l_{ji} =\left\{\frac{L}{\phi_{j}}-r_{j}\cos\left(\frac{2\pi}{3}\left(i-1\right)-\theta_{j}\right)\right\}\phi_{j},\text{ where}\] \[l_{ji} =-r_{j}\phi_{j}\cos\left(\frac{2\pi}{3}\left(i-1\right)-\theta_{j}\right). \tag{1}\] Since the soft limb is inextensible, the length changes of the PMAs (i.e., the jointspace variables) in (1) sum to zero over all \(i\). This results in the length constraint \(l_{j1}=-\left(l_{j2}+l_{j3}\right)\), indicating that the complete limb kinematics can be expressed by two independent jointspace variables.
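To make the constant-curvature relationship in (1) concrete, the following minimal numerical sketch maps a pair of curve parameters to the three PMA length changes and checks the inextensibility constraint; the anchor radius value used here is an illustrative assumption rather than a measured parameter of the prototype.

```python
import numpy as np

def pma_length_changes(theta_j, phi_j, r_j):
    """Map the curve parameters (theta_j, phi_j) of one limb to the three PMA
    length changes l_j1, l_j2, l_j3 following Eq. (1)."""
    i = np.arange(1, 4)  # PMA indices 1, 2, 3
    return -r_j * phi_j * np.cos(2.0 * np.pi / 3.0 * (i - 1) - theta_j)

r_j = 0.015  # assumed PMA anchor radius [m], for illustration only
l = pma_length_changes(theta_j=np.deg2rad(30.0), phi_j=np.deg2rad(60.0), r_j=r_j)
print(l)                          # individual PMA length changes [m]
print(np.isclose(l.sum(), 0.0))   # inextensibility: l_j1 + l_j2 + l_j3 = 0
```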
From (1) and following [22, 23], we can derive the curve parameters (i.e., configuration space variables) in terms of the joint variables as \[\phi_{j} =\frac{2}{3r_{j}}\sqrt{\sum_{i=1}^{3}\left(l_{ji}^{2}-l_{ji}\,\,l_{j\bmod\left(i,3\right)+1}\right)},\text{ and} \tag{2a}\] \[\theta_{j} =\arctan\left\{\sqrt{3}\left(l_{j3}-l_{j2}\right),l_{j2}+l_{j3}-2l_{j1}\right\}. \tag{2b}\] We derive the homogeneous transformation matrix (HTM) for any \(j\)-th soft limb, \(\mathbf{T}_{j}\in\mathbb{SE}\left(3\right)\), as \[\mathbf{T}_{j}\left(q,\xi\right) =\mathbf{R}_{Z}\left(\theta_{j}\right)\mathbf{P}_{X}\left(\frac{L}{\phi_{j}}\right)\mathbf{R}_{Y}\left(\xi\phi_{j}\right)\mathbf{P}_{X}\left(-\frac{L}{\phi_{j}}\right)\mathbf{R}_{Z}\left(-\theta_{j}\right)\] \[=\left[\begin{array}{cc}\mathbf{R}_{j}\left(q,\xi\right)&\mathbf{p}_{j}\left(q,\xi\right)\\ 0&1\end{array}\right], \tag{3}\] where \(\mathbf{R}_{Z}\in\mathbb{SO}\left(3\right)\) and \(\mathbf{R}_{Y}\in\mathbb{SO}\left(3\right)\) define taskspace rotation matrices about \(+Z_{j}\) and \(+Y_{j}\), respectively, while \(\mathbf{P}_{X}\in\mathbb{R}^{3}\) defines the taskspace translation matrix along \(+X_{j}\). \(\mathbf{R}_{j}\in\mathbb{SO}\left(3\right)\) and \(\mathbf{p}_{j}\in\mathbb{R}^{3}\) give the homogeneous rotation and position matrices, respectively. The scalar \(\xi\in\left[0,1\right]\) defines any point along the neutral axis of the limb (Fig. 3B). Refer to [22] for more information about the derivation. ### _Inverse Kinematics of Soft Limbs_ The relationship between the curve parameters and the taskspace coordinates at the tip (i.e., \(\xi=1\)), \(\mathbf{p}_{j}\) of (3), is given by \[x_{j} =L\phi_{j}^{-1}\cos\left(\theta_{j}\right)\left\{1-\cos\left(\phi_{j}\right)\right\}, \tag{4a}\] \[y_{j} =L\phi_{j}^{-1}\sin\left(\theta_{j}\right)\left\{1-\cos\left(\phi_{j}\right)\right\},\text{ and} \tag{4b}\] \[z_{j} =L\phi_{j}^{-1}\sin\left(\phi_{j}\right), \tag{4c}\] where \(x_{j}\), \(y_{j}\), and \(z_{j}\) are the position vector elements w.r.t. the soft limb body coordinate frame, \(\left\{O_{j}\right\}\). A soft limb taskspace - obtained from the kinematic model in (3) - is a symmetric shell about the \(+Z_{j}\) axis of its body coordinate frame \(\left\{O_{j}\right\}\), as elucidated in Fig. 4A. Recall that, because of the length constraint imposed by the backbone (Sec. II-B), there are only two kinematic DoFs. Thus we can use two taskspace variables, \(x_{j}\) and \(y_{j}\), to derive the curve parameters \(\theta_{j}\) and \(\phi_{j}\). This can be done by solving (5), which maps the taskspace to the curve parameters. Note that there is no closed-form solution to (5). Thus, in this work, we utilize MATLAB's 'fmincon' constrained optimization routine to solve it. \[\theta_{j} =\arctan\left(y_{j},x_{j}\right), \tag{5a}\] \[\phi_{j}^{-1}\left\{1-\cos\left(\phi_{j}\right)\right\} =L^{-1}\sqrt{x_{j}^{2}+y_{j}^{2}}. \tag{5b}\]
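As noted above, (5) is solved with MATLAB's 'fmincon' in this work. Purely as an illustrative stand-in, the sketch below solves the same scalar equation (5b) with a SciPy root-finder and verifies it against the forward map in (4); the bracketing interval assumes bending angles below roughly \(130^{\circ}\), over which the left-hand side of (5b) is monotone in \(\phi_{j}\).

```python
import numpy as np
from scipy.optimize import brentq

L = 0.24  # unactuated PMA length [m] (Sec. II-A)

def inverse_curve_params(x_j, y_j):
    """Solve Eq. (5) for the curve parameters (theta_j, phi_j) of one limb."""
    theta_j = np.arctan2(y_j, x_j)                      # Eq. (5a)
    rhs = np.sqrt(x_j**2 + y_j**2) / L
    f = lambda phi: (1.0 - np.cos(phi)) / phi - rhs     # residual of Eq. (5b)
    phi_j = brentq(f, 1e-9, 2.3)                        # bracketing root-finder
    return theta_j, phi_j

# Round-trip check against the forward map (4a)-(4b).
theta, phi = np.deg2rad(30.0), np.deg2rad(60.0)
x = L / phi * np.cos(theta) * (1.0 - np.cos(phi))
y = L / phi * np.sin(theta) * (1.0 - np.cos(phi))
print(inverse_curve_params(x, y))   # approximately (0.5236, 1.0472)
```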
### _Complete Robot Kinematics_ Refer to the schematic of the robot shown in Fig. 3C. Utilizing (3), the HTMs of the limbs, \(\mathbf{T}_{Limb_{j}}\in\mathbb{SE}\left(3\right)\), relative to the robot coordinate frame, \(\left\{O_{R}\right\}\), located at the geometric center of the tetrahedral joint can be expressed as \[\mathbf{T}_{Limb_{1}}\left(q_{1},\xi\right) =\mathbf{T}_{1}\left(q_{1},\xi\right), \tag{6a}\] \[\mathbf{T}_{Limb_{2}}\left(q_{2},\xi\right) =\mathbf{R}_{Y}\left(\frac{\pi}{2}+\delta\right)\mathbf{R}_{Z}\left(\pi\right)\mathbf{T}_{2}\left(q_{2},\xi\right), \tag{6b}\] \[\mathbf{T}_{Limb_{3}}\left(q_{3},\xi\right) =\mathbf{R}_{Y}\left(\frac{\pi}{2}+\delta\right)\mathbf{R}_{Z}\left(\frac{5\pi}{3}\right)\mathbf{T}_{3}\left(q_{3},\xi\right), \tag{6c}\] \[\mathbf{T}_{Limb_{4}}\left(q_{4},\xi\right) =\mathbf{R}_{Y}\left(\frac{\pi}{2}+\delta\right)\mathbf{R}_{Z}\left(\frac{7\pi}{3}\right)\mathbf{T}_{4}\left(q_{4},\xi\right), \tag{6d}\] where \(\delta=19.47^{\circ}\) (Fig. 3C) is computed from the tetrahedral geometry. The complete kinematic model of the \(j\)-th limb of the robot can be obtained utilizing (6) with a floating-base coordinate frame, \(\mathbf{T}_{b}\in\mathbb{SE}\left(3\right)\), as below. \[\mathbf{T}_{Limb_{j}}\left(q_{b},q_{j},\xi\right) =\mathbf{T}_{b}\left(q_{b}\right)\mathbf{T}_{Limb_{j}}\left(q_{j},\xi\right), \tag{7}\] \[\mathbf{T}_{b}\left(q_{b}\right) =\left[\begin{array}{cc}\mathbf{R}_{Z}\left(\alpha\right)\mathbf{R}_{Y}\left(\beta\right)\mathbf{R}_{X}\left(\gamma\right)&\mathbf{p}_{b}\\ 0&1\end{array}\right]. \tag{8}\] Herein, \(q_{b}=[x_{b},y_{b},z_{b},\alpha,\beta,\gamma]\), where \([\alpha,\beta,\gamma]\) and \(\mathbf{p}_{b}=[x_{b},y_{b},z_{b}]^{\mathrm{T}}\) denote the orientation and translation variables of \(\left\{O_{R}\right\}\) relative to the global coordinate frame \(\left\{O\right\}\) (Fig. 3C). Fig. 4: (A) Taskspace of a soft limb in its body coordinate frame, (B) Pinniped terrestrial crawling with limb, body, and head movements. Fig. 5: Fundamental motion trajectory of a soft limb. ## III Locomotion Trajectory Generation ### _Fundamental Limb Motion_ The locomotion gaits derived here are inspired by the terrestrial crawling of pinnipeds (Fig. 4B). We use the limb kinematics in Sec. II to parameterize and derive a circular taskspace movement of the limb tip as the fundamental limb motion. For any \(j\)-th soft limb - shown in Fig. 5 - we define a circular trajectory of radius \(\rho\), at a distance \(d\) from the limb's origin \(\{O_{j}\}\), with period \(\tau\). At time \(t\), the tip position relative to \(\{O_{j}\}\) is given by \[x_{j}=\rho\sin\left(-\tfrac{2\pi t}{\tau}\right),\ y_{j}=\rho\cos\left(-\tfrac{2\pi t}{\tau}\right),\ z_{j}=d \tag{9}\] We evaluate (9) at uniformly distributed values \(t\in[0,\tau]\) to obtain a 100-point taskspace trajectory corresponding to the circular limb motion. We transform the taskspace trajectory to a configuration space trajectory using the inverse kinematic model described in Sec. II-C. Subsequently, (1) is used to map the configuration space trajectory \((\theta_{j},\phi_{j})\) to the jointspace trajectory (\(l_{ji}\)). ### _Effect of Center of Gravity_ The center of gravity (CoG) of the robot helps stabilize locomotion [24]. We compute the robot CoG to investigate and regulate locomotion stability. From [25], the CoG of a limb, \(\mathbf{c}_{j}\in\mathbb{R}^{3}\), relative to its body coordinate frame \(\{O_{j}\}\) is \[\mathbf{c}_{j}(q_{j})=\int_{0}^{1}\mathbf{p}_{j}(\xi,q_{j})d\xi. \tag{10}\]
Substituting \(\mathbf{p}_{j}\) in (4) into (10), \(\mathbf{c}_{j}(q_{j})\) can be derived as \[\mathbf{c}_{j}(q_{j})=\frac{L}{\phi_{j}^{2}}\begin{bmatrix}\cos\left(\theta_{j}\right)\left(\phi_{j}-\sin\left(\phi_{j}\right)\right)\\ \sin\left(\theta_{j}\right)\left(\phi_{j}-\sin\left(\phi_{j}\right)\right)\\ \left(1-\cos\left(\phi_{j}\right)\right)\end{bmatrix}. \tag{11}\] Utilizing the results in (6) and (11), the CoG relative to the robot coordinate frame, \(\{O_{R}\}\), denoted by \(\mathbf{C}_{j}\in\mathbb{R}^{3}\), can be obtained. If the mass of the \(j\)-th limb is \(m_{j}\), then the CoG of the robot relative to \(\{O_{R}\}\), \(\mathbf{C}_{R}\in\mathbb{R}^{3}\), can be written as \[\mathbf{C}_{R}\left(q_{j}\right)=\frac{1}{\sum_{j=1}^{4}m_{j}}\sum_{j=1}^{4}m_{j}\mathbf{C}_{j}(q_{j}). \tag{12}\] ### _Forward Crawling_ We generate forward crawling locomotion by simultaneously (i.e., with zero phase offset) replicating the limb motion derived in Sec. III-A in the FR and FL limbs, as illustrated in Fig. 6A. Therein, we move the robot in the \(+X\) direction by giving anticlockwise and clockwise motion trajectories to the FR and FL limbs, respectively, w.r.t. their local coordinate frames. However, achieving forward crawling is challenging as there is, unlike pinnipeds with their relatively massive bodies, no body (or support limb) to counterbalance the angular moment generated by the crawling forelimbs. Because of that, forward crawling in the proposed robot can induce instability. We circumvent this limitation by controlling the CoG position given by (12) to obtain a more stable forward crawling gait, as described below. Refer to Fig. 7A for the limb movements and CoG trajectories during a forward crawling cycle. We cyclically and proportionally bend the Head (H) limb towards the moving direction from a straight position (\(\phi=0^{\circ}\)) to a value computed using (12), \(\phi=90^{\circ}\), during a locomotion cycle (Fig. 7A). This dynamic CoG control approach stabilizes the movement by counteracting instantaneous torque imbalances. We generate an additional thrust from the Back (B) limb (located on the opposite side) by actuating it in a manner that supports forward propelling. Therein, the B limb is gradually bent in a linear trajectory against the moving floor (Fig. 7A). As a consequence, the resultant limb displacement torque increases. Readers are referred to the experimental video on forward crawling to further understand the above limb actuating mechanism. The impact of H and B limb actuation on crawling thrusts can be visualized by tracking the robot CoG and limb movements as shown in Figs. 7A and 7B. CoG\({}_{0}\) denotes the CoG trajectory when the H and B limbs are not actuated. When they are actuated, CoG\({}_{0}\) shifts towards the moving direction (\(+X\)), as noted by CoG\({}_{F}\) in Fig. 7A. Figure 7B shows the computed CoG\({}_{0}\), CoG\({}_{F}\), and crawling limb tips (X\({}_{FR}\), X\({}_{FL}\)) in the moving direction relative to \(O_{R}\). During the crawling thrust applying interval (i.e., ground contact period), the robot CoG converges and closely follows the crawling limb tips, as noted by CoG\({}_{F}\) in Fig. 7B. This causes an increase in the weight-induced torque supported by the crawling limbs (FR & FL). As a consequence, with the increase in ground-limb reaction forces, the crawling thrusts increase. Fig. 6: Limb trajectories: (A) Forward crawling, (B) Backward crawling, (C) Crawling-and-turning (Leftward), (D) In-place turning (counterclockwise). Fig. 7: In forward crawling – (A) Spatial limb displacements and computed CoG trajectories, (B) CoG components and crawling limb tip displacements along the moving direction (i.e., \(+X\) axis) relative to \(O_{R}\).
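Building on the trajectory and inverse-kinematics sketches above, the following minimal sketch composes one forward-crawling cycle in the spirit of Secs. III-A and III-C; the stride radius, offset distance, period, and the linear back-limb bend limit are illustrative assumptions rather than the values used on the prototype.

```python
import numpy as np

def circular_tip_trajectory(rho, d, tau, n_pts=100, clockwise=False):
    """Taskspace tip trajectory of Eq. (9): a circle of radius rho at offset d."""
    t = np.linspace(0.0, tau, n_pts)
    s = 1.0 if clockwise else -1.0
    x = rho * np.sin(s * 2.0 * np.pi * t / tau)
    y = rho * np.cos(s * 2.0 * np.pi * t / tau)
    z = np.full_like(t, d)
    return np.stack([x, y, z], axis=1)

# One forward-crawling cycle (illustrative parameters).
tau, rho, d = 1.0, 0.10, 0.20                                   # period [s], stride radius [m], offset [m]
fr_traj = circular_tip_trajectory(rho, d, tau)                  # FR limb: anticlockwise
fl_traj = circular_tip_trajectory(rho, d, tau, clockwise=True)  # FL limb: clockwise, zero phase offset

# Head limb: proportional bend from 0 to 90 deg over the cycle to shift the CoG
# toward +X; Back limb: gradual linear bend for additional thrust (assumed limit).
t = np.linspace(0.0, tau, 100)
head_phi = np.deg2rad(90.0) * t / tau
back_phi = np.deg2rad(45.0) * t / tau

# Each trajectory is then passed through the inverse kinematics of Sec. II-C and
# Eq. (1) to obtain the jointspace (PMA length) trajectories for actuation.
print(fr_traj.shape, fl_traj.shape, head_phi[-1], back_phi[-1])
```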
### _Backward Crawling_ Backward crawling refers to moving in the \(-X\) direction (Fig. 1B). Here, the limb motion derived in Sec. III-A is simultaneously applied to the FR and FL limbs in the opposite direction to that of forward crawling, i.e., the FR and FL limbs are given clockwise and anticlockwise motion trajectories, respectively, as illustrated in Fig. 6B. We keep the Head (H) limb bent in the \(-X\) direction (i.e., backward) to shift the robot CoG toward the FR and FL limbs for improved stability and to generate more thrust from the increased weight (reaction forces) at the limb-ground contact [18]. Concurrently, the Back (B) limb is bent upward (in the \(+Z\) direction of \(O_{R}\)) to reduce the contact surface and minimize the frictional resistance [18]. ### _Crawling-and-Turning_ Pinnipeds use peristaltic body movement to propel forward since the bulk of the body weight is distributed towards the back (body) [26]. However, the proposed soft robot has a symmetric weight distribution, and thus it is difficult to maintain stability while propelling forward. As a consequence, the robot shows limited frontal movements. Conversely, when propelling backward, the torque imbalance is countered by the body (i.e., the B limb). This allows the B limb to be used for turning only during backward movements. Therefore, we opt to achieve turning in the backward direction. To achieve turning locomotion, we additionally actuate the B limb similarly to the straight crawling limbs (FR & FL) discussed in Sec. III-C. For example, a clockwise trajectory of the B limb results in a leftward turn (Fig. 6C), while changing the direction of the B limb to anticlockwise results in a rightward turn. We replicate the B limb motion with different stride radii to control the turning effect. For example, a relatively large stride trajectory of the B limb can turn the robot efficiently (see results in Sec. IV and the experimental videos). ### _In-place Turning_ In-place turning refers to rotation about the robot's \(+Z\) axis (Fig. 1A). It is achieved by crawling all ground-contacting limbs in the same direction of rotation (clockwise/counterclockwise), as shown in Fig. 6D. Additionally, we actuate the Head (H) limb in the same direction of rotation in a circular trajectory at the same angular velocity. In that way, we shift the CoG of the Head (H) limb into the direction of rotation and support the turning. We can reverse the direction of in-place turning by reversing the direction of crawling in all limbs. ## IV Experimental Validation ### _Experimental Setup_ The experimental setup for the robot is depicted in Fig. 8A. Air pressure is supplied from an \(8-bar\) pneumatic source to digital proportional pressure regulators (ITV3050 31F3N3, SMC USA), as shown in Fig. 8B. The pressure regulators receive commands from a MATLAB Simulink Desktop Real-Time model through a data acquisition card (PCI-6703, NI USA), which sends a \(0-10\)\(V\) analog voltage signal. To make the soft limbs move and perform locomotion, the jointspace trajectories obtained in Sec. III must be converted into actuation pressure trajectories and input into the actuation setup shown in Fig. 8.
The jointspace-pressure mapping reported in [27] is used to generate the corresponding pressure inputs. The robot is tested on a carpeted floor with approximately uniform friction, as seen in Fig. 9. Fig. 8: (A) Robot actuation setup, (B) Pressure regulator assembly. ### _Testing Methodology_ We actuated each gait for \(15\)\(s\) with a \(3\)\(bar\) actuator pressure ceiling (based on the PMAs' ability to achieve the required limb deformation). The frequency set \(\{0.75,1.00,1.25\}\)\(Hz\) was chosen based on the operational bandwidth of PMAs to replicate meaningful locomotion. With three frequencies, we conducted \(54\) experiments covering \(6\) straight crawling gaits, \(6\) crawling-and-turning gaits, and \(6\) in-place turning gaits, as detailed in Secs. IV-C and IV-D. ### _Forward and Backward Crawling Gaits_ We generated a total of \(18\) combinations of forward and backward crawling locomotion trajectories, with three gaits in each direction, using three different stride radii (\(\rho_{1}=0.06\)\(m\), \(\rho_{2}=0.08\)\(m\), \(\rho_{3}=0.10\)\(m\)). Figures 9A and 9B show the progression of the robot during forward and backward crawling at the \(0.10\)\(m-1.00\)\(Hz\) stride radius-frequency combination. Complete videos of the experiments are included in our multimedia submission. To determine the robot's moving distance along the \(X\) and \(Y\) directions, we used the perspective image projection approach reported in [27, 28]. This approach utilized video feedback and floor carpet geometry data to estimate the distances. Note that some deviation from the intended gait is expected due to the performance variations of the custom-built PMAs powering the soft limbs. We present the performance of each crawling gait in terms of estimated robot speed, which is shown in Table I. The experiments revealed that the robot achieved higher speeds at larger stride radii (\(0.10\)\(m\)) and moderate actuation frequencies (\(1.00\)\(Hz\)). This is because larger crawling strides generate stronger limb displacement torques on the floor than smaller strides. In addition, moderate actuation frequencies enable air pressure to reach the PMAs in a timely manner through the long pneumatic tubes, allowing for the desired limb deformation without distortion of the torque amplitude. The highest recorded crawling speed was 16.9 \(cm/s\) (0.65 body length/second), which is a 38-fold increase over the state-of-the-art robot in [18] (0.37 \(cm/s\)). In forward crawling, the robot must perform additional limb deformations, as described in Sec. III-C, in order to maintain balance and generate additional forward propulsion. As a result, forward crawling recorded lower speeds compared to backward crawling at all times. The accompanying video further demonstrates that although forward crawling resembles pinniped locomotion, it is less efficient in maintaining forward locomotion stability. ### _Turning Gaits_ We have successfully generated crawling-and-turning gaits for backward crawling locomotion (as described in Sec. III-E). We created three leftward and three rightward turning trajectories by varying the stride radius of the B limb over the values \(\rho_{1}=0.04\ m\), \(\rho_{2}=0.06\ m\), and \(\rho_{3}=0.08\ m\). For these gaits, the FR and FL limbs were actuated at a fixed stride radius of 0.10 \(m\). For in-place turning, we produced six trajectories to represent clockwise/counterclockwise turning under three stride radii (\(\rho_{1}=0.06\ m\), \(\rho_{2}=0.08\ m\), \(\rho_{3}=0.10\ m\)).
During these gaits, all limbs, including the Head (H) limb, were actuated under the same stride radii as each crawling gait. Figures 9C and 9D show the leftward crawling-and-turning gait and the counterclockwise in-place turning gait, respectively. Fig. 9: (A) Forward crawling, (B) Backward crawling, (C) Crawling-and-turning (leftward), (D) In-place turning (counterclockwise), at 0.10 \(m-1.00\ Hz\). The performance of these trajectories is presented in Tables II and III, respectively. We experimentally measured the turn angle and \(X-Y\) floor displacement for all gaits using the method described for straight crawling in Sec. IV-C. We then calculated the angular speed per unit distance for the crawling-and-turning gaits and the angular speed for the in-place turning gaits. According to the data in Table II, the effectiveness of turning increases with the stride radius of the turning limb. Similarly, the data in Table III indicates that the robot performs well in replicating in-place turning at higher stride radii, due to the increase in the relative turn displacement torque with the applied trajectory stride radius. ## V Conclusions Soft-limbed robots have great potential for locomotion applications. We have designed a soft-limbed robot with a tetrahedral topology that mimics pinniped locomotion. The modular design approach for developing the robot was explained. Forward and inverse kinematic models for a single soft limb, as well as a complete floating-base kinematic model for the entire robot, were derived. Taskspace trajectories for the fundamental limb motion were proposed, and jointspace trajectories were obtained using the kinematic models for forward/backward crawling, crawling-and-turning, and in-place turning gaits. The performance of the pinniped robot was experimentally validated under different stride radii and actuation frequencies, and the results show that the proposed locomotion trajectories were replicated well. Further work will focus on the development of dynamic gaits and closed-loop control of pinniped locomotion.
2308.06448
Latent Random Steps as Relaxations of Max-Cut, Min-Cut, and More
Algorithms for node clustering typically focus on finding homophilous structure in graphs. That is, they find sets of similar nodes with many edges within, rather than across, the clusters. However, graphs often also exhibit heterophilous structure, as exemplified by (nearly) bipartite and tripartite graphs, where most edges occur across the clusters. Grappling with such structure is typically left to the task of graph simplification. We present a probabilistic model based on non-negative matrix factorization which unifies clustering and simplification, and provides a framework for modeling arbitrary graph structure. Our model is based on factorizing the process of taking a random walk on the graph. It permits an unconstrained parametrization, allowing for optimization via simple gradient descent. By relaxing the hard clustering to a soft clustering, our algorithm relaxes potentially hard clustering problems to tractable ones. We illustrate our algorithm's capabilities on a synthetic graph, as well as simple unsupervised learning tasks involving bipartite and tripartite clustering of orthographic and phonological data.
Sudhanshu Chanpuriya, Cameron Musco
2023-08-12T02:47:57Z
http://arxiv.org/abs/2308.06448v1
# Latent Random Steps as Relaxations of Max-Cut, Min-Cut, and More ###### Abstract Algorithms for node clustering typically focus on finding homophilous structure in graphs. That is, they find sets of similar nodes with many edges _within_, rather than _across_, the clusters. However, graphs often also exhibit heterophilous structure, as exemplified by (nearly) bipartite and tripartite graphs, where most edges occur across the clusters. Grappling with such structure is typically left to the task of _graph simplification_. We present a probabilistic model based on non-negative matrix factorization which unifies clustering and simplification, and provides a framework for modeling arbitrary graph structure. Our model is based on factorizing the process of taking a random walk on the graph. It permits an unconstrained parametrization, allowing for optimization via simple gradient descent. By relaxing the hard clustering to a soft clustering, our algorithm relaxes potentially hard clustering problems to tractable ones. We illustrate our algorithm's capabilities on a synthetic graph, as well as simple unsupervised learning tasks involving bipartite and tripartite clustering of orthographic and phonological data. ## 1 Introduction A core method of finding structure in networks is mapping nodes to some smaller set of node clusters based on structural similarity. There are various algorithms for this task of node clustering, one of the most well-known being the normalized cuts algorithm (Shi & Malik, 2000), which assigns clusters based on an eigenvector of the normalized graph Laplacian. This algorithm finds a hard clustering, where each node is mapped to exactly one cluster; soft clustering (Yu et al., 2005) relaxes this problem and instead assigns nodes to clusters probabilistically, so that each node is mapped to a categorical distribution over clusters. Clustering algorithms generally try to find groups of nodes that are in line with graph homophily, wherein edges connect nodes with similar attributes, and wedges tend to be closed ("a friend of a friend is a friend"). Only a small number of clustering algorithms can be seen as capturing heterophilous structures, such as (near) bipartiteness: for example, algorithms for the max-cut problem (Grötschel & Pulleyblank, 1981) can find approximately bipartite structure in graphs. Approaches for the closely related task of graph simplification (also called graph compression) are often more amenable than typical clustering approaches to addressing heterophilous structure. Like clustering, simplification is focused on finding structure in graphs, but with the goal of minimizing reconstruction error from a compressed representation. Algorithms for simplification work by merging edges or nodes (Toivonen et al., 2011; Garg & Jaakkola, 2019), or by approximate factorization of the adjacency matrix (Nourbakhsh et al., 2014). We present a probabilistic framework which unifies node clustering and graph simplification and is applicable to both homophilous and heterophilous structure. In particular, we propose factoring an undirected graph \(\mathbf{A}\in\mathbb{R}_{+}^{n\times n}\) into two components: a bipartite graph \(\mathbf{V}\in\mathbb{R}_{+}^{n\times m}\), which connects the \(n\) original nodes to \(m\) latent nodes, where \(m<n\), and a smaller undirected graph \(\mathbf{W}\in\mathbb{R}_{+}^{m\times m}\), which is a graph on the latent nodes.
Intuitively, this factorization approximates taking one step of a random walk on \(\mathbf{A}\) as a three step procedure: first taking one random step on \(\mathbf{V}\) from the original nodes to the latent nodes, then one random step within the latent graph \(\mathbf{W}\), and finally one random step on \(\mathbf{V}\) back from the latent to the original nodes: \[\pi(\mathbf{A})\approx\pi(\mathbf{V})\,\pi(\mathbf{W})\,\pi(\mathbf{V}^{\top}), \tag{1}\] where \(\pi\) denotes dividing each row of a matrix by its sum, yielding the random walk transition matrix corresponding to the adjacency matrix. Figure 1 illustrates this process. As we discuss in Section 3, this model permits a differentiable parametrization, allowing for fitting via gradient descent on a simple cross-entropy loss. Further, we can ensure that the transition matrix on the right-hand side is reversible, meaning that it corresponds exactly to one step of a random walk on some undirected graph \(\mathbf{B}\in\mathbb{R}_{+}^{n\times n}\). Our model allows for retrieval of this \(\mathbf{B}\) as a rank-\(m\) approximation of \(\mathbf{A}\), connecting this clustering method to graph simplification. **Demonstrative Toy Graph.** We construct a synthetic network that exhibits both homophily and heterophily as a concrete demonstration of how our model can adapt to both. Consider a union of 3 bicliques, each with 10 nodes in either set, for a total of 60 nodes. Figure 2 (left) is a plot of this graph. We can naturally cluster the nodes in at least two ways: either we find 3 homophilous clusters with most edges _within_ clusters (as in the min-cut task), or we find 2 heterophilous clusters with most edges _across_ clusters (like the max-cut task). As we discuss in Section 4, in our model, this corresponds to fixing one of the following latent graphs \(\mathbf{W}\), then finding a bipartite graph \(\mathbf{V}\) that minimizes the approximation error in Equation 1: \[\mathbf{W}_{\text{clique}}=\frac{1}{3}\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\qquad\mathbf{W}_{\text{biclique}}=\frac{1}{2}\begin{pmatrix}0 &1\\ 1&0\end{pmatrix}.\] In Figure 2 (right), we show the \(\pi(\mathbf{V})\) matrices, which map each node to a distribution over clusters, that result from both ways of clustering the graph. The one with \(\mathbf{W}_{\text{clique}}\) indeed assigns each node to three clusters, corresponding to the three bicliques, so that edges occur only within clusters; whereas the one with \(\mathbf{W}_{\text{biclique}}\) assigns each node to one of two clusters so as to split each biclique, such that edges occur only across clusters. Each method also results in a different simplified graph \(\mathbf{B}\): the former erases the biclique structure and results in three cliques; whereas the latter erases the distinction between the three disjoint bicliques, resulting in a single large biclique. The latter clustering may be more useful for mining data structure in some applications, but most algorithms only allow for the former. Our key contributions can be summarized as follows: * We present our _latent random step_ model. To our knowledge, this is the first probabilistic model for undirected graph simplification that accommodates arbitrary homophilous and heterophilous structure. * Unlike similar work, our model admits an unconstrained parametrization. 
Simple gradient descent (and its variants) on a natural probabilistic loss can be used to fit our model, allowing for flexible node clustering and low-rank approximation of a weighted graph. Modifying the latent graph \(\mathbf{W}\) varies the type of node clustering task, ranging over relaxations of potentially hard problems like \(k\)-way max-cut. * We apply our model and algorithm to real-world data by simplifying weighted graphs constructed from raw orthographic and phonological data. We find that graphs with heterophilous structure naturally arise when considering the sequences of letters and phonemes in words. From these graphs, our unsupervised algorithm finds heterophilous clusters that closely align with ground-truth labels. Figure 1: Diagram of the latent random step model. A random step on a graph \(\mathbf{A}\) with \(n\) nodes is approximated by a random step forward on a bipartite graph \(\mathbf{V}\), then a random step on a smaller graph \(\mathbf{W}\) with \(m\) nodes, then a random step back on \(\mathbf{V}\). Figure 2: The synthetic graph \(\mathbf{A}\) (left) and two different clusterings (right), for which we show \(\mathbf{B}\) (top) and \(\pi(\mathbf{V})^{\top}\) (bottom). ## 2 Related Work Our model can be represented as an approximate graph factorization of the form \(\mathbf{A}\approx\mathbf{U}\mathbf{W}\mathbf{U}^{\top}\), which has also been employed in some prior works. Yu et al. (2005) present such a model for use in soft clustering and also discuss its interpretation in terms of random walks, but they focus on the case where \(\mathbf{W}\) is diagonal, that is, the homophilous case. Perhaps the closest model to ours is that of Nourbakhsh et al. (2014), who also allow their equivalent of \(\mathbf{W}\) to be non-diagonal. However, their experiments do not explore non-homophilous clustering, and they do not work within a fully probabilistic framework as we do; among other differences, the loss they optimize is based on the Frobenius norm rather than cross-entropy. While these are the two models that are closest to ours, all three models fit under the umbrella of non-negative matrix factorization (NMF) for node clustering and graph simplification, for which there is other prior work (Ding et al., 2008; Kuang et al., 2012). Our model, along with some of the other discussed models, can be seen as a very generalized variant of the well-known stochastic block model (SBM) (Holland et al., 1983). The key differences are that: 1) nodes in the SBM are assigned to exactly one community, as opposed to our model's distributional assignments; 2) the central probability matrix in the SBM gives probabilities of nodes in two communities being connected, which is slightly different from our model, wherein the central matrix gives the proportion of edges which occur between two communities; and 3) SBMs, while capable of representing heterophilous structure, are also typically studied in the context of homophilous structure. As suggested by the use of the term 'latent' states, our model can also be seen as an application of the Hidden Markov Model (Baum and Petrie, 1966) to the process of taking a random walk on a graph; unlike in most applications of HMMs, here the analyzed process is explicitly first-order, by construction. Finally, our fitting algorithm joins much prior work as a relaxation of a computationally-hard node partitioning problem; perhaps best-known is the work of Goemans and Williamson (1995), which gives a semidefinite relaxation of the max-cut problem.
Unlike that work, we provide no theoretical guarantees of performance, though we observe good performance in experiments. On the other hand, our framework can go well beyond max-cut to unify min-cut, \(k\)-way max-cut, and more, depending on how the latent graph \(\mathbf{W}\) is set. ## 3 Methodology As stated in Section 1, we propose to approximately factorize an undirected graph \(\mathbf{A}\in\mathbb{R}_{+}^{n\times n}\) into a bipartite graph \(\mathbf{V}\in\mathbb{R}_{+}^{n\times m}\) and a smaller undirected graph \(\mathbf{W}\in\mathbb{R}_{+}^{m\times m}\). We seek \[\pi(\mathbf{A})\approx\pi(\mathbf{V})\,\pi(\mathbf{W})\,\pi(\mathbf{V}^{\top})=\pi(\mathbf{B}), \tag{2}\] where again \(\pi\) denotes dividing each row of a matrix by its sum, yielding a random walk transition matrix. \(\mathbf{B}\), which is a symmetric matrix in \(\mathbb{R}_{+}^{n\times n}\), is a rank-\(m\) reconstruction of \(\mathbf{A}\) (that is, a simplified version of \(\mathbf{A}\)); like \(\mathbf{A}\), \(\mathbf{B}\) can be seen as an undirected, weighted graph on the original \(n\) nodes. Reversibility Criterion. We first establish a condition on \(\mathbf{V}\) and \(\mathbf{W}\) for the transition matrix \(\pi(\mathbf{B})\) from Equation 2 to be reversible, that is, to correspond to a random step in an _undirected_ graph \(\mathbf{B}\). This condition is crucial for fitting the model to not only yield a clustering of the nodes (given by \(\pi(\mathbf{V})\)), but also a simplified graph \(\mathbf{B}\). Reversibility is satisfied iff there exists a diagonal matrix \(\mathbf{D}_{\mathbf{B}}\in\mathbb{R}_{+}^{n\times n}\) for which the product \(\mathbf{D}_{\mathbf{B}}\,\pi(\mathbf{B})\) is a symmetric matrix \(\mathbf{B}\). We assume that \(\mathbf{V}\) and \(\mathbf{W}\) are not just non-negative, but strictly positive; we will only parametrize such \(\mathbf{V}\) and \(\mathbf{W}\) anyway. Let \(\mathbf{D}_{\mathbf{V}}\) and \(\mathbf{D}_{\mathbf{V}}^{\prime}\) denote the diagonal matrices whose diagonal elements are the row-sums and column-sums of \(\mathbf{V}\), respectively, and let \(\mathbf{D}_{\mathbf{W}}\) denote the diagonal matrix of row-sums of \(\mathbf{W}\) (which are equal to the column-sums since \(\mathbf{W}\) is symmetric). We have reversibility if, for some \(\mathbf{D}_{\mathbf{B}}\), the following matrix is symmetric: \[\mathbf{B} =\mathbf{D}_{\mathbf{B}}\,\pi(\mathbf{V})\,\pi(\mathbf{W})\,\pi(\mathbf{V}^{\top})\] \[=\mathbf{D}_{\mathbf{B}}\,\mathbf{D}_{\mathbf{V}}^{-1}\mathbf{V}\,\mathbf{D}_{\mathbf{W}}^{-1}\mathbf{W}\,\mathbf{D}_{\mathbf{V}}^{\prime-1}\mathbf{V}^{\top}\] \[=\mathbf{D}_{\mathbf{B}}\mathbf{D}_{\mathbf{V}}^{-1}\left(\mathbf{V}\mathbf{D}_{\mathbf{W}}^{-1}\mathbf{W}\mathbf{D}_{\mathbf{V}}^{\prime-1}\mathbf{V}^{\top}\right).\] The transpose of this matrix is \[\mathbf{B}^{\top}=\left(\mathbf{V}\mathbf{D}_{\mathbf{V}}^{\prime-1}\mathbf{W}\mathbf{D}_{\mathbf{W}}^{-1}\mathbf{V}^{\top}\right)\mathbf{D}_{\mathbf{V}}^{-1}\mathbf{D}_{\mathbf{B}}.\] Note that if \(\mathbf{D}_{\mathbf{V}}^{\prime}=\mathbf{D}_{\mathbf{W}}\), then the parenthesized parts of the final expressions are equivalent. Further, with \(\mathbf{D}_{\mathbf{B}}=\mathbf{D}_{\mathbf{V}}\), the matrix is fully equal to its transpose and is therefore symmetric.
Explicitly, the matrix simplifies to the form \[\mathbf{B}=\mathbf{V}\mathbf{D}_{\mathbf{W}}^{-1}\mathbf{W}\mathbf{D}_{\mathbf{W}}^{-1}\mathbf{V}^{\top}.\] Hence reversibility is satisfied if the column-sums of the bipartite graph \(\mathbf{V}\) are equal to the degrees of the latent graph \(\mathbf{W}\). If this condition is satisfied, the degrees \(\mathbf{D}_{\mathbf{B}}\) of the reconstructed graph \(\mathbf{B}\), which corresponds to the transition matrix \(\pi(\mathbf{B})\), are exactly the row-sums of \(\mathbf{V}\). Parametrization. We can parametrize our model using two matrices of free parameters, \(\mathbf{W}_{\mathbf{p}}\in\mathbb{R}^{m\times m}\) and \(\mathbf{V}_{\mathbf{p}}\in\mathbb{R}^{n\times m}\), to represent the latent graph \(\mathbf{W}\) and the bipartite graph \(\mathbf{V}\), respectively. Let \(\sigma_{\text{mat}}\) and \(\sigma_{\text{col}}\) denote functions which take the softmax of a matrix over all elements and over each column, respectively. We first construct \(\mathbf{W}\) as follows: \[\mathbf{W}=\sigma_{\text{mat}}\left(\mathbf{W}_{\mathbf{p}}+\mathbf{W}_{\mathbf{p}}^{\top}\right),\] which ensures both the positivity and the symmetry of \(\mathbf{W}\); the softmax also ensures that all entries of \(\mathbf{W}\) sum to \(1\). Let \(\mathbf{D}_{\mathbf{W}}\) be the diagonal matrix containing the degrees of \(\mathbf{W}\). We now construct \(\mathbf{V}\) with: \[\mathbf{V}=\left(\sigma_{\text{col}}(\mathbf{V}_{\mathbf{p}})\right)\mathbf{D}_{\mathbf{W}},\] which ensures the positivity of \(\mathbf{V}\) and the reversibility criterion, namely that the column-sums of \(\mathbf{V}\) are equal to the degrees of \(\mathbf{W}\). Note that while we provide the full parametrization for generality, in the experiments in this paper, we fix \(\mathbf{W}\) and find \(\mathbf{V}\), so \(\mathbf{W}_{\mathbf{p}}\) is not used. Fitting. We can fit this model via gradient descent on a simple and natural cross-entropy loss: \[L=-\sum\nolimits_{i,j\in[n]}\left(\bar{\mathbf{A}}_{ij}\log\left(\bar{\mathbf{B}}_{ij}\right)\right), \tag{3}\] where an overline denotes dividing a matrix by the sum of all of its elements. This loss views the adjacency matrix of the original graph \(\mathbf{A}\) and that of the reconstructed graph \(\mathbf{B}\) as probability distributions over pairs of nodes. Minimizing it tries to place more mass in \(\mathbf{B}\) among node pairs which correspond to edges in \(\mathbf{A}\). We additionally use an \(L_{2}\) regularization penalty on the parameters. Our implementation uses PyTorch (Paszke et al., 2019) for automatic differentiation and minimizes the loss using the SciPy (Jones et al., 2001) implementation of the L-BFGS (Liu and Nocedal, 1989; Zhu et al., 1997) algorithm with default hyperparameters. The free parameters are initialized uniformly at random on \((-10^{-2},+10^{-2})\). The regularization term for the loss is set to \(10^{-1}\) times the mean squared norm of the free parameters.
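As a concrete illustration of the parametrization and loss above, here is a minimal self-contained sketch that fits \(\mathbf{V}\) on the synthetic three-biclique graph of Section 1 with the heterophilous latent graph fixed; for brevity it uses the Adam optimizer instead of the SciPy L-BFGS routine described above and omits the \(L_{2}\) penalty, so it is a sketch of the method rather than the exact implementation.

```python
import torch

def pi(M):
    """Row-normalize a non-negative matrix into a transition matrix."""
    return M / M.sum(dim=1, keepdim=True)

# Synthetic graph from Section 1: union of 3 bicliques with 10 + 10 nodes each.
blocks = []
for _ in range(3):
    B = torch.zeros(20, 20)
    B[:10, 10:] = 1.0
    B[10:, :10] = 1.0
    blocks.append(B)
A = torch.block_diag(*blocks)                        # 60 x 60 adjacency matrix

# Fixed heterophilous latent graph W (the "biclique" choice); only V is learned.
W = torch.tensor([[0.0, 0.5], [0.5, 0.0]])
D_W = torch.diag(W.sum(dim=1))

V_p = torch.empty(60, 2).uniform_(-1e-2, 1e-2).requires_grad_(True)
opt = torch.optim.Adam([V_p], lr=0.05)

A_bar = A / A.sum()
for step in range(2000):
    V = torch.softmax(V_p, dim=0) @ D_W              # column softmax times degrees of W
    B = pi(V) @ pi(W) @ pi(V.T)                      # reconstructed transition matrix
    B = torch.diag(V.sum(dim=1)) @ B                 # recover the symmetric graph B
    B_bar = B / B.sum()
    loss = -(A_bar * torch.log(B_bar + 1e-12)).sum() # cross-entropy of Eq. (3)
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(V_p, dim=0).argmax(dim=1)[:20])  # hard labels from the learned affinities
```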
## 4 Experiments on Real-World Networks To illustrate the power of our model and algorithm, we perform some experiments on real-world datasets made from English-language orthographic (spelling) and phonological (pronunciation) data. In particular, to find structure in this data in an unsupervised manner, we construct graphs from it and factorize them using the following two latent graphs to find (soft) bipartite and tripartite clusterings: \[\mathbf{W}_{\text{bi}}=\frac{1}{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\qquad\mathbf{W}_{\text{tri}}=\frac{1}{6}\begin{pmatrix}0&1&1\\ 1&0&1\\ 1&1&0\end{pmatrix}.\] Factorizing with both latent graphs corresponds to finding clusters such that intra-cluster edges are minimized: with \(\mathbf{W}_{\text{bi}}\), we find two clusters, and with \(\mathbf{W}_{\text{tri}}\), we find three clusters such that roughly one-third of the total edge weight is assigned to each of the three pairs of clusters. These clustering tasks can be seen as soft relaxations of the standard and 3-way max-cut problems. Orthographic Adjacency. We construct a graph based on spellings of common English language words. The nodes of the graph correspond to the 26 letters, and the edge weights correspond to the number of times, across all common words, that two letters are directly adjacent. For the list of words, we use the 20K most frequently-used English words as determined by the Google Books Ngram Viewer 1. Footnote 1: Specifically, we use the list on the google-10000-english repo. We perform a bipartite clustering of this graph using \(\mathbf{W}_{\text{bi}}\) based on the intuition that the spelling of words very roughly tends to alternate between vowels and consonants. See Figure 3 for the results. Indeed, the resulting clustering reflects the intuition: if we convert the soft clustering \(\pi(\mathbf{V})\) into a hard clustering by assigning each letter to the cluster for which it has higher probability, this clustering cleanly divides the letters into vowels and consonants. (The letter 'y', which can act as both, is placed into the vowel cluster, but with the least probability among the vowels.) Phonological Adjacency. We now construct a graph based on pronunciations of English language words. Using the same list of common words as for the orthographic data, we convert these words to sequences of phonemes as determined by the CMU Pronouncing Dictionary 2. We use the NLTK API (Bird et al., 2009) to access the dictionary. The graph is similar to the orthographic one: the nodes of this graph correspond to 39 English language phonemes, and the edge weights correspond to the number of times, across all common words in the dictionary, that two phonemes are directly adjacent. Footnote 2: This resource is hosted online at the speech.cs.cmu website. While applying a bipartite node clustering to phonological data also separates vowel sounds as with the orthographic data, we find that significantly more interesting structure can be extracted with a tripartite clustering, using the latent graph \(\mathbf{W}_{\text{tri}}\). See Figure 4 for a plot of the resulting cluster affinities \(\pi(\mathbf{V})\). The clustering strongly reflects ground-truth categorizations of the phonemes. Most striking is that one of the clusters is dominated by vowel sounds, and that vowel, stop, and nasal/liquid sounds have high affinities for three distinct clusters.
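As a rough illustration of this graph construction, the sketch below assembles a letter-adjacency matrix from a word list; the short word list here is only a stand-in for the 20K-word list used in the experiments, and the symmetric double-counting convention is an assumption made for illustration.

```python
import numpy as np
from string import ascii_lowercase

letters = list(ascii_lowercase)
index = {c: i for i, c in enumerate(letters)}

def letter_adjacency_graph(words):
    """Build the 26 x 26 weighted adjacency matrix whose (i, j) entry counts how
    often letters i and j appear directly adjacent across the given words."""
    A = np.zeros((26, 26))
    for w in words:
        w = [c for c in w.lower() if c in index]
        for a, b in zip(w[:-1], w[1:]):
            A[index[a], index[b]] += 1.0
            A[index[b], index[a]] += 1.0   # keep the graph undirected
    return A

# Placeholder word list; in the experiments this is the list of 20K common words.
words = ["video", "graph", "cluster", "phoneme", "letter"]
A = letter_adjacency_graph(words)
print(A.sum(), A[index["e"], index["r"]])
```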
## 5 Conclusion We propose our latent random step model and perform node clustering on a synthetic graph and real-world orthographic and phonological graphs, finding structure in the graphs that goes beyond typical homophilous clusterings. The simplicity and flexibility of the model suggest several directions for extension of this work. We focus here on the setting where the latent graph \(\mathbf{W}\) is fixed and the bipartite graph \(\mathbf{V}\) is fit. We could instead attempt to fit both at once, yielding the full graph simplification setting. Besides this, there may be a straightforward extension to data and graphs of greater scale than we consider here: the cross-entropy loss (Equation 3) could be approximated by sampling node pairs based on the weights of \(\bar{\mathbf{A}}\), allowing for an SGD fitting algorithm. More broadly, we hope that considering latent graphs beyond homophilous clusters can expand the applicability of node clustering to new problems and fields. Figure 3: Results of bipartite clustering of the orthographic adjacency graph. For each letter, we plot the probability of assignment to the first of the two clusters, that is, the first column of \(\pi(\mathbf{V})\). Letters are sorted in ascending order of this probability. Figure 4: Results of tripartite clustering of the phonological adjacency graph. This ternary plot projects the 3D categorical distributions given by the cluster affinities \(\pi(\mathbf{V})\) onto a 2D space. Each corner corresponds to a different cluster.
2310.04773
Equivariant deformation theory for nilpotent slices in symplectic Lie algebras
The Slodowy slice is a flat Poisson deformation of its nilpotent part, and it was demonstrated by Lehn-Namikawa-Sorger that there is an interesting infinite family of nilpotent orbits in symplectic Lie algebras for which the slice is not the universal Poisson deformation of its nilpotent part. This family corresponds to slices to nilpotent orbits in symplectic Lie algebras whose Jordan normal form has two blocks. We show that the nilpotent Slodowy varieties associated to these orbits are isomorphic as Poisson $\mathbb{C}^\times$-varieties to nilpotent Slodowy varieties in type D. It follows that the universal Poisson deformation in type C is a slice in type D. When both Jordan blocks have odd size, the underlying singularity is equipped with a $\mathbb{Z}_2$-symmetry coming from the type D realisation. We prove that the Slodowy slice in type C is the $\mathbb{Z}_2$-equivariant universal Poisson deformation of its nilpotent part. This result also has a non-commutative counterpart, identifying the finite W-algebra as the universal equivariant quantization.
Filippo Ambrosio, Lewis Topley
2023-10-07T10:50:53Z
http://arxiv.org/abs/2310.04773v2
# Equivariant deformation theory for nilpotent slices in symplectic Lie algebras ###### Abstract. The Slodowy slice is a flat Poisson deformation of its nilpotent part, and it was demonstrated by Lehn-Namikawa-Sorger that there is an interesting infinite family of nilpotent orbits in symplectic Lie algebras for which the slice is not the universal Poisson deformation of its nilpotent part. This family corresponds to slices to nilpotent orbits in symplectic Lie algebras whose Jordan normal form has two blocks. We show that the nilpotent Slodowy varieties associated to these orbits are isomorphic as Poisson \(\mathbb{C}^{\times}\)-varieties to nilpotent Slodowy varieties in type \(\mathsf{D}.\) It follows that the universal Poisson deformation in type \(\mathsf{C}\) is a slice in type \(\mathsf{D}.\) When both Jordan blocks have odd size the underlying singularity is equipped with a \(\mathbb{Z}_{2}\)-symmetry coming from the type \(\mathsf{D}\) realisation. We prove that the Slodowy slice in type \(\mathsf{C}\) is the \(\mathbb{Z}_{2}\)-equivariant universal Poisson deformation of its nilpotent part. This result also has non-commutative counterpart, identifying the finite \(W\)-algebra as the universal equivariant quantization. ## 1. Introduction It follows from recent work of Namikawa [20] that every conical symplectic singularity \(X\) admits a universal Poisson deformation, over an affine base space. This means that every Poisson deformation can be obtained from it via a unique base change. By the work of Losev [15] this Poisson deformation admits a quantization which satisfies another remarkable universal property: every quantization of a Poisson deformation can be obtained from this quantization via base change (see Section 2 for more detail). A convenient formalism for expressing these facts is the functor of Poisson deformations and the functor of quantizations of Poisson deformations. The work of Namikawa and Losev can be succinctly expressed by saying that these functors are representable, that they are represented over the same base, and satisfy an excellent compatibility condition (see [15, Proposition 3.5] and [1, Definition 2.12]). Let \(G\) be a simple complex algebraic group with Lie algebra \(\mathfrak{g}\). If \(e\) is an element in the nilpotent cone \(\mathcal{N}(\mathfrak{g})\subset\mathfrak{g}\) with adjoint orbit \(\mathcal{O}\) then we can consider the Slodowy slice \(\mathcal{S}_{e}\) which is transversal to \(\mathcal{O}\) at \(e\), and the nilpotent Slodowy variety \(\mathcal{N}_{e}:=\mathcal{N}(\mathfrak{g})\cap\mathcal{S}_{e}\). This subvariety comes equipped with a natural \(\mathbb{C}^{\times}\)-action contracting to \(e\), and it acquires a conical Poisson structure by Hamiltonian reduction, as observed by Gan-Ginzburg [9]. The Springer resolution \(\widehat{\mathcal{N}}(\mathfrak{g})\to\mathcal{N}(\mathfrak{g})\) restricts to a symplectic resolution \(\widehat{\mathcal{N}}_{e}\to\mathcal{N}_{e}\). In summary, \(\mathcal{N}_{e}\) is a conical symplectic singularity. Similarly \(\mathcal{S}_{e}\) is a Poisson variety by Hamiltonian reduction, and the adjoint quotient \(\mathfrak{g}\to\mathfrak{g}/\!/G\) restricts to \(\mathcal{S}_{e}\to\mathfrak{g}/\!/G\) in such a way that the scheme theoretic central fibre of \(\mathcal{S}_{e}\to\mathfrak{g}/\!/G\) is equal to \(\mathcal{N}_{e}\) (see [23, SS5]). To summarise the remarks above, \(\mathcal{S}_{e}\to\mathfrak{g}/\!/G\) is a Poisson deformation of \(\mathcal{N}_{e}\). 
Furthermore, the slice admits a natural quantization over the same base, known as the finite \(W\)-algebra. By the uniqueness (up to \(G\)-conjugacy) of \(\mathfrak{sl}_{2}\)-triples containing \(e\) as the nilpositive, due to Kostant and Mal'cev [8, SS3.4], we see that the Slodowy slice only depends on the adjoint orbit of \(e\) up to Poisson isomorphism, and a similar statement holds for finite \(W\)-algebras (see [9] for more detail). Two natural questions arise, for each nilpotent orbit \(\mathcal{O}:=G\cdot e\subseteq\mathcal{N}(\mathfrak{g})\): 1. _is_ \(\mathcal{S}_{e}\) _a universal Poisson deformation of_ \(\mathcal{N}_{e}\)_?_ 2. _is the finite_ \(W\)_-algebra a universal filtered quantization of_ \(\mathcal{N}_{e}\)_?_ The first question was answered comprehensively by Lehn-Namikawa-Sorger [13], who showed that the answer is positive, with a small list of possible exceptions. The second question was answered by the current authors and their collaborators: the questions have a positive answer for precisely the same class of orbits [1, Theorem 1.2]. The exceptional cases, where \(\mathcal{S}_{e}\to\mathfrak{g}/\!/G\) is not a universal Poisson deformation, are listed in the following table. In [1] we also emphasised the notion of a _universal equivariant Poisson deformation_ and _universal equivariant quantization_. Let \(X\) be a conical symplectic singularity, and \(\Gamma\) a reductive group of \(\mathbb{C}^{\times}\)-equivariant Poisson automorphisms. Then a \(\Gamma\)-deformation of \(X\) is a flat \(\mathbb{C}^{\times}\times\Gamma\)-equivariant morphism \(X_{S}\to S\) of Poisson \(\mathbb{C}^{\times}\times\Gamma\)-varieties, where \(S\) is equipped with the trivial Poisson structure and trivial \(\Gamma\)-action, whilst the \(\mathbb{C}^{\times}\)-action on \(S\) is contracting, and the central fibre of \(X_{S}\to S\) is isomorphic to \(X\), via a fixed choice of \(\Gamma\)-equivariant isomorphism. The notion of a \(\Gamma\)-quantization is defined similarly (see Section 2.5). One can consider the functors \(\mathrm{PD}^{\Gamma}_{\mathbb{C}[X]}\) and \(\mathrm{Q}^{\Gamma}_{\mathbb{C}[X]}\) of Poisson \(\Gamma\)-deformations and \(\Gamma\)-quantizations, and one of the main results of [1] states that these functors are representable over the same base, and enjoy the same compatibility as the functors \(\mathrm{PD}_{\mathbb{C}[X]}\) and \(\mathrm{Q}_{\mathbb{C}[X]}\) mentioned above. Written more informally, every conical symplectic singularity with \(\Gamma\)-action as above admits a universal Poisson \(\Gamma\)-deformation (resp. universal \(\Gamma\)-quantization), from which every other such deformation (resp. quantization) can be obtained by base change. In this paper we focus on the exceptions in the fourth column of Table 1. We build an isomorphism from the nilpotent Slodowy variety associated to a two-block nilpotent orbit in a symplectic Lie algebra to a certain nilpotent Slodowy variety in an orthogonal Lie algebra, again with two blocks. We then proceed to examine the Poisson deformation theory of such varieties. Although the Slodowy slice and the finite \(W\)-algebra fail to satisfy the universal properties of Namikawa and Losev, they do, in fact, satisfy certain equivariant universal properties with respect to hidden symmetry groups. This sharpens the observations of Lehn-Namikawa-Sorger [13, SS12], in a purely algebraic manner.
Since subregular orbits in symplectic Lie algebras have Jordan normal form with two blocks, our results generalize a theorem of Slodowy [27, Theorem 8.8] and settle a conjecture of [1] in type \(\mathsf{C}\). ### Poisson isomorphisms and equivariant deformations in type \(\mathsf{C}\) We now describe our first main theorem. Fix \(n\geq 3\) and \(0\leq i<\lfloor\frac{n}{2}\rfloor\). Let \(e\in\mathcal{N}(\mathfrak{so}_{2n})\) be an element of Jordan type \((n,n)\) or \((2n-2i-1,2i+1)\). Also pick an element \(e_{0}\in\mathcal{N}(\mathfrak{sp}_{2n-2})\) with Jordan type \((n-1,n-1)\) or \((2n-2i-2,2i)\), respectively. **Theorem 1.1**.: _There is an isomorphism of \(\mathbb{C}^{\times}\)-Poisson varieties \(\mathcal{N}_{e}\simeq\mathcal{N}_{e_{0}}\)._ Using Theorems 1.1 and 1.2 of [1], one can immediately deduce: **Corollary 1.2**.: 1. \(\mathcal{S}_{e}\to\mathfrak{so}_{2n}/\!/\operatorname{SO}_{2n}\) _is a universal graded Poisson deformation of_ \(\mathcal{N}_{e_{0}}\subseteq\mathfrak{sp}_{2n-2}\)_._ 2. _The finite_ \(W\)_-algebra_ \(U(\mathfrak{so}_{2n},e)\) _is a universal filtered quantization of_ \(\mathcal{N}_{e_{0}}\subseteq\mathfrak{sp}_{2n-2}\)_._ \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Type of \(\mathfrak{g}\) & Any & \(\mathsf{BCFG}\) & \(\mathsf{C}\) & \(\mathsf{G}\) \\ \hline Type of \(\mathcal{O}\) & Regular & Subregular & Two Jordan blocks & dimension 8 \\ \hline \end{tabular} \end{table} Table 1. Cases in which \(\mathcal{S}_{e}\to\mathfrak{g}/\!/G\) is not a universal Poisson deformation of \(\mathcal{N}_{e}\). Theorem 1.1 and part (i) of its corollary are algebraic versions of [13, Proposition 12.1], where the analogous results were proven for germs of complex analytic spaces. Algebraic isomorphisms were obtained in [11, SS5.1], although Poisson structures were not considered (see also [14, SS8], where the isomorphism of Springer resolutions is discussed). Our main result upgrades this to an isomorphism of Poisson varieties. Our proof of Theorem 1.1 uses two different arguments according to whether the Jordan blocks of \(e_{0}\) are both even or odd dimensional. In the even case, we make use of a Poisson presentation for algebras of regular functions on Slodowy slices obtained by the second author in [28]. This yields a closed \(\mathbb{C}^{\times}\)-Poisson embedding \(\mathcal{S}_{e_{0}}\hookrightarrow\mathcal{S}_{e}\) (Corollary 3.2), and a dimension argument allows us to conclude (see Section 5.1). In the odd case the Poisson presentation of [28] does not apply; however, we can still derive a Poisson embedding of slices using Brown's description [4] of the finite \(W\)-algebra for these Jordan types as a truncated twisted Yangian (see Section 5.2). It is well-known that \(\mathcal{S}_{e}\to\mathfrak{so}_{2n}/\!\!/\operatorname{SO}_{2n}\) and \(\mathcal{S}_{e_{0}}\to\mathfrak{sp}_{2n-2}/\!\!/\operatorname{Sp}_{2n-2}\) are both \(\mathbb{C}^{\times}\)-Poisson deformations of their respective central fibres. Likewise, the finite \(W\)-algebras \(U(\mathfrak{so}_{2n},e)\) and \(U(\mathfrak{sp}_{2n-2},e_{0})\) are quantizations of \(\mathbb{C}[\mathcal{N}_{e}]\) and \(\mathbb{C}[\mathcal{N}_{e_{0}}]\) over their respective centres [23]. To formulate our second main result we must introduce the additional assumption that the Jordan type of \(e\) is not \((2k,2k)\). Such partitions are called very even, and correspond to two \(\operatorname{SO}_{4k}\)-orbits permuted by the outer automorphism group of \(\mathfrak{so}_{4k}\).
See the remarks following [8, Theorem 5.1.6] for more detail. For all other two-block partitions, the outer automorphism group of \(\mathfrak{so}_{2n}\) stabilises the orbit of \(e\), and one can show that there exists a splitting of \(\operatorname{Aut}(\mathfrak{so}_{2n})\to\operatorname{Out}(\mathfrak{so}_{2n})\) which stabilises the slice \(\mathcal{S}_{e}\), see Section 4.3. This gives rise to a distinguished group \(\Gamma\subseteq\operatorname{Aut}(\mathcal{S}_{e})\) of \(\mathbb{C}^{\times}\)-Poisson automorphisms. By Theorem 1.1(1) we see that \(\mathcal{N}_{e_{0}}\) admits a \(\mathbb{C}^{\times}\times\Gamma\)-action, with \(\Gamma\) acting by Poisson automorphisms. Note that these symmetries are quite exceptional: they cannot be constructed by restricting automorphisms of \(\mathfrak{sp}_{2n-2}\). **Theorem 1.3**.: _Let \(n\geq 3\) and let \(e_{0}\in\mathcal{N}(\mathfrak{sp}_{2n-2})\) have two Jordan blocks, and suppose that the block sizes are distinct when \(n\) is even._ 1. _The Slodowy slice_ \(\mathcal{S}_{e_{0}}\to\mathfrak{sp}_{2n-2}/\!\!/\operatorname{Sp}_{2n-2}\) _is a universal element of the functor_ \(\operatorname{PD}^{\Gamma}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}\)_._ 2. _The finite_ \(W\)_-algebra_ \(U(\mathfrak{sp}_{2n-2},e_{0})\)_, viewed as a flat family of filtered algebras over its centre, is a universal element of_ \(\operatorname{Q}^{\Gamma}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}\)_._ _Remark 1.4_.: Note that Theorem 1.1 holds for all nilpotent elements of \(\mathfrak{sp}_{2n-2}\) with at most two Jordan blocks. On the other hand, since the very even orbits in \(\mathfrak{so}_{2n}\) are not characteristic, the "hidden symmetries" of \(\mathcal{N}_{e_{0}}\) do not appear when both blocks have odd size, and so Theorem 1.3 cannot even be formulated for these orbits. This paper is a natural progression of [1]. Therein, the theory of equivariant deformation and quantization functors was formalised and applied to the subregular nilpotent Slodowy varieties, which are known to be isomorphic to Kleinian singularities. Together with our collaborators we showed that when \(\mathfrak{g}\) is simple of type \(\mathsf{B}\), \(\mathsf{F}\) or \(\mathsf{C}_{2n}\), \(n\geq 1\), the Slodowy slice is the universal graded Poisson \(\Gamma\)-deformation, where \(\Gamma\subseteq\operatorname{Aut}(\mathfrak{g})\) is a splitting of the outer automorphism group of \(\mathfrak{g}\). We remark that the results of this paper cover all subregular nilpotent Slodowy slices in type \(\mathsf{C}_{n}\) for all \(n\), extending [1, Theorem 1.3]. ### Structure of the paper In Section 2, after fixing notation and conventions, we recall the properties of conical symplectic singularities which are relevant to the current work. We also introduce the functorial formalism for the commutative and non-commutative equivariant deformation theory of these singularities, surveying the notions of [1, SS2]. We conclude the section by explaining that the Slodowy slice is a graded Poisson deformation of its nilpotent part, and that an analogous statement holds for the finite \(W\)-algebra. In Section 3 we exhibit a Poisson presentation (Theorem 3.1) of the algebras of regular functions on Slodowy slices to certain orbits in types \(\mathsf{C}\) and \(\mathsf{D}\). This immediately leads to the \(\mathbb{C}^{\times}\)-Poisson embedding \(\mathcal{S}_{e_{0}}\hookrightarrow\mathcal{S}_{e}\) mentioned earlier in the introduction, in case \(e\) is not very even. 
In Section 4 we discuss various properties of Slodowy slices in orthogonal Lie algebras, again assuming that the Jordan type has two parts, and is not very even. We characterise the restriction of the Pfaffian to the slice, amongst all Casimirs. We show that the defining ideal of \(\mathcal{S}_{e_{0}}\subseteq\mathcal{S}_{e}\) is generated by a certain Casimir, written in terms of the Poisson presentation. Finally, in Section 4.3, we describe the Poisson action induced by the outer automorphism group of \(\mathfrak{so}_{2n}\) on the Slodowy slice. A key result here is Proposition 4.11, illustrating that this action descends to the invariants by a change of sign on the Pfaffian. This allows us to conclude (see Proposition 4.12) that the Pfaffian generates the defining ideal of \(\mathcal{S}_{e_{0}}\). Finally, in Section 5, we gather together all of the ingredients listed above to give the proofs of Theorems 1.1 and 1.3. ### Connections with symplectic duality To conclude the introduction we explain that the results of this paper are consistent with some recent conjectures. One of the most exciting themes in the theory of conic symplectic singularities is symplectic duality. This is a conjectural pairing between conic symplectic singularities, which is understood to be an expression of mirror symmetry for 3d \(\mathcal{N}=4\) supersymmetric gauge theories (see [3] for example). This duality of singularities should transpose certain invariants. For example, there should be an order-reversing bijection on symplectic leaves, and a Koszul duality between certain categories of representations associated to these singularities. Now let \(G\) be a connected reductive algebraic group and \(G^{\vee}\) the Langlands dual group. As usual, we write \(\mathfrak{g}=\operatorname{Lie}(G)\), resp. \(\mathfrak{g}^{\vee}=\operatorname{Lie}(G^{\vee})\). There is an order-reversing map from nilpotent \(G^{\vee}\)-orbits to nilpotent \(G\)-orbits, known as Barbasch-Vogan-Lusztig-Spaltenstein (BVLS) duality, which restricts to a bijection on special orbits [8, SS6.3]. In [16, SS9] this construction was upgraded to a map from nilpotent \(G^{\vee}\)-orbits to equivalence classes of \(G\)-equivariant covers of nilpotent \(G\)-orbits, which is referred to as _refined BVLS duality_. It is expected that the nilpotent Slodowy slice in \(\mathfrak{g}^{\vee}\) associated to an orbit is symplectic dual to the affinization of the refined BVLS dual orbit cover. If we take \(0\leq i\leq\lfloor\frac{n}{2}\rfloor\) and consider the nilpotent slice in \(\mathfrak{so}_{2n}\) to the orbit with partition \((2n-2i-1,2i+1)\) then the BVLS dual is the orbit in \(\mathfrak{so}_{2n}\) with partition \((2^{2i},1^{2n-4i})\). This coincides with the refined BVLS dual. For the nilpotent slice to the orbit of type \((2n-2i-2,2i)\) in \(\mathfrak{sp}_{2n-2}\), the BVLS dual is the orbit of type \((3,2^{2i-2},1^{2n-4i})\) in \(\mathfrak{so}_{2n-1}\), however the refined BVLS dual of this orbit is the universal (2-fold) cover of this orbit. The affinization of the cover is actually isomorphic (as \(\mathbb{C}^{\times}\)-Poisson varieties) to the affinization of the orbit in \(\mathfrak{so}_{2n}\) with partition \((2^{2i},1^{2n-4i})\). This was first observed by Brylinski-Kostant in the language of shared orbit pairs [6, Theorem 5.9], see also [10, Corollary 2.5(a)] for a recent development on this theme. This classical isomorphism of Poisson varieties is therefore the symplectic dual of our isomorphism in Theorem 1.1. 
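For instance, when \(n=4\) and \(i=1\) the slice in \(\mathfrak{so}_{8}\) is taken at the orbit with partition \((5,3)\), whose BVLS dual is the orbit with partition \((2^{2},1^{4})\) in \(\mathfrak{so}_{8}\), while the slice in \(\mathfrak{sp}_{6}\) is taken at the orbit with partition \((4,2)\), whose BVLS dual is the orbit \((3,1^{4})\) in \(\mathfrak{so}_{7}\) and whose refined BVLS dual is the universal \(2\)-fold cover of that orbit; the affinization of this cover is identified with the affinization of the \((2^{2},1^{4})\)-orbit in \(\mathfrak{so}_{8}\) by the Brylinski-Kostant isomorphism just mentioned.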
Finally, we remark that there is a more nuanced version of symplectic duality emerging, which takes into account symmetries of singularities, and might be referred to as equivariant symplectic duality, see the introduction to [18]. It is expected that our Theorem 1.3 is an expression of some aspect of equivariant duality. ### Acknowledgements We would like to thank Giovanna Carnovale, Ali Craw, Francesco Esposito, Austin Hubbard and Ryo Yamagishi for useful discussions. We would also like to thank Yiqiang Li for drawing our attention to [11, 14], and special thanks to Paul Levy and Dmytro Matvieievskyi for useful discussions regarding Section 1.3. The first author is a member of INDAM-GNSAGA and part of his research work was conducted at Friedrich Schiller Universität Jena. The second author's work is funded by the UKRI FLF grant numbers MR/S032657/1, MR/S032657/2, MR/S032657/3. ## 2. Deformation theory of conical symplectic singularities We begin the paper by reviewing some of the well-known facts from the theory of conical symplectic singularities, and recalling some of the main results of [1], which built upon the work of Losev and Namikawa [15, 20, 21]. ### Conventions We work over the field \(\mathbb{C}\) of complex numbers. If \(\Gamma\) is a group acting on a set \(V\), we denote the action with a dot. The subset of invariants is \(V^{\Gamma}:=\{v\in V\mid\gamma\cdot v=v\text{ for all }\gamma\in\Gamma\}\). If \(\Gamma\) is a group acting on an algebra \(A\), the algebra of coinvariants is denoted \(A_{\Gamma}\) and is the quotient of \(A\) by the ideal generated by \(\{a-\gamma\cdot a\mid a\in A,\gamma\in\Gamma\}\), and it should be viewed as the regular functions on the scheme \(\operatorname{Spec}(A)^{\Gamma}\). Partitions \(\lambda=(\lambda_{1},...,\lambda_{k})\) are ordered in a non-decreasing manner: \(\lambda_{1}\leqslant\lambda_{2}\leqslant\cdots\leqslant\lambda_{k}\). ### Graded algebras, filtered algebras and representable functors Let \(\mathbf{G}\) be the category of finitely generated, non-negatively graded commutative \(\mathbb{C}\)-algebras \(B=\bigoplus_{i\geqslant 0}B_{i}\) such that \(B_{0}=\mathbb{C}\) together with graded morphisms. For \(B\in\mathbf{G}\), we write \(\mathbb{C}_{+}:=B/B_{+}\), where \(B_{+}=\bigoplus_{i\geqslant 1}B_{i}\). By a filtered algebra we mean, unless explicitly stated otherwise, an associative non-negatively filtered algebra \(A\) with exhaustive connected filtration given by the sequence of subspaces \[F_{0}A\subset F_{1}A\subset\cdots\subset F_{i}A\subset F_{i+1}A\subset\cdots.\] We denote by \(\mathbf{F}\) the category of finitely generated filtered commutative \(\mathbb{C}\)-algebras \(A=\bigcup_{i\geqslant 0}F_{i}A\) such that \(F_{0}A=\mathbb{C}\) together with strictly filtered morphisms, i.e., \(\operatorname{Hom}_{\mathbf{F}}(A,A^{\prime})\) is the set of filtered algebra morphisms \(\phi\colon A\to A^{\prime}\) such that \(\phi(F_{i}A)=F_{i}A^{\prime}\cap\phi(A)\) for \(A,A^{\prime}\in\mathbf{F}\). With these assumptions the _associated graded_ functor \(\operatorname{gr}\colon\mathbf{F}\to\mathbf{G}\) is exact. We also have a functor in the opposite direction \(\operatorname{flt}\colon\mathbf{G}\to\mathbf{F}\) which takes a graded algebra to itself, with filtration induced by the grading, and similarly for morphisms. The category \(\mathbf{SF}\) of strictly filtered algebras is defined as the essential image of \(\operatorname{flt}\). 
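For example, strictness is a genuine restriction on filtered morphisms: take \(A=\mathbb{C}[x]\) with \(x\) in filtered degree \(1\) and \(A^{\prime}=\mathbb{C}[x]\) with \(x\) in filtered degree \(2\), both regarded as objects of \(\mathbf{SF}\) via \(\operatorname{flt}\). The identity map of the underlying algebra gives a filtered morphism \(\phi\colon A^{\prime}\to A\), since \(F_{i}A^{\prime}\subseteq F_{i}A\) for all \(i\), but \(\phi(F_{1}A^{\prime})=\mathbb{C}\) whilst \(F_{1}A\cap\phi(A^{\prime})=\mathbb{C}\oplus\mathbb{C}x\), so \(\phi\) is not a morphism in \(\mathbf{F}\).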
A Poisson algebra \(A\) is a commutative algebra equipped with a Lie bracket \(\{\cdot,\cdot\}\colon A\times A\to A\) which is a biderivation of the associative multiplication, known as a Poisson bracket. The Poisson centre or Casimirs of \(A\) is \(\operatorname{Cas}A:=\{a\in A\mid\{a,b\}=0\text{ for all }b\in A\}\). If a Poisson algebra \(A=\bigoplus_{i\geqslant 0}A_{i}\) is graded, we say that \(A\) has (Poisson) bracket in degree \(-d\) if \(\{A_{i},A_{j}\}\subset A_{i+j-d}\). When a Poisson graded algebra \(A\) is a \(B\)-algebra for \(B\in\mathbf{G}\), we implicitly assume that the structure map \(B\to A\) has image in \(\operatorname{Cas}A\). We say that a filtered algebra \(A\) has commutator in degree \(-d\) if \([F_{i}A,F_{j}A]\subset F_{i+j-d}A\). In this case the associated graded \(\operatorname{gr}A\) naturally inherits the structure of a Poisson algebra with bracket in degree \(-d\), see [7, SS1.3]. When a filtered algebra \(A\) is a \(B\)-algebra for \(B\in\mathbf{F}\), we always assume the structure map \(B\to A\) is strictly filtered with image in the centre of \(A\), which implies that the induced map \(\operatorname{gr}B\to\operatorname{gr}A\) has image in \(\operatorname{Cas}(\operatorname{gr}A)\). ### Conical symplectic singularities with reductive group actions A symplectic singularity (introduced by Beauville in [2]) is a normal variety \(X\) whose smooth locus \(X^{\operatorname{reg}}\) carries a symplectic form \(\pi\) which extends to a regular (possibly degenerate) \(2\)-form \(\tilde{\pi}\) on some (any) smooth resolution \(\tilde{X}\) of \(X\). The latter condition is independent of the chosen resolution: this can be deduced from [26, Corollary to Theorem 3.22], since \(2\)-forms on the smooth resolution which are defined away from a locus of codimension \(2\) can always be extended, using the algebraic form of Hartogs' principle. An affine symplectic singularity with a contracting \(\mathbb{C}^{\times}\)-action (such that the Poisson bracket is rescaled by the contracting action) is called a conical symplectic singularity. Some classical examples of these varieties are the normalizations of closures of nilpotent orbits in simple Lie algebras. If \(X\) is a conical symplectic singularity then it is necessarily affine, and so the sheaf of regular functions is determined by the algebra \(A=\mathbb{C}[X]\). The choice of the form \(\pi\) gives rise to a Poisson bracket on \(A\) which is generically non-degenerate. Furthermore the \(\mathbb{C}^{\times}\)-action endows \(A\) with a non-negative grading, and there exists \(d\in\mathbb{N}\) such that the Poisson bracket has degree \(-d\) with respect to this grading. Therefore the theory of these singularities is encapsulated by an especially nice class of graded Poisson algebras. In the philosophy of [27] we wish to study deformation theory with fixed symmetries. This works especially nicely for conical symplectic singularities, as we observed in [1]. In what follows we shall always take \(\Gamma\) to be a reductive algebraic group acting on \(A=\mathbb{C}[X]\) by graded Poisson automorphisms, and we say that \(X\) is a _\(\Gamma\)-conical symplectic singularity_. ### Poisson deformations and the deformation functor Throughout this section, we fix a \(\Gamma\)-conical symplectic singularity \(X\) and set \(A=\mathbb{C}[X]\), with Poisson bracket in degree \(-d\). Let \(B\in\mathbf{G}\). 
A graded Poisson deformation of \(A\) over \(B\) is a pair \((\mathcal{A},\iota)\) where \(\mathcal{A}\) is a graded Poisson \(B\)-algebra flat as a \(B\)-module and \(\iota\colon\mathcal{A}\otimes_{B}\mathbb{C}_{+}\to A\) is an isomorphism of graded Poisson algebras. The notion of an isomorphism of graded Poisson deformations of \(A\) over the base \(B\) is the obvious one (see [1, Definition 2.4], for example). Consider the functor \[\operatorname{PD}_{A}\colon\,\mathbf{G}\to\mathbf{Sets}\] which associates to \(B\in\mathbf{G}\) the set \(\operatorname{PD}_{A}(B)\) of isoclasses of graded Poisson deformations of \(A\) over \(B\). For \(\beta\colon B\to B^{\prime}\) in \(\mathbf{G}\), the morphism \(\operatorname{PD}_{A}(\beta)\) maps the isoclass of \((\mathcal{A},\iota)\) to the isoclass of \((\mathcal{A}^{\prime},\iota^{\prime})\), where \(\mathcal{A}^{\prime}=\mathcal{A}\otimes_{B}B^{\prime}\) and \(\iota^{\prime}\) is defined by the composition of isomorphisms \(\mathcal{A}^{\prime}\otimes_{B^{\prime}}\mathbb{C}_{+}\to\mathcal{A}\otimes_{B}\mathbb{C}_{+}\to A\) where the second map is \(\iota\). Namikawa [20] has shown that there exists \(B_{u}\in\mathbf{G}\) such that \(\operatorname{PD}_{A}\) is representable over \(B_{u}\), i.e. the functors \(\operatorname{PD}_{A}\) and \(\operatorname{Hom}_{\mathbf{G}}(B_{u},-)\) are naturally isomorphic. Let \((\mathcal{A}_{u},\iota_{u})\in\operatorname{PD}_{A}(B_{u})\) correspond to \(\operatorname{id}_{B_{u}}\): this is called a universal graded Poisson deformation of \(A\). The terminology comes from the fact that \((B_{u},(\mathcal{A}_{u},\iota_{u}))\) is a universal element of \(\operatorname{PD}_{A}\) in the notation of [17, III.1], however we will always omit the universal base \(B_{u}\) of the deformation for the sake of brevity. The extent to which a universal element of \(\operatorname{PD}_{A}\) is unique is discussed in [1, Definition 2.6, Remark 2.7]. For \(B\in\mathbf{G}\), the group \(\Gamma\) acts on the set \(\operatorname{PD}_{A}(B)\): indeed, \(\gamma\in\Gamma\) maps the isoclass of \((\mathcal{A},\iota)\) to the isoclass of \((\mathcal{A},\gamma\circ\iota)\). The representatives of isoclasses in the fixed-point set \(\operatorname{PD}_{A}(B)^{\Gamma}\) are called Poisson \(\Gamma\)-deformations of \(A\) over \(B\). There is a well-defined functor \(\operatorname{PD}_{A}^{\Gamma}\colon\,\mathbf{G}\to\mathbf{Sets}\) mapping \(B\in\mathbf{G}\) to \(\operatorname{PD}_{A}(B)^{\Gamma}\) and such that \(\operatorname{PD}_{A}^{\Gamma}(\beta)=\operatorname{PD}_{A}(\beta)\) for \(\beta\) a morphism in \(\mathbf{G}\) [1, Definition 2.16]. It follows from the representability of \(\operatorname{PD}_{A}\) that there exists a right \(\Gamma\)-action on \(B_{u}\) mirroring the \(\Gamma\)-action on \(\operatorname{PD}_{A}(B_{u})\) [1, Remark 2.18]. Moreover \(\operatorname{PD}_{A}^{\Gamma}\) is representable over the algebra of coinvariants \((B_{u})_{\Gamma}\) [1, Proposition 2.23] and if \(\alpha_{\Gamma}\colon\, B_{u}\to(B_{u})_{\Gamma}\) denotes the quotient morphism in \(\mathbf{G}\), we shall call \(\operatorname{PD}_{A}(\alpha_{\Gamma})(\mathcal{A}_{u},\iota_{u})\) a universal Poisson \(\Gamma\)-deformation of \(A\).
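For example, if \(\Gamma\simeq\mathbb{Z}/2\mathbb{Z}\) acts on a graded polynomial base \(B_{u}=\mathbb{C}[b_{1},\ldots,b_{k}]\) by fixing \(b_{1},\ldots,b_{k-1}\) and negating \(b_{k}\), then the ideal generated by \(\{b-\gamma\cdot b\mid b\in B_{u},\gamma\in\Gamma\}\) is \((b_{k})\), so \((B_{u})_{\Gamma}=\mathbb{C}[b_{1},\ldots,b_{k-1}]\) and the universal Poisson \(\Gamma\)-deformation is obtained from \((\mathcal{A}_{u},\iota_{u})\) by base change along \(B_{u}\twoheadrightarrow B_{u}/(b_{k})\). An action of precisely this shape, with the sign representation carried by the Pfaffian, will appear in Section 4.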
### Quantizations of Poisson deformations and the quantization functor
Retain the \(\Gamma\)-conical symplectic singularity \(X\) from the previous section and let \(A=\mathbb{C}[X]\), with Poisson bracket in degree \(-d\). The graded Poisson deformation functor has a non-commutative, filtered analogue which we now recall. Let \(B\in\mathbf{SF}\). A filtered quantization (of a Poisson deformation) of \(A\) over \(B\) is a pair \((\mathcal{Q},\iota)\) where \(\mathcal{Q}\) is a filtered \(B\)-algebra with commutator in degree \(-d\), flat as a \(B\)-module, such that \((\operatorname{gr}\mathcal{Q},\iota)\in\operatorname{PD}_{A}(\operatorname{gr}B)\) with \(\operatorname{gr}B\to\operatorname{gr}\mathcal{Q}\) arising from \(B\to\mathcal{Q}\) via the associated graded construction. As in the case of graded Poisson deformations, there is a definition of isomorphism between two filtered quantizations over the base \(B\) [1, Definition 2.9]. We have a functor \[\operatorname{Q}_{A}\colon\operatorname{\mathbf{SF}}\to\operatorname{\mathbf{Sets}}\] associating to \(B\in\operatorname{\mathbf{SF}}\) the set \(\operatorname{Q}_{A}(B)\) of isoclasses of filtered quantizations of \(A\) over \(B\). For \(\beta\colon B\to B^{\prime}\) in \(\operatorname{\mathbf{SF}}\), \(\operatorname{Q}_{A}(\beta)\) maps the isoclass of \((\mathcal{Q},\iota)\) to the isoclass of \((\mathcal{Q}^{\prime},\iota^{\prime})\), with \(\mathcal{Q}^{\prime}=\mathcal{Q}\otimes_{B}B^{\prime}\) and \(\iota^{\prime}\) defined by the composition of isomorphisms \(\operatorname{gr}\mathcal{Q}^{\prime}\otimes_{\operatorname{gr}B^{\prime}}\mathbb{C}_{+}\to\operatorname{gr}\mathcal{Q}\otimes_{\operatorname{gr}B}\mathbb{C}_{+}\to A\) where the first map is obvious and the second one is \(\iota\). It follows from the work of Losev [15] (see especially Proposition 3.5(1)) that the functors \(\operatorname{PD}_{A}\) and \(\operatorname{Q}_{A}\) satisfy an extremely nice compatibility property, and so we say that \(A\) admits an optimal quantization theory. To be precise, this means that there exists \(B_{u}\in\operatorname{\mathbf{SF}}\) such that \(\operatorname{Q}_{A}\) is representable over \(B_{u}\), the functor \(\operatorname{PD}_{A}\) is representable over \(\operatorname{gr}B_{u}\in\operatorname{\mathbf{G}}\), and the two natural isomorphisms make the following diagram commute: \[\begin{array}{ccc}\operatorname{Hom}_{\mathbf{SF}}(B_{u},B)&\longrightarrow&\operatorname{Q}_{A}(B)\\ \downarrow&&\downarrow\\ \operatorname{Hom}_{\mathbf{G}}(\operatorname{gr}B_{u},\operatorname{gr}B)&\longrightarrow&\operatorname{PD}_{A}(\operatorname{gr}B)\end{array}\] where the horizontal arrows are the natural isomorphisms, the right vertical arrow sends the isoclass of \((\mathcal{Q},\iota)\) to that of \((\operatorname{gr}\mathcal{Q},\iota)\), the left vertical arrow is \(\operatorname{gr}_{B}\) and, for \(B\in\operatorname{\mathbf{SF}}\), we set \(\operatorname{gr}_{B}\colon\operatorname{Hom}_{\operatorname{\mathbf{SF}}}(B_{u},B)\to\operatorname{Hom}_{\operatorname{\mathbf{G}}}(\operatorname{gr}B_{u},\operatorname{gr}B)\) to be the map \(\beta\mapsto\operatorname{gr}\beta\). We call the element \((\mathcal{Q}_{u},\iota_{u})\in\operatorname{Q}_{A}(B_{u})\) corresponding to \(\operatorname{id}_{B_{u}}\) a universal filtered quantization of \(A\), and we remark that it satisfies a property which is analogous to the one of \((\mathcal{A}_{u},\iota_{u})\). See [1, SS2.5] for more detail. There is also a quantum counterpart to the theory of Poisson \(\Gamma\)-deformations: for \(B\in\operatorname{\mathbf{SF}}\), the group \(\Gamma\) acts on the set \(\operatorname{Q}_{A}(B)\) with \(\gamma\in\Gamma\) mapping the isoclass of \((\mathcal{Q},\iota)\) to the isoclass of \((\mathcal{Q},\gamma\circ\iota)\). We call representatives of isoclasses in the fixed-point sets \(\operatorname{Q}_{A}(B)^{\Gamma}\) the filtered \(\Gamma\)-quantizations of \(A\) over \(B\). This allows us to define a functor \(\operatorname{Q}_{A}^{\Gamma}\colon\operatorname{\mathbf{SF}}\to\operatorname{\mathbf{Sets}}\) whose definition is analogous to \(\operatorname{PD}_{A}^{\Gamma}\) [1, Definition 2.24], and a right \(\Gamma\)-action on \(B_{u}\) (see [1, SS2.7]). 
The notion of optimal quantization theory can be upgraded to the equivariant setting in the obvious manner: we say that \(A\) admits an optimal \(\Gamma\)-quantization theory if there exists \(B_{u,\Gamma}\in\operatorname{\mathbf{SF}}\) such that \(\operatorname{Q}_{A}^{\Gamma}\), resp. \(\operatorname{PD}_{A}^{\Gamma}\), is representable over \(B_{u,\Gamma}\), resp. \(\operatorname{gr}B_{u,\Gamma}\in\operatorname{\mathbf{G}}\), and the two natural isomorphisms make the following diagram commute: \[\begin{array}{ccc}\operatorname{Hom}_{\mathbf{SF}}(B_{u,\Gamma},B)&\longrightarrow&\operatorname{Q}^{\Gamma}_{A}(B)\\ \downarrow&&\downarrow\\ \operatorname{Hom}_{\mathbf{G}}(\operatorname{gr}B_{u,\Gamma},\operatorname{gr}B)&\longrightarrow&\operatorname{PD}^{\Gamma}_{A}(\operatorname{gr}B)\end{array}\] where the arrows are defined as in the preceding diagram. In [1, Theorem 2.39] we observed that whenever a Poisson algebra \(A\) admits an optimal quantization theory it also admits an optimal \(\Gamma\)-quantization theory, with the coinvariant algebra \((B_{u})_{\Gamma}\) as the representing object \(B_{u,\Gamma}\in\operatorname{\mathbf{SF}}\). Finally, if \(\alpha_{\Gamma}\colon B_{u}\to(B_{u})_{\Gamma}\) is the quotient morphism in \(\operatorname{\mathbf{SF}}\), then we call \(\operatorname{Q}_{A}(\alpha_{\Gamma})(\mathcal{Q}_{u},\iota_{u})\) a universal \(\Gamma\)-quantization of \(A\). ### The Slodowy slice as a Poisson deformation Let \(G\) be a complex, connected simple algebraic group, let \(\mathfrak{g}=\operatorname{Lie}(G)\), and let \(\mathcal{N}:=\mathcal{N}(\mathfrak{g})\) be the nilpotent cone of \(\mathfrak{g}\). For each \(e\in\mathcal{N}\) we can choose an \(\mathfrak{sl}_{2}\)-triple \((e,h,f)\) containing \(e\) as the nilpositive element, and define the affine subvariety \(\mathcal{S}_{e}:=e+\mathfrak{g}^{f}\), known as the Slodowy slice to \(G\cdot e\) at \(e\). It is a classical result that \(\mathcal{S}_{e}\) intersects \(G\cdot e\) transversally at \(e\). We now describe a conical structure on \(\mathcal{S}_{e}\) and, equivalently, a non-negative grading on \(\mathbb{C}[\mathcal{S}_{e}]\). The semisimple derivation \(\operatorname{ad}(h)\) has integral eigenvalues and so it defines a grading \(\mathfrak{g}=\bigoplus_{i\in\mathbb{Z}}\mathfrak{g}(i)\), called the Dynkin grading. If we let \(\nu:\mathbb{C}^{\times}\to G\) be the unique cocharacter such that \(d_{1}\nu(1)=h\), then we can define a \(\mathbb{C}^{\times}\)-action on \(\mathfrak{g}\) by \[(t,x)\mapsto t^{-2}\operatorname{Ad}(\nu(t))x. \tag{2.1}\] This action preserves \(\mathcal{S}_{e}\) and gives a contracting \(\mathbb{C}^{\times}\)-action with unique fixed point \(e\), known as the _Kazhdan action_. The induced grading on \(\mathbb{C}[\mathcal{S}_{e}]\) is known as the _Kazhdan grading_. The Lie bracket on \(\mathfrak{g}\) extends uniquely to a Poisson bracket on \(\mathbb{C}[\mathfrak{g}^{*}]\) with \(\mathfrak{g}\) placed in degree \(2\), so that the Poisson bracket is graded in degree \(-2\). The Casimirs \(\operatorname{Cas}\mathbb{C}[\mathfrak{g}^{*}]\) coincide with the \(G\)-invariant functions \(\mathbb{C}[\mathfrak{g}^{*}]^{G}\). Using a choice of \(G\)-equivariant isomorphism \(\kappa\colon\mathfrak{g}\to\mathfrak{g}^{*}\) we transport this to a Poisson structure on \(S(\mathfrak{g}^{*})=\mathbb{C}[\mathfrak{g}]\). Thanks to [9], the variety \(\mathcal{S}_{e}\) can be equipped with a Poisson structure by applying Hamiltonian reduction to \(\mathfrak{g}\). **Lemma 2.1**.: _[_24_, Footnote 1] Restriction \(\mathbb{C}[\mathfrak{g}]\to\mathbb{C}[\mathcal{S}_{e}]\) gives an isomorphism \(\mathbb{C}[\mathfrak{g}]^{G}\simeq\operatorname{Cas}\mathbb{C}[\mathcal{S}_{e}]\)._ The _nilpotent Slodowy variety at \(e\)_ is \(\mathcal{N}_{e}:=\mathcal{N}\cap\mathcal{S}_{e}\). 
This lies in the union of nilpotent orbits whose closures contain \(G\cdot e\) and it is a Poisson subvariety of \(\mathcal{S}_{e}\). In fact it carries the structure of a conical symplectic singularity (see [1, SS 3.1] for a more detailed survey of these facts). It is a well-known theorem of Kostant (see [12, SS7] for example) that we have a graded Poisson isomorphism \(\mathbb{C}[\mathfrak{g}]\otimes_{\mathbb{C}[\mathfrak{g}]^{G}}\mathbb{C}_{+}\simeq\mathbb{C}[\mathcal{N}]\), and by [23, Theorem 5.4] we obtain an isomorphism \(\iota\colon\mathbb{C}[\mathcal{S}_{e}]\otimes_{\mathbb{C}[\mathfrak{g}]^{G}}\mathbb{C}_{+}\to\mathbb{C}[\mathcal{N}_{e}]\). To rephrase this in the language of the functor of Poisson deformations, we have that \[(\mathbb{C}[\mathcal{S}_{e}],\iota)\in\operatorname{PD}_{\mathbb{C}[\mathcal{N}_{e}]}(\mathbb{C}[\mathfrak{g}]^{G}).\] ### The finite \(W\)-algebra as a quantization Denote by \(U(\mathfrak{g})\) the universal enveloping algebra of \(\mathfrak{g}\), and extend the Kazhdan grading on \(\mathfrak{g}\) to a filtration on \(U(\mathfrak{g})\). Put \(\chi:=\kappa(e)\). The set \(\mathfrak{g}(<-1)_{\chi}:=\{x-\chi(x)\mid x\in\mathfrak{g}(<-1)\}\subseteq U(\mathfrak{g})\) is stable under \(\operatorname{ad}\mathfrak{g}(<0)\). The finite \(W\)-algebra is the quantum Hamiltonian reduction \[U(\mathfrak{g},e):=\big{(}U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{g}(<-1)_{\chi}\big{)}^{\operatorname{ad}\mathfrak{g}(<0)}.\] The subquotient inherits an algebra structure from \(U(\mathfrak{g})\) with remarkable properties. The Kazhdan filtration descends to \(U(\mathfrak{g},e)\) and defines a non-negative filtration on \(U(\mathfrak{g},e)\). The commutator on \(U(\mathfrak{g},e)\) lies in filtered degree \(-2\) and \(\operatorname{gr}U(\mathfrak{g},e)\simeq\mathbb{C}[\mathcal{S}_{e}]\) as Kazhdan graded Poisson algebras by [9, Proposition 5.2]. Furthermore, \(U(\mathfrak{g},e)\) only depends on the adjoint orbit of \(e\) up to isomorphism [23, 9]. Denote the centre of \(U(\mathfrak{g},e)\) by \(Z(\mathfrak{g},e)\). The natural map \(U(\mathfrak{g})^{G}=Z(\mathfrak{g})\to Z(\mathfrak{g},e)\) is an isomorphism (see [1, Lemma 3.3] for example). Furthermore the inclusion map \(Z(\mathfrak{g},e)\to U(\mathfrak{g},e)\) is flat and strictly filtered with respect to the Kazhdan filtration and \(\operatorname{gr}Z(\mathfrak{g})\simeq\mathbb{C}[\mathfrak{g}]^{G}\), where the associated graded is taken with respect to the Kazhdan grading. Combining the details above we conclude that \[(U(\mathfrak{g},e),\iota)\in\operatorname{Q}_{\mathbb{C}[\mathcal{N}_{e}]}(Z(\mathfrak{g})),\] where \(\iota\) is defined in Section 2.6. We refer the reader to [1, Lemma 3.4(2)] for more detail. **Theorem 2.2** ([13, Theorems 1.2 & 1.3]; [1, Theorem 1.2]).: _Let \(e\in\mathcal{N}(\mathfrak{g})\) and let \(\mathcal{N}_{e}\), resp. \(\mathcal{S}_{e}\), resp. \(U(\mathfrak{g},e)\) be the nilpotent Slodowy variety, resp. the Slodowy slice, resp. the finite \(W\)-algebra at \(e\). Then the following are equivalent:_ 1. \(\operatorname{PD}_{\mathbb{C}[\mathcal{N}_{e}]}\) _is represented by_ \(\mathbb{C}[\mathfrak{g}]^{G}\) _and_ \((\mathbb{C}[\mathcal{S}_{e}],\iota)\) _is a universal Poisson deformation;_ 2. \(\operatorname{Q}_{\mathbb{C}[\mathcal{N}_{e}]}\) _is represented by_ \(Z(\mathfrak{g})\) _and_ \((U(\mathfrak{g},e),\iota)\) _is a universal filtered quantization;_ 3. _the pair_ \((\mathfrak{g},G\cdot e)\) _does not appear in Table_ 1_._
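To illustrate the exceptional cases, consider for instance a regular nilpotent element \(e\). Then the Kostant section \(\mathcal{S}_{e}\) consists entirely of regular elements, so \(\mathcal{N}_{e}=\{e\}\) is a single reduced point and \(\mathbb{C}[\mathcal{N}_{e}]=\mathbb{C}\). Every graded Poisson deformation of \(\mathbb{C}\) over \(B\in\mathbf{G}\) is isomorphic to \(B\) itself (a flat graded \(B\)-algebra with special fibre \(\mathbb{C}\) is generated by \(1\) by graded Nakayama, hence free of rank one), so \(\operatorname{PD}_{\mathbb{C}}\) is represented by \(\mathbb{C}\), whereas \(\mathbb{C}[\mathfrak{g}]^{G}\) is a polynomial ring in \(\operatorname{rank}(\mathfrak{g})\) variables; in particular \((\mathbb{C}[\mathcal{S}_{e}],\iota)\) cannot be a universal Poisson deformation, which is why regular orbits appear in Table 1.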
## 3. Presentations of Slodowy slices associated to two-row partitions
### Poisson presentations
Let \(X\) be a set. The free Lie algebra \(L_{X}\) on \(X\) is the initial object in the category of complex Lie algebras generated by \(X\) and can be constructed as the Lie subalgebra of the free associative algebra \(\mathbb{C}\langle X\rangle\) generated by \(X\), with the commutator as a Lie bracket, see [25, Theorem 4.2]. The _free Poisson algebra generated by \(X\)_ is the initial object in the category of complex Poisson algebras generated by \(X\). It can be constructed as the symmetric algebra \(S(L_{X})\). If there is a Poisson surjection \(S(L_{X})\twoheadrightarrow A\) then we say that \(A\)_is Poisson generated by (the image of) \(X\)_. It is important to distinguish this from \(A\) being generated by (the image of) \(X\) as a commutative algebra, and we will make this distinction clear whenever necessary. We say that a complex Poisson algebra \(A\) has Poisson generators \(X\) and relations \(Y\subseteq S(L_{X})\) if there is a surjective Poisson homomorphism \(S(L_{X})\twoheadrightarrow A\) and the kernel is the Poisson ideal generated by \(Y\). ### Poisson presentations of Slodowy slices Let \(\mathfrak{g}\) be a simple Lie algebra of type \(\mathsf{C}_{n}\) or \(\mathsf{D}_{n}\) and let \(\lambda\) be a partition of \(2n\). Suppose that we are in one of the following situations: (i) \(\mathfrak{g}\) is of type \(\mathsf{C}\) and all parts of \(\lambda\) are even; (ii) \(\mathfrak{g}\) is of type \(\mathsf{D}\) and all parts of \(\lambda\) are odd. In [28] the second author provided a Poisson presentation for \(\mathbb{C}[\mathcal{S}_{e}]\) in the cases where \(e\in\mathcal{N}(\mathfrak{g})\) has partition satisfying (i) or (ii). In the current setting we only need the presentation in types \(\mathsf{C}\) and \(\mathsf{D}\) when \(e\) has two Jordan blocks. The reader should note that the presentation here is simpler than the general case, in particular relations (1.9)-(1.12) from [28, Theorem 1.3] are vacuous in the two-block setting, and the relation (3.7), which is special to the symplectic case, can be expressed in a closed form for \(n=2\) (see [28, Example 4.9]). Until otherwise stated let \(\mathfrak{g}\) be a simple Lie algebra of type \(\mathsf{C}_{n}\) or \(\mathsf{D}_{n}\) and fix an element \(e\in\mathcal{N}(\mathfrak{g})\) with partition \(\lambda=(\lambda_{1},\lambda_{2})\) satisfying (i) or (ii). Write \[s_{1,2}=\frac{\lambda_{2}-\lambda_{1}}{2}.\] We also introduce the following notation for \(r,s\in\mathbb{Z}\): \[\varpi_{r,s}:=(-1)^{r}-(-1)^{s}=\left\{\begin{array}{cl}2&\text{ if }r\in 2\mathbb{Z},s\in 2\mathbb{Z}+1,\\ 0&\text{ if }r+s\in 2\mathbb{Z},\\ -2&\text{ if }r\in 2\mathbb{Z}+1,s\in 2\mathbb{Z}.\end{array}\right. \tag{3.1}\] **Theorem 3.1**.: _[_28_, Theorem 1.3, Proposition 4.7 & Example 4.9]_ _Let \(\mathfrak{g}\) be a simple Lie algebra of type \(\mathsf{C}_{n}\) or \(\mathsf{D}_{n}\) and let \(e\in\mathcal{N}(\mathfrak{g})\) with partition \((\lambda_{1},\lambda_{2})\vdash 2n\) satisfying (i) or (ii). 
The algebra \(\mathbb{C}[\mathcal{S}_{e}]\) has Poisson generators_ \[\{\eta_{i}^{(2r)}\mid 1\leqslant i\leqslant 2,\ r>0\}\cup\{\theta^{(r)}\mid r>s_{1,2}\} \tag{3.2}\] _together with the following relations_ \[\{\eta_{i}^{(2r)},\eta_{j}^{(2s)}\}=0 \tag{3.3}\] \[\{\eta_{i}^{(2r)},\theta^{(s)}\}=(\delta_{i,1}-\delta_{i,2})\sum_{t=0}^{r-1}\eta_{i}^{(2t)}\theta^{(2r+s-1-2t)} \tag{3.4}\] \[\{\theta^{(r)},\theta^{(s)}\}=\frac{1}{2}\sum_{t=r}^{s-1}\theta^{(t)}\theta^{(r+s-1-t)}+(-1)^{s_{1,2}}\varpi_{r,s}\sum_{t=0}^{(r+s-1)/2}\eta_{2}^{(r+s-1-2t)}\tilde{\eta}_{1}^{(2t)}\quad\text{ for }\quad r<s. \tag{3.5}\] \[\eta_{1}^{(2r)}=0\quad\text{for}\quad 2r>\lambda_{1} \tag{3.6}\] \[\sum_{t=0}^{\lambda_{1}/2}\eta_{1}^{(2t)}\theta^{(\lambda_{1}-2t+s_{1,2}+1)}=0\qquad\qquad\text{when }\mathfrak{g}=\mathfrak{sp}_{2n}. \tag{3.7}\] _where we adopt the convention \(\eta_{i}^{(0)}=\tilde{\eta}_{i}^{(0)}=1\) and the elements \(\{\tilde{\eta}_{1}^{(2r)}\mid r\in\mathbb{Z}_{\geq 0}\}\) are defined via the recursion_ \[\tilde{\eta}_{1}^{(2r)}:=-\sum_{t=1}^{r}\eta_{1}^{(2t)}\tilde{\eta}_{1}^{(2r-2t)}. \tag{3.8}\] _Furthermore the Kazhdan grading on \(\mathbb{C}[\mathcal{S}_{e}]\) places \(\eta_{i}^{(2r)}\) in degree \(4r\) and \(\theta^{(r)}\) in degree \(2r\), and the Poisson bracket lies in degree \(-2\). _ Now let \(\mathfrak{g}:=\mathfrak{so}_{2n}\) and \(\mathfrak{g}_{0}:=\mathfrak{sp}_{2n-2}\) with \(n\geq 3\), fix a partition \(\lambda=(\lambda_{1},\lambda_{2})\vdash 2n\) satisfying (ii) from Section 3.2 and choose nilpotent elements \(e\in\mathfrak{g}\) and \(e_{0}\in\mathfrak{g}_{0}\) with partitions \((\lambda_{1},\lambda_{2})\) and \((\lambda_{1}-1,\lambda_{2}-1)\) respectively. In particular, \((\lambda_{1}-1,\lambda_{2}-1)\) satisfies (i). Also choose \(\mathfrak{sl}_{2}\)-triples for these nilpotent elements and build the Slodowy slices \(\mathcal{S}_{e}\subset\mathfrak{g}\) and \(\mathcal{S}_{e_{0}}\subset\mathfrak{g}_{0}\). **Corollary 3.2**.: _There is a Kazhdan graded surjection of Poisson algebras_ \[\varphi\colon\mathbb{C}[\mathcal{S}_{e}]\to\mathbb{C}[\mathcal{S}_{e_{0}}] \tag{3.9}\] _defined by sending the generators (3.2) of \(\mathbb{C}[\mathcal{S}_{e}]\) to those elements of \(\mathbb{C}[\mathcal{S}_{e_{0}}]\) with the same labels. The kernel of this homomorphism is Poisson generated by the element (3.7)._ Proof.: The relations (3.3)-(3.5) are identical for both \(\mathbb{C}[\mathcal{S}_{e}]\) and \(\mathbb{C}[\mathcal{S}_{e_{0}}]\). For parity reasons the sets \(\{s\mid 2s>\lambda_{1}\}\) and \(\{s\mid 2s>\lambda_{1}-1\}\) are equal, and so relation (3.6) places precisely the same constraints on the generators of \(\mathbb{C}[\mathcal{S}_{e}]\) and \(\mathbb{C}[\mathcal{S}_{e_{0}}]\). The only remaining relation in \(\mathbb{C}[\mathcal{S}_{e_{0}}]\) is (3.7), so \(\varphi\) is a well-defined Poisson surjection whose kernel is Poisson generated by the element appearing in (3.7). Since this Poisson generator of the kernel is homogeneous with respect to the Kazhdan grading, it follows that \(\varphi\) is graded. 
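As a concrete illustration, take \(n=4\) and \((\lambda_{1},\lambda_{2})=(3,5)\), so that \(e\in\mathfrak{so}_{8}\) has Jordan type \((3,5)\), \(e_{0}\in\mathfrak{sp}_{6}\) has Jordan type \((2,4)\), and \(s_{1,2}=1\) on both sides. The surviving Poisson generators are \(\eta_{1}^{(2)}\), the \(\eta_{2}^{(2r)}\) for \(r>0\) and the \(\theta^{(r)}\) for \(r\geq 2\), since relation (3.6) forces \(\eta_{1}^{(2r)}=0\) for \(r\geq 2\) in both algebras, exactly as in the parity argument above. The kernel of \(\varphi\) is then Poisson generated by the element of relation (3.7) formed for the symplectic partition \((2,4)\), namely \(\theta^{(4)}+\eta_{1}^{(2)}\theta^{(2)}\); this element will reappear as the Casimir \(z\) of (4.1) below.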
## 4. Slodowy slices in orthogonal algebras
### Invariant theory for classical Lie algebras
Let \(n>0\) and for \(x\in\mathfrak{gl}_{2n}\), consider the characteristic polynomial \(\det(t\mathbb{I}_{2n}-x)=t^{2n}+\sum_{i=1}^{2n}q_{i}(x)t^{2n-i}\in\mathbb{C}[\mathfrak{gl}_{2n}]^{\mathrm{GL}_{2n}}[t]\); for \(i=1,\ldots,2n\), the element \(q_{i}\) is a homogeneous polynomial of degree \(i\) in \(\mathbb{C}[\mathfrak{gl}_{2n}]^{\mathrm{GL}_{2n}}\). In fact \(\{q_{i}\mid i=1,\ldots,2n\}\) is a complete system of algebraically independent generators of the algebra \(\mathbb{C}[\mathfrak{gl}_{2n}]^{\mathrm{GL}_{2n}}\) [12, Proposition 7.9]. Note that, up to sign, \(q_{1}\) is the trace and \(q_{2n}\) is the determinant. Now let \(n\geq 3\) and set \(\tilde{G}:=\mathrm{O}_{2n}\), \(G:=\mathrm{SO}_{2n}\), and \(\mathfrak{g}:=\mathfrak{so}_{2n}\). For convenience we assume that these groups stabilise the bilinear form associated to the identity matrix (see Remark 4.2). We recall some aspects of the invariant theory for the adjoint representation of \(G\), and we refer the reader to [12, SS7.7] for a good introductory survey. Consider the restriction of \(q_{i}\) to \(\mathfrak{g}\), and denote it \(p_{i}\). We have \(p_{i}\in\mathbb{C}[\mathfrak{g}]^{G}\), and furthermore \(p_{i}=0\) for \(i\) odd, whilst \(p_{i}\neq 0\) for \(i\) even. It is well-known [12, SS7.7] that \(p_{2n}\) admits a square root in \(\mathbb{C}[\mathfrak{g}]^{G}\) known as the _Pfaffian_, and we will denote it \(p\) throughout this article. The polynomials \(p_{2},p_{4},...,p_{2n-2},p\) form a complete set of algebraically independent generators of \(\mathbb{C}[\mathfrak{g}]^{G}\). The determinant \(\det\colon\tilde{G}\to\mathbb{C}^{\times}\) takes values in the subgroup \(\{\pm 1\}\subseteq\mathbb{C}^{\times}\) and \(G\) is the kernel of this homomorphism. Therefore the outer automorphism group \(\Gamma:=\tilde{G}/G\) of \(\mathfrak{g}\) is a cyclic group of order two. Note that the action of \(\tilde{G}\) on \(\mathbb{C}[\mathfrak{g}]^{G}\) factors to an action of \(\Gamma\), and we can characterise the Pfaffian as follows. **Lemma 4.1**.: \(\mathbb{C}p\) _is the unique one dimensional subspace of \(\mathbb{C}[\mathfrak{g}]^{G}\) such that:_ 1. \(\mathbb{C}p\) _lies in degree_ \(n\)_;_ 2. \(\mathbb{C}p\) _is isomorphic to the unique non-trivial representation of_ \(\Gamma\)_._ Proof.: Consider the subalgebra \(C\subseteq\mathbb{C}[\mathfrak{g}]^{G}\) generated by \(p_{2},p_{4},...,p_{2n}\). We claim that \(C=\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\). Since \(\tilde{G}\subseteq\mathrm{GL}_{2n}\) it follows that every element of \(C\) is \(\tilde{G}\)-fixed, and so \(C\subseteq\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\subseteq\mathbb{C}[\mathfrak{g}]^{G}\). Since \(p^{2}=p_{2n}\) the quotient \(\mathbb{C}[\mathfrak{g}]^{G}/C\) is a \(2\)-dimensional \(\tilde{G}\)-module with basis \(\{1,p\}\). Note that the \(\tilde{G}\)-action factors through \(\Gamma\) and so \(C=\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\) will follow if we demonstrate that \(p\) spans a copy of the unique non-trivial simple \(\Gamma\)-module. Thanks to our choice of symmetric bilinear form we have \(\tilde{G}=\{g\in\mathrm{GL}_{2n}\mid g^{\top}=g^{-1}\}\) and \(\mathfrak{g}=\{x\in\mathfrak{gl}_{2n}\mid x=-x^{\top}\}\). As noted in [12, SS7.7], for \(g\in\tilde{G},x\in\mathfrak{g}\) we have \(p(gxg^{\top})=\det(g)p(x)\), and so \(g\cdot p=-p\) for any element \(g\in\tilde{G}\) of determinant \(-1\). This shows that \(C=\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\). Since \(p^{2}=p_{2n}\) it follows that \(\mathbb{C}[\mathfrak{g}]^{G}\) is a free \(\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\)-module of rank two and \[\mathbb{C}[\mathfrak{g}]^{G}=\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\oplus\mathbb{C}[\mathfrak{g}]^{\tilde{G}}p.\] By our previous remarks, this is also a decomposition of \(\Gamma\)-modules: \(\mathbb{C}[\mathfrak{g}]^{\tilde{G}}\) is equal to the isotypic component of the trivial \(\Gamma\)-module, and \(\mathbb{C}[\mathfrak{g}]^{\tilde{G}}p\) is the isotypic component of the non-trivial simple \(\Gamma\)-module. 
Since \(\mathbb{C}p\) is the unique one dimensional subspace of \(\mathbb{C}[\mathfrak{g}]^{\tilde{G}}p\) of degree \(n\) the proof is complete. _Remark 4.2_.: In this section we could have chosen \(\tilde{G}\) to be the subgroup stabilising any non-degenerate symmetric bilinear form on \(\mathbb{C}^{2n}\). Such forms are classified by invertible symmetric matrices, and if we chose one other than the identity matrix then our discussion of the Pfaffian would be slightly more complicated, see [12, SS7.7]. By the remarks of [12, SS1.3] we see that Lemma 4.1 holds true for any choice of form. Let \(e\in\mathcal{N}(\mathfrak{g})\) and embed it in a \(\mathfrak{sl}_{2}\) triple \((e,h,f)\). Retain the notation \(\mathcal{S}_{e}=e+\mathfrak{g}^{f}\) for the Slodowy slice. **Lemma 4.3**.: _Suppose that the short exact sequence \(G\hookrightarrow\tilde{G}\twoheadrightarrow\Gamma\) is split by a map \(s\colon\Gamma\to\tilde{G}\) such that \(s(\Gamma)\) preserves \(\mathcal{S}_{e}\). Then the restriction of the Pfaffian to \(\mathcal{S}_{e}\) generates the unique one dimensional subspace \(\mathbb{C}p|_{\mathcal{S}_{e}}\subseteq\mathrm{Cas}\,\mathbb{C}[\mathcal{S}_ {e}]\) such that:_ 1. \(\mathbb{C}p|_{\mathcal{S}_{e}}\) _has Kazhdan degree_ \(2n\)_;_ 2. \(\mathbb{C}p|_{\mathcal{S}_{e}}\) _is isomorphic to the unique non-trivial representation of_ \(s(\Gamma)\)_._ Proof.: Using (2.1) we see that elements of \(\mathbb{C}[\mathfrak{g}]^{G}\) of total degree \(i\) lie in Kazhdan degree \(2i\). Now the result is a direct consequence of Lemma 4.1, along with the remarks of Section 2.6. ### A special Casimir on the orthogonal slice In this section we fix \(\mathfrak{g}:=\mathfrak{so}_{2n}\) with \(n\geq 3\) and an element \(e\in\mathcal{N}(\mathfrak{g})\) with associated partition \(\lambda=(\lambda_{1},\lambda_{2})\vdash 2n\) satisfying (ii) from Section 3.2. We will study the slice \(\mathcal{S}_{e}\) using the Poisson presentation of Theorem 3.1, and we resume all the relevant notation from that section. In particular we let \(s_{1,2}:=\frac{\lambda_{2}-\lambda_{1}}{2}\), so that \(n=\lambda_{1}+s_{1,2}\). Introduce the notation \[z:=\sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\eta_{1}^{(2t)}\theta^{(\lambda_{1}-2t+ s_{1,2})}=\sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\eta_{1}^{(2t)}\theta^{(n-2t)} \in\mathbb{C}[\mathcal{S}_{e}]. \tag{4.1}\] The goal of this section is to prove: **Proposition 4.4**.: _We have \(z\in\mathrm{Cas}\,\mathbb{C}[\mathcal{S}_{e}]\)._ Proof.: We proceed by direct computation, showing that Poisson brackets between \(z\) and the generators (3.2) vanish. For all integers \(r\geq 0\), we have: \[\big{\{}\eta_{1}^{(2r)},z\big{\}} = \sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\big{\{}\eta_{1}^{(2r)},\eta_{1 }^{(2t)}\theta^{(n-2t)}\big{\}}=\sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\eta_{1}^{ (2t)}\big{\{}\eta_{1}^{(2r)},\theta^{(n-2t)}\big{\}}=\] \[= \sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\eta_{1}^{(2t)}\sum_{k=0}^{r -1}\eta_{1}^{(2k)}\theta^{(2r+n-2t-2k)}=\sum_{k=0}^{r-1}\eta_{1}^{(2k)}\sum_{ t=0}^{\frac{\lambda_{1}-1}{2}}\eta_{1}^{(2t)}\theta^{(2r+n-2t-2k)}=\] \[= \sum_{k=0}^{r-1}\eta_{1}^{(2k)}\big{\{}\eta_{1}^{(\lambda_{1}+1) },\theta^{(n+2r-2k)}\big{\}}=0,\] where we use \(\eta_{1}^{(\lambda_{1}+1)}=0\) by (3.6) in the final line of the calculation. 
We now show that \(\big{\{}\eta_{2}^{(2r)},z\big{\}}=0\) for all \(r\geq 0\), in a similar fashion: \[\big{\{}\eta_{2}^{(2r)},z\big{\}} = \sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\big{\{}\eta_{2}^{(2r)},\eta_ {1}^{(2t)}\theta^{(n-2t)}\big{\}}=-\sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\eta_{1 }^{(2t)}\big{\{}\eta_{2}^{(2r)},\theta^{(n-2t)}\big{\}}=\] \[= -\sum_{k=0}^{r-1}\eta_{2}^{(2k)}\big{\{}\eta_{1}^{(\lambda_{1}+1 )},\theta^{(n+2r-2k)}\big{\}}=0.\] Now we observe that \(\{\eta_{1}^{(2)},\theta^{(s)}\}=\theta^{(s+1)}\) for all \(s\geq s_{1,2}+1\). Using the Jacobi identity and a recursive argument, we deduce that \(\{z,\theta^{(s)}\}=0\) for all \(s\geq s_{1,2}+1\) provided \(\{z,\theta^{(s_{1,2}+1)}\}=0\). Using \(s_{1,2}=n-\lambda_{1}\), it remains to show that \(\big{\{}z,\theta^{(n-\lambda_{1}+1)}\big{\}}=0\). \[\big{\{}z,\theta^{(n-\lambda_{1}+1)}\big{\}} = \sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\Big{\{}\eta_{1}^{(2t)}\theta ^{(n-2t)},\theta^{(n-\lambda_{1}+1)}\Big{\}}\] \[= \sum_{t=0}^{\frac{\lambda_{1}-1}{2}}\Big{\{}\eta_{1}^{(2t)}, \theta^{(n-\lambda_{1}+1)}\Big{\}}\,\theta^{(n-2t)}+\sum_{t=0}^{\frac{\lambda _{1}-1}{2}}\eta_{1}^{(2t)}\left\{\theta^{(n-2t)},\theta^{(n-\lambda_{1}+1)}\right\}\] \[= \sum_{t=1}^{\frac{\lambda_{1}-1}{2}}\Big{\{}\eta_{1}^{(2t)}, \theta^{(n-\lambda_{1}+1)}\Big{\}}\,\theta^{(n-2t)}-\sum_{t=0}^{\frac{\lambda _{1}-1}{2}}\eta_{1}^{(2t)}\left\{\theta^{(n-\lambda_{1}+1)},\theta^{(n-2t)}\right\}\] \[= \sum_{t=1}^{\frac{\lambda_{1}-1}{2}}\sum_{k=0}^{t-1}\eta_{1}^{(2 k)}\theta^{(2t+n-\lambda_{1}-2k)}\theta^{(n-2t)}-\sum_{t=0}^{\frac{\lambda_{1}-1}{2}} \eta_{1}^{(2t)}\left\{\theta^{(n-\lambda_{1}+1)},\theta^{(n-2t)}\right\}\] \[= \sum_{t=1}^{\frac{\lambda_{1}-1}{2}}\sum_{k=0}^{t-1}\eta_{1}^{(2 k)}\theta^{(2t+n-\lambda_{1}-2k)}\theta^{(n-2t)}-\sum_{t=0}^{\frac{\lambda_{1}- 3}{2}}\eta_{1}^{(2t)}\left\{\theta^{(n-\lambda_{1}+1)},\theta^{(n-2t)}\right\}\] \[= \sum_{k=0}^{\frac{\lambda_{1}-3}{2}}\eta_{1}^{(2k)}\sum_{t=k+1}^ {\frac{\lambda_{1}-1}{2}}\theta^{(2t+n-\lambda_{1}-2k)}\theta^{(n-2t)}-\sum_{ t=0}^{\frac{\lambda_{1}-3}{2}}\eta_{1}^{(2t)}\left\{\theta^{(n-\lambda_{1}+1)}, \theta^{(n-2t)}\right\}.\] It is therefore enough to prove that, for fixed \(0\leq k\leq\frac{\lambda_{1}-3}{2}\), \[\sum_{j=k+1}^{\frac{\lambda_{1}-1}{2}}\theta^{(2j+n-\lambda_{1}-2k)}\theta^{(n -2j)}=\big{\{}\theta^{(n-\lambda_{1}+1)},\theta^{(n-2k)}\big{\}} \tag{4.2}\] Since \(n-\lambda_{1}+1\) and \(n-2k\) have the same parity, relation (3.5) implies that the right hand side of (4.2) equals: \[\frac{1}{2}\sum_{i=n-\lambda_{1}+1}^{n-2k-1}\theta^{(i)}\theta^{(\lambda_{2}-2k- i)}=\theta^{(n-\lambda_{1}+1)}\theta^{(n-2k-1)}+\cdots+\theta^{(\frac{\lambda_{2}-1- 2k}{2})}\theta^{(\frac{\lambda_{2}+1-2k}{2})}=\sum_{t=1}^{\frac{\lambda_{1}-1- 2k}{2}}\theta^{(n-\lambda_{1}+t)}\theta^{(n-2k-t)}.\] For \(k=0,\ldots,\frac{\lambda_{1}-3}{2}\) we define the index sets for the two summations \[I_{L}=\left\{j\in\mathbb{Z}\mid k+1\leqslant j\leqslant\frac{\lambda_{1}-1}{ 2}\right\},\quad I_{R}=\left\{t\in\mathbb{Z}\mid 1\leqslant t\leqslant\frac{ \lambda_{1}-1-2k}{2}\right\}.\] **Case 1.** If \(\frac{\lambda_{1}-1-2k}{2}\) is odd, there exists \(\bar{j}\in I_{L}\) such that \(m-2\bar{j}=\frac{\lambda_{2}-1-2k}{2}\), namely \(\bar{j}=\frac{\lambda_{1}+1+2k}{4}\). 
Then: \[\sum_{j\in I_{L}}\theta^{(2j+n-\lambda_{1}-2k)}\theta^{(n-2j)}=\sum_{j=k+1}^{ \bar{j}-1}\theta^{(2j+n-\lambda_{1}-2k)}\theta^{(n-2j)}+\sum_{j=\bar{j}}^{ \frac{\lambda_{1}-1}{2}}\theta^{(2j+n-\lambda_{1}-2k)}\theta^{(n-2j)}.\] By comparing pieces, one sees that: \[\sum_{j=k+1}^{\bar{j}-1}\theta^{(2j+n-\lambda_{1}-2k)}\theta^{(n-2j)}=\sum_{2 \mathbb{Z}\cap I_{R}}\theta^{(n-\lambda_{1}+t)}\theta^{(n-2k-t)}\] and since \(\mathbb{C}[\mathcal{S}_{e}]\) is commutative \[\sum_{j=\bar{j}}^{\frac{\lambda_{1}-1}{2}}\theta^{(2j+n-\lambda_{1}-2k)} \theta^{(n-2j)}=\sum_{(2\mathbb{Z}+1)\cap I_{R}}\theta^{(n-\lambda_{1}+t)} \theta^{(n-2k-t)},\] this concludes the proof in this case. **Case 2.** If \(\frac{\lambda_{1}-1-2k}{2}\) is even, there exists \(\bar{j}\in I_{L}\) such that \(2\bar{j}+n-\lambda_{1}-2k=\frac{\lambda_{2}-1-2k}{2}\), namely \(\bar{j}=\frac{\lambda_{1}-1+2k}{4}\). Then the proof mimics the one in the previous case: we observe that \[\sum_{j=k+1}^{\bar{j}}\theta^{(2j+n-\lambda_{1}-2k)}\theta_{(n-2j)}=\sum_{2 \mathbb{Z}\cap I_{R}}\theta^{(n-\lambda_{1}+t)}\theta^{(n-2k-t)}\] and applying commutativity in \(\mathbb{C}[\mathcal{S}_{e}]\) we have \[\sum_{j=\bar{j}+1}^{\frac{\lambda_{1}-1}{2}}\theta^{(2j+n-\lambda_{1}-2k)} \theta^{(n-2j)}=\sum_{(2\mathbb{Z}+1)\cap I_{R}}\theta^{(n-\lambda_{1}+t)} \theta^{(n-2k-t)}.\] Now since \(z\) is a Casimir, the Poisson ideal it generates coincides with the ideal which it generates when viewing \(\mathbb{C}[\mathcal{S}_{e}]\) as a commutative algebra. By Corollary 3.2, and using the same notation we initiated there, we have: **Corollary 4.5**.: _The kernel of the homomorphism \(\varphi\colon\mathbb{C}[\mathcal{S}_{e}]\to\mathbb{C}[\mathcal{S}_{e_{0}}]\) defined in (3.9) is generated by the single element \(z\). _ _Remark 4.6_.: Note that the results of this section are valid also when \(\lambda_{1}=1\): Theorem 3.1 holds for \(n\in\{1,2\}\) however the relations are significantly simpler in the case \(n=1\). ### The \(\Gamma\)-action on the orthogonal slice Throughout this section, we fix \(\mathfrak{g}:=\mathfrak{so}_{2n},n\geqslant 3\) and we let \(e\in\mathcal{N}(\mathfrak{g})\) with a two-block partition satisfying (ii) from Section 3.2 and \(z\) as in (4.1). Our goal is to examine the action of the outer automorphism group of \(\mathfrak{g}\) on \(z\), with respect to a carefully chosen splitting of the surjection \(\mathrm{O}_{2n}\twoheadrightarrow\mathrm{O}_{2n}\,/\,\mathrm{SO}_{2n}\). The set up which we use to make the necessary calculations is explained in [28, Part I, SS4]. For ease of reference we recap some of the main facts here. We begin by introducing a choice of embedding \(\mathfrak{g}\subseteq\mathfrak{gl}_{2n}\) and a nilpotent element \(e\in\mathfrak{g}\), a block diagonal nilpotent matrix with Jordan blocks of sizes \(\lambda_{1},\lambda_{2}\), with \(1\) or \(0\) on the super-diagonal and zeroes elsewhere. This is described in detail in [28, SS4.2]. The natural representation \(\mathbb{C}^{2n}\) of \(\mathfrak{g}\) has basis \(\{b_{i,j}\mid 1\leqslant i\leqslant 2,\ 1\leqslant j\leqslant\lambda_{i}\}\). 
The general linear Lie algebra \(\mathfrak{gl}_{2n}\) has a basis \[\{e_{i,j;k,l}\mid 1\leqslant i,k\leqslant 2,\ 1\leqslant j\leqslant\lambda_{i},\ 1\leqslant l\leqslant\lambda_{k}\} \tag{4.3}\] where \(e_{i,j;k,l}b_{r,s}=\delta_{k,r}\delta_{l,s}b_{i,j}\), The relations in \(\mathfrak{gl}_{2n}\) are given by the following formula \[[e_{i_{1},j_{1};k_{1},l_{1}},e_{i_{2},j_{2};k_{2},l_{2}}]=\delta_{k_{1},i_{2} }\delta_{l_{1},j_{2}}e_{i_{1},j_{1};k_{2},l_{2}}-\delta_{k_{2},i_{1}}\delta_{ l_{2},j_{1}}e_{i_{2},j_{2};k_{1},l_{1}} \tag{4.4}\] and \(\mathfrak{gl}_{2n}\) admits an involution \(\tau:\mathfrak{gl}_{2n}\to\mathfrak{gl}_{2n}\) determined by \[\tau(e_{i,j;k,l})=(-1)^{j-l-1}e_{k,\lambda_{k}+1-l;i,\lambda_{i}+1-j}. \tag{4.5}\] The subalgebra \(\mathfrak{g}:=\mathfrak{gl}_{2n}^{\tau}\) is isomorphic to \(\mathfrak{so}_{2n}\), see [28, (4.13)]. The nilpotent element \[e:=\sum_{i=1}^{2}\sum_{j=1}^{\lambda_{i}-1}e_{i,j;i,j+1} \tag{4.6}\] has two Jordan blocks of sizes \(\lambda_{1}\) and \(\lambda_{2}\). Furthermore \(e\) forms part of an \(\mathfrak{sl}_{2}\)-triple \(\{e,h,f\}\subseteq\mathfrak{g}\) and so \(\tau\) fixes each element of this triple. By the classification of \(\mathfrak{sl}_{2}\)-representations, the operator \(\mathrm{ad}(h)\) defines Dynkin gradings \(\mathfrak{g}=\bigoplus_{i\in\mathbb{Z}}\mathfrak{g}(i)\) and \(\mathfrak{gl}_{2n}=\bigoplus_{i\in\mathbb{Z}}\mathfrak{gl}_{2n}(i)\). To be precise we have \[\deg(e_{i,j;k,l})=2(l-j)+\lambda_{i}-\lambda_{k}. \tag{4.7}\] by [28, (4.5)]. Since \(\lambda_{1}-\lambda_{2}\) is even, it follows that the Dynkin grading is even, i.e. \(\mathfrak{g}(i)=\mathfrak{gl}_{2n}(i)=0\) for \(i\notin 2\mathbb{Z}\). We now introduce a splitting of \(\mathrm{O}_{2n}\twoheadrightarrow\mathrm{O}_{2n}\,/\,\mathrm{SO}_{2n}\). Let \(\gamma\in\mathrm{GL}_{2n}\) be the element defined by \[\gamma(b_{i,j}):=\left\{\begin{array}{cc}-b_{i,j}&\text{ if }i=1;\\ b_{i,j}&\text{ if }i=2.\end{array}\right.\] **Lemma 4.7**.: _Retain notation from the current section. Then:_ 1. \(\det(\gamma)=-1\) _and_ \(\gamma^{2}\) _is the identity._ 2. _For all_ \(i,j,k,l\) _as per (_4.3_) we have_ \[\mathrm{Ad}(\gamma)(e_{i,j;k,l})=\left\{\begin{array}{cc}-e_{i,j;k,l}&\text{ if }i\neq k;\\ e_{i,j;k,l}&\text{ if }i=k.\end{array}\right.\] 3. \(\gamma\) _lies in the orthogonal group_ \(\mathrm{O}_{2n}\subseteq\mathrm{GL}_{2n}\) _satisfying_ \(\mathrm{Lie}(\mathrm{O}_{2n})=\mathfrak{g}\)_._ 4. _If_ \(\widetilde{\Gamma}\subseteq\mathrm{O}_{2n}\) _denotes the subgroup generated by_ \(\gamma\) _then the map_ \(\widetilde{\Gamma}\to\mathrm{O}_{2n}\) _splits the surjection_ \(\mathrm{O}_{2n}\twoheadrightarrow\Gamma:=\mathrm{O}_{2n}\,/\,\mathrm{SO}_{2n}\)_. Furthermore_ \(\widetilde{\Gamma}\) _fixes_ \(e,h\) _and_ \(f\)_._ Proof.: Part (1) and (2) follow directly from the definitions. Comparing (2) with formula (4.5) we see that \(\operatorname{Ad}(\gamma)\) commutes with \(\tau\), hence \(\operatorname{Ad}(\gamma)\) preserves \(\mathfrak{g}\). Now (3) follows from the fact that \(\operatorname{O}_{2n}\subseteq\operatorname{GL}_{2n}\) is precisely the subgroup of \(\operatorname{GL}_{2n}\) preserving \(\mathfrak{g}\) under \(\operatorname{Ad}\). As we noted earlier \(\Gamma\) is a cyclic group of order two generated by the coset of any element of \(\operatorname{O}_{2n}\) of determinant \(-1\), and so the first part of (4) follows from (1) and (3). 
Examining (4.6) and [28, (4.5)] we see that \(\operatorname{Ad}(\gamma)\) fixes both \(e\) and \(h\), and by [8, Lemma 3.4.4] we see that \(\operatorname{Ad}(\gamma)\) also fixes \(f\). _Remark 4.8_.: If \(K\) is a simple algebraic group with \(\mathfrak{k}=\operatorname{Lie}(K)\) and \(e\in\mathcal{N}(\mathfrak{k})\) with an attached \(\mathfrak{sl}_{2}\)-triple \((e,h,f)\), Slodowy defines the _outer reductive centralizer_\(\operatorname{CA}(e)\) of \(e\) to be the elements of \(\operatorname{Aut}(\mathfrak{k})\) fixing the triple pointwise (cf. [27, SS7.6]). This is a possibly disconnected reductive group which acts via graded Poisson automorphisms on the Slodowy slice \(e+\mathfrak{k}^{f}\), and its nilpotent part as well, see the argument for [1, Lemma 4.1]. If \(e\) belongs to a characteristic nilpotent orbit, then the first part of the proof of [27, SS7.6, Lemma 2] carries over to this more general setting and yields the short exact sequence \(1\to C_{K}(e,h,f)\to\operatorname{CA}(e)\to\operatorname{Out}(\mathfrak{k})\to 1\), where \(C_{K}(e,h,f)\) is the pointwise stabiliser in \(K\) of the triple and \(\operatorname{Out}(\mathfrak{k})\) denotes the outer automorphism group of \(\mathfrak{k}\). For \(\mathfrak{k}=\mathfrak{so}_{2n}\) and \(e\) as in the current section, one can show that \(\widehat{\Gamma}\) is isomorphic to a subgroup of \(\operatorname{CA}(e)\) splitting the map \(\operatorname{CA}(e)\to\operatorname{Out}(\mathfrak{k})\). There is a parabolic Lie algebra \(\mathfrak{p}=\bigoplus_{i\geqslant 0}\mathfrak{g}(i)\) constructed from the Dynkin grading. Denote the nilradical of the opposite parabolic algebra by \(\mathfrak{n}=\bigoplus_{i<0}\mathfrak{g}(i)\). We recall some notation and facts from Sections 2.6 and 2.7: let \(\chi=\kappa(e)\in\mathfrak{g}^{*}\) and write \(\mathfrak{n}_{\chi}:=\{x-\chi(x)\mid x\in\mathfrak{n}\}\subseteq S(\mathfrak{ g})\cong\mathbb{C}[\mathfrak{g}^{*}]\). We introduced the Kazhdan grading on \(\mathbb{C}[\mathfrak{g}^{*}]\), defined by placing \(\mathfrak{g}(i)\) in degree \(i+2\). The decomposition \(S(\mathfrak{g})=S(\mathfrak{p})\oplus S(\mathfrak{g})\mathfrak{n}_{\chi}\) (see [5, Formula (8.4)]) gives rise to a projection operator \(S(\mathfrak{g})\twoheadrightarrow S(\mathfrak{p})\) which is \(\operatorname{Ad}(\gamma)\)-equivariant and Kazhdan graded, and this leads to a twisted \(\mathfrak{n}\)-action on \(S(\mathfrak{p})\) given by the composition \[S(\mathfrak{p})\stackrel{{\operatorname{ad}(x)}}{{\longrightarrow }}S(\mathfrak{g})\to S(\mathfrak{p})\] for \(x\in\mathfrak{n}\). We denote the invariants with respect to this action by \(S(\mathfrak{p})^{\operatorname{tw}(\mathfrak{n})}\). It follows from [9, Theorem 4.1], and is recapped with more detail in [5, SS1], that we have the following isomorphism \[S(\mathfrak{p})^{\operatorname{tw}(\mathfrak{n})}\cong\mathbb{C}[\mathcal{S} _{e}]. \tag{4.8}\] This depends entirely on the fact that the Dynkin grading is even. It is important to note that (4.8) respects the Kazhdan gradings, and is equivariant with respect to the group of automorphisms of \(\mathfrak{g}\) which fix the \(\mathfrak{sl}_{2}\)-triple \((e,h,f)\). 
These same remarks can all be carried out for the general linear Lie algebra: there is a parabolic subalgebra \(\bar{\mathfrak{p}}=\mathfrak{gl}_{2n}(\geqslant 0)\) and the opposite nilradical \(\bar{\mathfrak{n}}=\mathfrak{gl}_{2n}(<0)\) admits a twisted action on \(S(\bar{\mathfrak{p}})\), such that \[\mathbb{C}[e+\mathfrak{gl}_{2n}^{f}]\cong S(\bar{\mathfrak{p}})^{\operatorname {tw}(\bar{\mathfrak{n}})}\] as Kazhdan graded Poisson algebras. The involution \(\tau\) defined above extends to an involution on \(S(\bar{\mathfrak{p}})\) which preserves the \(\operatorname{tw}(\bar{\mathfrak{n}})\)-invariants. For concision we write \(S:=S(\bar{\mathfrak{p}})^{\operatorname{tw}\bar{\mathfrak{n}}}\). **Lemma 4.9**.: [28, (2.2) & Theorem 2.6] _Let \(\bar{\mathfrak{p}}_{-}\subseteq\bar{\mathfrak{p}}\) denote the subspace spanned by \(x\in\bar{\mathfrak{p}}\) such that \(\tau(x)=-x\). Consider the \(\operatorname{Ad}(\gamma)\)-equivariant surjective algebra homomorphism \(S(\bar{\mathfrak{p}})\to S(\mathfrak{p})=S(\bar{\mathfrak{p}})/(\bar{ \mathfrak{p}}_{-})\). This restricts to a surjective Poisson algebra homomorphism \(S^{\tau}\twoheadrightarrow S(\mathfrak{p})^{\operatorname{tw}(\mathfrak{n})}\)._ Now we describe elements of \(S^{\tau}\) which map to the generators \(\eta_{i}^{(2r)}\) and \(\theta^{(r)}\) of \(S(\mathfrak{p})^{\operatorname{tw}(\mathfrak{n})}\) described in Theorem 3.1, under the homomorphism appearing in Lemma 4.9. The following elements were first introduced in the enveloping algebra \(U(\bar{\mathfrak{p}})\) in a slightly different language in [5, SS9, (1)-(6)]. The current set-up is identical to [28, SS4.3]. For \(1\leqslant i,k\leqslant 2\), \(0\leqslant x\leqslant 1\) and \(r>0\), we let \[t_{i,k;x}^{(r)}:=\sum_{s=1}^{r}(-1)^{r-s}\sum_{\begin{subarray}{c}(i_{m},j_{m},k _{m},l_{m})\\ m=1,...,s\end{subarray}}(-1)^{\#\{q=1,...,s-1|k_{q}\leqslant x\}}e_{i_{1},j_{1} ;k_{1},l_{1}}\cdots e_{i_{s},j_{s};k_{s},l_{s}}\in S(\bar{\mathfrak{p}}),\] where the sum is taken over all indexes such that for \(m=1,...,s\) \[1\leqslant i_{1},...,i_{s},k_{1},...,k_{s}\leqslant n,1\leqslant j_{m} \leqslant\lambda_{i_{m}}\text{ and }1\leqslant l_{m}\leqslant\lambda_{k_{m}} \tag{4.9}\] satisfying the following six conditions: * \(\sum_{m=1}^{s}(2l_{m}-2j_{m}+\lambda_{i_{m}}-\lambda_{k_{m}})=2(r-s)\); * \(2l_{m}-2j_{m}+\lambda_{i_{m}}-\lambda_{k_{m}}\geqslant 0\text{ for each }m=1,\ldots,s\); * if \(k_{m}>x\), then \(l_{m}<j_{m+1}\) for each \(m=1,\ldots,s-1\); * if \(k_{m}\leqslant x\) then \(l_{m}\geqslant j_{m+1}\) for each \(m=1,\ldots,s-1\); * \(i_{1}=i\), \(k_{s}=k\); * \(k_{m}=i_{m+1}\) for each \(m=1,\ldots,s-1\). **Lemma 4.10**.: _These satisfy the following properties:_ 1. _the elements_ \(t_{i,k;x}^{(r)}\) _lie in_ \(S(\bar{\mathfrak{p}})^{\mathrm{tw}(\bar{\mathfrak{n}})}\)_._ 2. _the elements_ \(t_{i,i;i-1}^{(2r)}\) _and_ \(t_{1,2;1}^{(r)}+(-1)^{r+\frac{\lambda_{2}-\lambda_{1}}{2}}t_{2,1;1}^{(r)}\) _are invariant under the action of_ \(\tau\) _on_ \(S(\bar{\mathfrak{p}})\)_._ 3. 
_The map_ \(S^{\tau}\to S(\mathfrak{p})^{\mathrm{tw}(\mathfrak{n})}=\mathbb{C}[\mathcal{S }_{e}]\) _from Lemma_ 4.9 _has the following effect:_ \[t_{i,i;i-1}^{(2r)} \longmapsto \eta_{i}^{(2r)},\] \[t_{1,2;1}^{(r)}+(-1)^{r+\frac{\lambda_{2}-\lambda_{1}}{2}}t_{2,1;1}^{(r)} \longmapsto \theta^{(r)}.\] _In other words the elements on the left hand side map to elements of_ \(\mathbb{C}[\mathcal{S}_{e}]\) _satisfying the relations of the symbols on the right, which are given in Theorem_ 3.1_._ Proof.: Parts (1) and (2) both follow directly from [28, Proposition 4.5]. Part (3) follows from formulas (3.59), (3.61), (4.24) and Propositions 4.7 and 4.8 of _op. cit._ **Proposition 4.11**.: _We have \(\mathrm{Ad}(\gamma)\theta^{(r)}=-\theta^{(r)}\) and \(\mathrm{Ad}(\gamma)\eta_{i}^{(2r)}=\eta_{i}^{(2r)}\) for \(i=1,2\). If we define \(z\in\mathbb{C}[\mathcal{S}_{e}]\) via formula (4.1) then_ \[\mathrm{Ad}(\gamma)z=-z.\] Proof.: By (4.7) and Lemma 4.7(2) we see that \(\mathrm{Ad}(\gamma)\) preserves \(S(\bar{\mathfrak{p}})\). Now using Lemma 4.7(2) and the combinatorial condition (e) appearing in the definition of \(t_{i,k;x}^{(r)}\), it is easy to see that factors \(e_{i,j;k,l}\) with \(i\neq k\) occur with even parity in every summand of the expressions for \(t_{i,i;i-1}^{(2r)}\), whilst factors \(e_{i,j;k,l}\) with \(i\neq k\) occur with odd parity in the expressions for both \(t_{1,2;1}^{(r)}\) and \(t_{2,1;1}^{(r)}\). By Lemma 4.7(2) we see that \(\mathrm{Ad}(\gamma)(t_{i,i;i-1}^{(2r)})=t_{i,i;i-1}^{(2r)}\) and \(\mathrm{Ad}(\gamma)(t_{1,2;1}^{(r)}+(-1)^{r+\frac{\lambda_{2}-\lambda_{1}}{2}}t_{2,1;1}^{(r)})=-(t_{1,2;1}^{(r)}+(-1)^{r+\frac{\lambda_{2}-\lambda_{1}}{2}}t_{2,1;1}^{(r)})\). Since the map \(S^{\tau}\to S(\mathfrak{p})^{\mathrm{tw}(\mathfrak{n})}\) described in Lemma 4.9 is \(\mathrm{Ad}(\gamma)\)-equivariant, it follows from Lemma 4.10(3) that \(\mathrm{Ad}(\gamma)\) acts as claimed on \(\theta^{(r)},\eta_{i}^{(2r)}\in S(\mathfrak{p})^{\mathrm{tw}(\mathfrak{n})}=\mathbb{C}[\mathcal{S}_{e}]\). Now the claim \(\mathrm{Ad}(\gamma)z=-z\) is a direct consequence of (4.1). Recall the Pfaffian \(p\in\mathbb{C}[\mathfrak{g}]^{G}\) from Section 4.1, and write \(p|_{\mathcal{S}_{e}}\) for the restriction to the slice. **Proposition 4.12**.: _We have \(z=p|_{\mathcal{S}_{e}}\) up to a non-zero scalar._ Proof.: By Proposition 4.4 we know that \(z\) is Poisson central in \(\mathbb{C}[\mathcal{S}_{e}]\). By Theorem 3.1 we also know that \(z\) lies in Kazhdan degree \(2n\). The proof concludes by combining Lemma 4.3, Lemma 4.7(4) and Proposition 4.11. ## 5. Exceptional isomorphisms and equivariant quantizations in type \(\mathsf{C}\) ### The case of partitions with two even parts We now fix the notation required to prove the main theorems of the paper. Let \(n\geq 3\), and set \(\tilde{G}:=\mathrm{O}_{2n}\), \(G:=\mathrm{SO}_{2n}\) and \(G_{0}:=\mathrm{Sp}_{2n-2}\). Put \(\mathfrak{g}:=\mathrm{Lie}\,G\) and \(\mathfrak{g}_{0}:=\mathrm{Lie}\,G_{0}\). Fix a partition \((\lambda_{1},\lambda_{2})\vdash 2n\) satisfying (ii) of Section 3.2. Pick elements \(e\in\mathcal{N}(\mathfrak{g})\) and \(e_{0}\in\mathcal{N}(\mathfrak{g}_{0})\) with orbits labelled by \((\lambda_{1},\lambda_{2})\) and \((\lambda_{1}-1,\lambda_{2}-1)\) respectively. Write \(\mathcal{N}_{e}:=\mathcal{S}_{e}\cap\mathcal{N}(\mathfrak{g})\) and \(\mathcal{N}_{e_{0}}:=\mathcal{S}_{e_{0}}\cap\mathcal{N}(\mathfrak{g}_{0})\). 
**Theorem 5.1**.: _The Poisson varieties \(\mathcal{N}_{e}\subset\mathfrak{g}\) and \(\mathcal{N}_{e_{0}}\subset\mathfrak{g}_{0}\) are isomorphic._ Proof.: We have the following equalities for dimensions of orbits. \[\dim G\cdot e=\dim\mathfrak{g}-\dim\mathfrak{g}^{e},\ \ \ \ \ \dim G_{0}\cdot e_{0}=\dim \mathfrak{g}_{0}-\dim\mathfrak{g}_{0}^{e_{0}}.\] Using the transversality of the Slodowy slices we have that \(\dim\mathcal{N}_{e}=\mathrm{codim}_{\mathcal{N}(\mathfrak{g})}(G\cdot e)\), and similarly for \(\mathcal{N}_{e_{0}}\). It now follows from [12, SS3.3(3)] that \[\dim\mathcal{N}_{e}=\lambda_{1}=\dim\mathcal{N}_{e_{0}}. \tag{5.1}\] Write \(\varphi\colon\mathbb{C}[\mathcal{S}_{e}]\to\mathbb{C}[\mathcal{S}_{e_{0}}]\) for the Kazhdan graded homomorphism (3.9). If we denote by \(I\subseteq\mathrm{Cas}\,\mathbb{C}[\mathcal{S}_{e}]\) the unique maximal graded ideal of the Poisson centre, and similarly for \(I_{0}\subseteq\mathrm{Cas}\,\mathbb{C}[\mathcal{S}_{e_{0}}]\), then we have \(\varphi(I)\subseteq I_{0}\). It follows from the remarks of Section 2.6 that \(I\) generates the defining ideal of \(\mathcal{N}_{e}\) inside \(\mathbb{C}[\mathcal{S}_{e}]\) and similarly \(I_{0}\) generates the defining ideal of \(\mathcal{N}_{e_{0}}\) inside \(\mathbb{C}[\mathcal{S}_{e_{0}}]\). Now from the remarks of the previous paragraph we see that the morphism of Poisson varieties \(\mathcal{S}_{e_{0}}\to\mathcal{S}_{e}\) induced by \(\varphi\) restricts to a closed embedding \(\mathcal{N}_{e_{0}}\hookrightarrow\mathcal{N}_{e}\). By [23, Theorem 5.4(ii)] we know that \(\mathcal{N}_{e}\) is an irreducible variety, and so from (5.1) it follows that \(\mathcal{N}_{e_{0}}\to\mathcal{N}_{e}\) is an isomorphism of Poisson varieties. ### Twisted Yangians and the case of two odd parts Keep the notation \(\mathfrak{g}=\mathfrak{so}_{2n}\) and \(\mathfrak{g}_{0}=\mathfrak{sp}_{2n-2}\), and now assume \(n\geq 4\) is even. Let \(\lambda=(n,n)\), let \(\lambda_{0}=(n-1,n-1)\) and pick nilpotent elements \(e\in\mathcal{N}(\mathfrak{g})\) and \(e_{0}\in\mathcal{N}(\mathfrak{g}_{0})\) in orbits with partitions \(\lambda\) and \(\lambda_{0}\) respectively. We remark that there are precisely two \(\mathrm{SO}_{2n}\)-orbits with partition \(\lambda\) (see the remarks following [8, Theorem 5.1.6]) and \(e\) may be chosen in either of these. We prove the last remaining case of Theorem 1.1 (1). **Theorem 5.2**.: \(\mathcal{N}_{e_{0}}\cong\mathcal{N}_{e}\) _as Poisson \(\mathbb{C}^{\times}\)-varieties._ The method we used in Theorem 5.1 depended entirely on Theorem 3.1, which only applies in the case where \(\lambda\) has parts of odd size. In order to carry out the argument in our setting, we recall a quantum analogue of Theorem 3.1 which applies to nilpotent elements which have all Jordan blocks of the same size. This leads to an analogue of Corollary 3.2 in our setting. The twisted Yangian \(Y_{n}^{-}\) is a non-commutative algebra introduced by Olshanski [22], see [19] for a good survey of its properties. It is generated by symbols \(S_{i,j}^{(r)}\) with \(i,j=1,2,..,n\) and \(r>0\) subject to the _quaternary_ and _symmetry_ relations [19, (2.6), (2.7)]. The canonical filtration on \(Y_{n}^{-}\) places \(S_{i,j}^{(r)}\) in degree \(r\). The following is a special case of [4, Theorem 1.2 & 2.3] **Theorem 5.3**.: 1. _There is a surjective filtered algebra homomorphism_ \(Y_{2}^{-}\twoheadrightarrow U(\mathfrak{g},e)\) _with kernel generated by_ \[\{S_{i,j}^{(r)}\mid r>n\}.\] (5.2) _._ 2. 
_There is a surjective filtered algebra homomorphism_ \(Y_{2}^{-}\twoheadrightarrow U(\mathfrak{g}_{0},e_{0})\) _with kernel generated by_ \[\{S_{i,j}^{(r)}-\frac{1}{2}S_{i,j}^{(r-1)}\mid r>n-1\}.\] (5.3) We let \(y_{2}^{-}=\operatorname{gr}Y_{2}^{-}\) be the associated graded algebra with respect to the canonical filtration. It is a commutative algebra and comes equipped with a Poisson structure, see [1, (2.2)], [19, Remark 2.4.5]. From the above we can now deduce the existence of the required map between Slodowy slices. Let \((e,h,f)\) and \((e_{0},h_{0},f_{0})\) be choices of \(\mathfrak{sl}_{2}\)-triples for \(e\) and \(e_{0}\) respectively. **Corollary 5.4**.: _There is a \(\mathbb{C}^{\times}\)-equivariant closed Poisson embedding \(e_{0}+\mathfrak{g}_{0}^{f_{0}}\hookrightarrow e+\mathfrak{g}^{f}\)._ Proof.: Consider the associated graded maps of the surjections appearing in Theorem 5.3. Write \(I\) for the kernel of \(y_{2}^{-}\twoheadrightarrow\mathbb{C}[e+\mathfrak{g}^{f}]\) and \(I_{0}\) for the kernel of the map \(y_{2}^{-}\twoheadrightarrow\mathbb{C}[e_{0}+\mathfrak{g}_{0}^{f_{0}}]\). We claim that we have an inclusion of graded ideals \(I\subseteq I_{0}\), and this will imply the corollary. Write \(s_{i,j}^{(r)}\) for the image of \(S_{i,j}^{(r)}\) in \(y_{2}^{-}\). The claim will follow if we can show that \(I\) is generated by \(\{s_{i,j}^{(r)}\mid r>n\}\) and \(I_{0}\) is generated by \(\{s_{i,j}^{(r)}\mid r>n-1\}\), as associative algebra ideals. Examining (5.2) and (5.3) it is clear that the specified elements lie in \(I\) and \(I_{0}\) respectively. The fact that they generate can be easily deduced from [4, Theorem 2.3]. Proof of Theorem 5.2.: Following the same line of reasoning as the first part of the proof of Theorem 5.1 we see that \(\dim\mathcal{N}_{e_{0}}=\dim\mathcal{N}_{e}\). It follows that the inclusion from Corollary 5.4 is actually an isomorphism. ### Equivariant deformations and quantizations In this final section we prove Theorem 1.3. Let \(n\geqslant 3\), and \(\mathfrak{g}\coloneqq\mathfrak{so}_{2n}\) and \(\mathfrak{g}_{0}\coloneqq\mathfrak{sp}_{2n-2}\). Fix a partition \(\lambda=(\lambda_{1},\lambda_{2})\vdash 2n\) satisfying (ii) from Section 3.2, so that \(\lambda_{0}=(\lambda_{1}-1,\lambda_{2}-1)\) satisfies (i). Pick \(\mathfrak{sl}_{2}\)-triples \((e,h,f)\) in \(\mathfrak{g}\) and \((e_{0},h_{0},f_{0})\) in \(\mathfrak{g}_{0}\) for these partitions. In Sections 2.6 and 2.7, we explained that \((\mathbb{C}[\mathcal{S}_{e_{0}}],\iota)\in\operatorname{PD}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}(\mathbb{C}[\mathfrak{g}_{0}]^{G_{0}})\) and \((U(\mathfrak{g}_{0},e_{0}),\iota)\in\operatorname{Q}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}(Z(\mathfrak{g}_{0}))\). By Lemma 4.7(4) there is a subgroup \(\Gamma\subset\tilde{G}\) which splits the outer automorphism group of \(\mathfrak{g}\), and fixes \((e,h,f)\) pointwise. The \(\Gamma\)-action on \(\mathcal{S}_{e}\) stabilises \(\mathbb{C}[\mathfrak{g}]^{G}\). Consider the composition \[\psi\colon\mathbb{C}[\mathfrak{g}]^{G}\to\operatorname{Cas}\mathbb{C}[\mathcal{S}_{e}]\stackrel{{\varphi}}{{\longrightarrow}}\operatorname{Cas}\mathbb{C}[\mathcal{S}_{e_{0}}]\to\mathbb{C}[\mathfrak{g}_{0}]^{G_{0}} \tag{5.4}\] where the homomorphism \(\varphi\) is described in (3.9), and the other two maps arise from Lemma 2.1. 
**Lemma 5.5**.: _The map \(\psi\) is surjective and its kernel coincides with the kernel of the quotient of \(\mathbb{C}[\mathfrak{g}]^{G}\) to the \(\Gamma\)-coinvariants, therefore the algebras \(\mathbb{C}[\mathfrak{g}_{0}]^{G_{0}}\) and \((\mathbb{C}[\mathfrak{g}]^{G})_{\Gamma}\) are graded isomorphic._ Proof.: We have the equality \(\ker\varphi\cap\operatorname{Cas}\mathbb{C}[\mathcal{S}_{e}]=(z)\), by Corollary 4.5, whilst by Proposition 4.12 we have \((p|_{\mathcal{S}_{e}})=(z)\) as ideals in \(\mathbb{C}[\mathcal{S}_{e}]\). This implies that \(\ker\psi=(p)\), where \(p\) denotes the Pfaffian. Since \(p\) is a homogeneous generator of \(\mathbb{C}[\mathfrak{g}]^{G}\), and since the dimensions of the graded components of \(\mathbb{C}[\mathfrak{g}]^{G}\) and \(\mathbb{C}[\mathfrak{g}_{0}]^{G_{0}}\) are uniquely determined by the degrees of the homogeneous generators, we obtain surjectivity of the map \(\psi\) by comparison of the graded components. By Lemma 4.1 we see that the algebra of \(\Gamma\)-coinvariants \((\mathbb{C}[\mathfrak{g}]^{G})_{\Gamma}\) is obtained from \(\mathbb{C}[\mathfrak{g}]^{G}\) after quotienting with the ideal \((p)\). This completes the argument. By Theorem 5.1 we may regard \(\Gamma\) as a group of Kazhdan graded Poisson automorphisms of \(\mathbb{C}[\mathcal{N}_{e}]\simeq\mathbb{C}[\mathcal{N}_{e_{0}}]\). Thus we may consider the fixed-points functors \(\operatorname{PD}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}^{\Gamma}\) and \(\operatorname{Q}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}^{\Gamma}\) from Sections 2.4 and 2.5. _Remark 5.6_.: Observe that, in the above situation, \(e\) is regular if and only if \(e_{0}\) is regular, if and only if \(\mathcal{N}_{e}=\{e\}\) if and only if \(\mathcal{N}_{e_{0}}=\{e_{0}\}\), so Theorem 5.1 holds trivially. Moreover, in such cases the deformation theory of \(\mathcal{N}_{e_{0}}\simeq\mathcal{N}_{e}\) degenerates to a triviality, see also the remark following [1, Theorem 3.5]. For this reason, in what follows, we exclude the regular case. We now prove Theorem 1.3. **Theorem 5.7**.: _Let \(\mathfrak{g}_{0},e_{0}\) be as fixed in this section, and suppose that \(e_{0}\) is not regular._ 1. \(\mathrm{PD}^{\Gamma}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}\) _is represented by_ \(\mathbb{C}[\mathfrak{g}_{0}]^{G_{0}}\) _and_ \((\mathbb{C}[\mathcal{S}_{e_{0}}],\iota_{0})\) _is a universal element._ 2. \(\mathrm{Q}^{\Gamma}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}\) _is represented by_ \(Z(\mathfrak{g}_{0})\) _and_ \((U(\mathfrak{g}_{0},e_{0}),\iota_{0})\) _is a universal element._ Proof.: Since \(\mathcal{N}_{e}\) is a conical symplectic singularity the functors \(\mathrm{PD}_{\mathbb{C}[\mathcal{N}_{e}]}\) and \(\mathrm{Q}_{\mathbb{C}[\mathcal{N}_{e}]}\) are both representable (Section 2.5). In [1, Theorem 3.5 & 3.6] it was demonstrated that they are both represented over \(\mathbb{C}[\mathfrak{g}]^{G}\) and \(Z(\mathfrak{g})\) respectively, and that we have the following universal elements (see Sections 2.6 and 2.7): 1. \((\mathbb{C}[\mathcal{S}_{e}],\iota)\in\mathrm{PD}_{\mathbb{C}[\mathcal{N}_{e} ]}(\mathbb{C}[\mathfrak{g}]^{G})\) 2. \((U(\mathfrak{g},e),\iota)\in\mathrm{Q}_{\mathbb{C}[\mathcal{N}_{e}]}(Z( \mathfrak{g}))\) By Theorem 5.1 the same remarks hold if we replace \(e\) by \(e_{0}\). As we explained in Section 2.5 the functor \(\mathrm{PD}^{\Gamma}_{\mathbb{C}[\mathcal{N}_{e_{0}}]}\) is represented over the base \((\mathbb{C}[\mathfrak{g}]^{G})_{\Gamma}\). 
Furthermore, a universal element is given by the base change of \((\mathbb{C}[\mathcal{S}_{e}],\iota)\) through the natural map \(\mathbb{C}[\mathfrak{g}]^{G}\to(\mathbb{C}[\mathfrak{g}]^{G})_{\Gamma}\). Now part (1) of the Theorem follows from Lemma 5.5. Since \(U(\mathfrak{g}_{0},e_{0})\) is a filtered quantization of \(\mathbb{C}[\mathcal{S}_{e_{0}}]\), part (2) of the Theorem follows from [1, Theorem 1.1]. _Remark 5.8_.: Although \(\mathcal{N}_{e}\cong\mathcal{N}_{e_{0}}\) when \(e\) has Jordan type \((n,n)\), \(e_{0}\) has type \((n-1,n-1)\) and \(n\) is even (by Theorem 5.2), there can be no analogue of Theorem 5.7 formulated for a group of automorphisms \(\Gamma\subseteq\mathrm{CA}(e)\) (see Remark 4.8 for notation). The orbit of \(e\) is not characteristic in \(\mathfrak{g}\) and the outer reductive centralizer satisfies \(\mathrm{CA}(e)\subset\mathrm{SO}_{2n}\). Hence the action of \(\mathrm{CA}(e)\) on \(\mathbb{C}[\mathcal{S}_{e}]\) restricts to a trivial action on the Poisson centre, and for any subgroup \(\Gamma\subset\mathrm{CA}(e)\) the universal element for \(\mathrm{PD}^{\Gamma}(\mathbb{C}[\mathcal{N}_{e_{0}}])\) is \(\mathbb{C}[\mathcal{S}_{e}]\). The same remarks apply in the non-commutative setting, thanks to [1, Theorem 1.1]. It would be interesting to know whether \(\mathcal{N}_{e_{0}}\) is equipped with any symmetries which do not come from restricting elements of \(\mathrm{Aut}(\mathfrak{sp}_{2n-2})\) or of \(\mathrm{Aut}(\mathfrak{so}_{2n})\), which could serve as a replacement for \(\Gamma\) when trying to formulate such a statement.
2306.08877
Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment
Text-conditioned image generation models often generate incorrect associations between entities and their visual attributes. This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image. As one notable example, a query like "a pink sunflower and a yellow flamingo" may incorrectly produce an image of a yellow sunflower and a pink flamingo. To remedy this issue, we propose SynGen, an approach which first syntactically analyses the prompt to identify entities and their modifiers, and then uses a novel loss function that encourages the cross-attention maps to agree with the linguistic binding reflected by the syntax. Specifically, we encourage large overlap between attention maps of entities and their modifiers, and small overlap with other entities and modifier words. The loss is optimized during inference, without retraining or fine-tuning the model. Human evaluation on three datasets, including one new and challenging set, demonstrate significant improvements of SynGen compared with current state of the art methods. This work highlights how making use of sentence structure during inference can efficiently and substantially improve the faithfulness of text-to-image generation.
Royi Rassin, Eran Hirsch, Daniel Glickman, Shauli Ravfogel, Yoav Goldberg, Gal Chechik
2023-06-15T06:21:44Z
http://arxiv.org/abs/2306.08877v3
# Linguistic Binding in Diffusion Models: ###### Abstract Text-conditioned image generation models often generate incorrect associations between entities and their visual attributes. This reflects an impaired mapping between _linguistic binding_ of entities and modifiers in the prompt and _visual binding_ of the corresponding elements in the generated image. As one notable example, a query like "a _pink sunflower_ and a _yellow flamingo_" may incorrectly produce an image of a _yellow sunflower_ and a _pink flamingo_. To remedy this issue, we propose _SynGen_, an approach which first syntactically analyses the prompt to identify entities and their modifiers, and then uses a novel loss function that encourages the cross-attention maps to agree with the linguistic binding reflected by the syntax. Specifically, we encourage large overlap between attention maps of entities and their modifiers, and small overlap with other entities and modifier words. The loss is optimized during inference, without retraining or fine-tuning the model. Human evaluation on three datasets, including one new and challenging set, demonstrate significant improvements of SynGen compared with current state of the art methods. This work highlights how making use of sentence structure during inference can efficiently and substantially improve the faithfulness of text-to-image generation.1 Footnote 1: We make our code publicly available [https://github.com/RoyiRa/Syntax-Guided-Generation](https://github.com/RoyiRa/Syntax-Guided-Generation) ## 1 Introduction Diffusion models for text-conditioned image generation produce impressive realistic images [1; 2; 3; 4]. Users control the generated content through natural-language text prompts that can be rich and complex. Unfortunately, in many cases the generated images are not faithful to the text prompt [5; 6]. Specifically, one very common failure mode results from **improper binding**, where _modifier_ words fail to influence the visual attributes of the _entity-nouns_ to which they are grammatically related. As an illustration, consider the prompt "a _pink sunflower_ and a _yellow flamingo_". Given this prompt, current models often confuse the modifiers of the two entity-nouns, and generate an image of a _yellow_ sunflower and a _pink_ flamingo (Fig. 1, bottom left, semantic leak in prompt). In other cases, the attribute may semantically leak to areas in the image that are not even mentioned in the prompt (Fig. 1, bottom center, semantic leak outside prompt) or the attribute may be completely neglected and missed from the generated image (Fig. 1, bottom right, attribute neglect). Such mismatch can be addressed by providing non-textual control like visual examples [7; 8], but the problem of correctly controlling generated images using text remains open. A possible reason for these failures is that diffusion models use text encoders like CLIP [9], which are known to fail to encode linguistic structure [10]. This makes the diffusion process "blind" to the linguistic bindings, and as a result, generate objects that do not match their described attributes. Building on this intuition, we propose to make the generation process aware of the linguistic structure of the prompt. Specifically, we suggest to intervene with the generation process by steering the cross-attention maps of the diffusion model. These cross-attention map serve as a link between prompt terms and the set of image pixels that correspond to these terms. 
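For concreteness, the quantity we manipulate is a per-token spatial attention map over the latent patches. Below is a minimal sketch (not the authors' released code) of how such maps can be formed from a patches-to-tokens cross-attention matrix, following the normalization spelled out in Appendix A; the function and variable names are ours, and how the raw attention probabilities are captured from the U-Net (e.g., via forward hooks on its cross-attention layers) is left as an assumption.

```python
import torch

def token_to_patch_maps(attn_patches_to_tokens: torch.Tensor) -> torch.Tensor:
    """attn_patches_to_tokens: (D*D, N) attention from each latent patch to each of N tokens.

    Returns an (N, D*D) matrix whose i-th row is a distribution over patches for token i,
    obtained by transposing and row-normalizing (see Appendix A).
    """
    scores = attn_patches_to_tokens.transpose(0, 1)       # (N, D*D) scores per token
    return scores / scores.sum(dim=-1, keepdim=True)      # each row sums to 1
```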
Our linguistics-based approach therefore aims to generate an image where the **visual bindings** between objects and their visual attributes adhere to the **syntactic bindings** between entity-nouns and their modifiers in the prompt. Several previous work devised solutions to improve the relations between prompt terms and visual components, with relative success [11; 12; 13], but did not focus on the problem of modifier-entity binding. Our approach specifically addresses this issue, by constructing a novel loss function that quantifies the distance between the attention patterns of grammatically-related (modifier, entity-noun) pairs, and the distance between pairs of unrelated words in the prompt. We then optimize the latent denoised image in the direction that separates the attention map of a given modifier from unrelated tokens and bring it closer to the noun to which it is grammatically related. We show that by intervening in the latent code, we can markedly improve the pairing between attributes and objects in the generated image while at the same time not compromising the quality of the generated image. We evaluate our method on three datasets. (1) For a natural language setting, we use the natural compositional prompts in the ABC-6K benchmark [13]; (2) To provide direct comparison with previous state-of-the-art [11], we replicated prompts from their setting; (3) Finally, to evaluate binding in a challenging setting, we design a set of prompts that includes a variety of modifiers and entity-nouns. On all datasets, we find that SynGen shows significant improvement in performance based on human evaluation, sometimes doubling the accuracy. Overall, our work highlights the effectiveness of incorporating linguistic information into text-conditioned image generation models and demonstrates a promising direction for future research in this area. Figure 1: Visual bindings of objects and their attributes may fail to match the linguistic bindings between entities and their modifiers. Our approach, SynGen, corrects these errors by matching the cross-attention maps of entities and their modifiers. The main contributions of this paper are as follows: (1) A novel method to enrich the diffusion process with syntactic information, using inference-time optimization with a loss over cross-attention maps; (2) A new challenge set of prompts containing a rich number and types of modifiers and entities. ## 2 Syntax-Guided Generation Our approach, which we call SynGen, builds on two key ideas. First, it is easy to analyze the syntactic structure of natural language prompts to identify bindings of entity-nouns and their modifiers. Second, one can steer the generation of images to adhere to these bindings by designing an appropriate loss over the cross-attention maps of the diffusion model. We describe the two steps of our approach: extracting syntactic bindings and then using them to control generation. ### Identifying entity-nouns and their modifiers To identify entity-nouns and their corresponding modifiers, we traverse the syntactic dependency graph, which defines the syntactic relation between words in the sentence. Concretely, we parse the prompt using spaCy's transformer-based dependency parser [14] and identify all entity-nouns (either proper-nouns or common-nouns) that are not serving as direct modifiers of other nouns. These are the nouns that correspond to objects in the generated image (such as cats and dogs). We then recursively collect all modifiers2 of the noun into its modifier set. 
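A minimal sketch of this extraction step is given below; the modifier dependency labels it uses are those listed in the next paragraph and its footnote. The helper names are ours, and the released implementation may handle coordination and other edge cases differently.

```python
import spacy

MODIFIER_DEPS = {"amod", "nmod", "compound", "npadvmod", "conj"}
nlp = spacy.load("en_core_web_trf")  # spaCy's transformer-based dependency parser

def collect_modifiers(noun, mods):
    """Recursively gather the modifier descendants of an entity-noun."""
    for child in noun.children:
        if child.dep_ in MODIFIER_DEPS:
            # Coordinated nouns ("... and a yellow flamingo") are separate entities,
            # not modifiers of the first noun, so skip noun-valued conj children.
            if child.dep_ == "conj" and child.pos_ in {"NOUN", "PROPN"}:
                continue
            mods.append(child)
            collect_modifiers(child, mods)
    return mods

def extract_bindings(prompt):
    doc = nlp(prompt)
    bindings = []
    for tok in doc:
        # Top-level entity-nouns: nouns that do not themselves modify another noun
        # (a conj attachment does not disqualify a noun from being top-level).
        if tok.pos_ in {"NOUN", "PROPN"} and tok.dep_ not in MODIFIER_DEPS - {"conj"}:
            mods = collect_modifiers(tok, [])
            if mods:
                bindings.append((tok.text, [m.text for m in mods]))
    return bindings

# extract_bindings("a pink sunflower and a yellow flamingo")
# -> roughly [("sunflower", ["pink"]), ("flamingo", ["yellow"])]
```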
The set of modifier-labels includes a range of syntactic relations between nouns and their modifiers, such as adjectival modification (amod; "the _regal_ dog"), compounds (compound; "the _treasure_ map"), nominal modification through an intervening marker (nmod), adverbial modifiers (npadvmod; "A _watermelon_-styled chair"), and coordination between modifiers (conj; "A _black and white_ dog"). Footnote 2: We consider modifiers from the set {amod, nmod, compound, npadvmod, conj}. We exclude the conj when determining the top-level nouns. ### Controlling generation with language-driven cross-attention losses Consider a pair of a noun and its modifier. We expect the cross-attention map of the modifier to largely overlap with the cross-attention map of the noun, while remaining largely disjoint with the maps corresponding to other nouns and modifiers. To encourage the denoising process to obey these spatial relations between the attention maps, we design a loss that operates on all cross-attention maps. We then use this loss with a pretrained diffusion model during inference. Specifically, we optimize the _noised latents_ by taking a gradient step to reduce that loss. See illustration in Fig. 2. Fig. 3 illustrates the effect of the loss over the cross-attention maps. Figure 2: The SynGen workflow and architecture. (a) The text prompt is analyzed to extract entity-nouns and their modifiers. (b) SynGen adds intermediate steps to the diffusion denoising process. In each such step, we update the latent representation to minimize a loss over the cross-attention maps of entity-nouns and their modifiers (Eq 3). Loss functions: Consider a text prompt with \(N\) tokens, for which our analysis extracted \(k\) non-modifier sets \(\{S_{1},S_{2},\ldots,S_{k}\}\). Let \(P(S_{i})\) represent all pairs \((m,n)\) of tokens between the noun root \(n\) and its modifier descendants \(m\) in the \(i\)-th set \(S_{i}\). For illustration, the set of "A black striped dog" contains two pairs ("black", "dog") and ("striped", "dog"). Next, denote by \(\{A_{1},A_{2},\ldots,A_{N}\}\) the attention maps of all \(N\) tokens in the prompt, and denote by \(dist(A_{m},A_{n})\) a measure of distance (lack of overlap) between attention maps \(A_{m}\) and \(A_{n}\). Our first loss aims to minimize that distance (maximize the overlap) over all pairs of modifiers and their corresponding entity-nouns \((m,n)\), \[\mathcal{L}_{pos}(A,S)=\sum_{i=1}^{k}\sum_{(m,n)\in P(S_{i})}dist(A_{m},A_{n}). \tag{1}\] We also construct a loss that compares pairs of modifiers and entity-nouns with the remaining words in the prompt, which are grammatically unrelated to these pairs. In other words, this loss is defined between words within the (modifiers, entity-nouns) set and words outside of it. Formally, let \(U(S_{i})\) represent the set of unmatched words obtained by excluding the words in \(S_{i}\) from the full set of words, and let \(A_{u}\) be the corresponding attention map for a given unrelated word \(u\). The following loss encourages moving apart grammatically-unrelated pairs of words: \[\mathcal{L}_{neg}=-\sum_{i=1}^{k}\frac{1}{|U(S_{i})|}\sum_{(m,n)\in P(S_{i})}\sum_{u\in U(S_{i})}\tfrac{1}{2}\bigg{(}dist(A_{m},A_{u})+dist(A_{u},A_{n})\bigg{)}. \tag{2}\] Our final loss combines the two loss terms: \[\mathcal{L}=\mathcal{L}_{pos}+\mathcal{L}_{neg}. \tag{3}\]
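The sketch below spells out Eqs. (1)-(3) in PyTorch-style pseudocode, using the symmetric KL distance defined in the next paragraph. It is an illustration under our own naming conventions, not the released implementation; the attention maps are assumed to already be normalized per-token distributions over patches.

```python
import torch

def sym_kl(a, b, eps=1e-8):
    """Symmetric KL divergence between two normalized attention maps (1-D tensors)."""
    a, b = a + eps, b + eps  # eps only for numerical stability
    return 0.5 * (a * (a / b).log()).sum() + 0.5 * (b * (b / a).log()).sum()

def syngen_loss(A, pairs, unrelated):
    """A: (N, P) per-token attention maps, each row sums to 1.
    pairs[i]: list of (modifier, noun) token-index pairs in the set S_i.
    unrelated[i]: token indices outside S_i (the set U(S_i))."""
    loss_pos = A.new_zeros(())
    loss_neg = A.new_zeros(())
    for pair_set, unrel in zip(pairs, unrelated):
        for (m, n) in pair_set:
            loss_pos = loss_pos + sym_kl(A[m], A[n])                    # Eq. (1)
            for u in unrel:
                loss_neg = loss_neg - 0.5 * (sym_kl(A[m], A[u])
                                             + sym_kl(A[u], A[n])) / len(unrel)  # Eq. (2)
    return loss_pos + loss_neg                                           # Eq. (3)
```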
For a measure of distance between attention maps we use a Symmetric Kullback-Leibler divergence \(dist(A_{i},A_{j})=\tfrac{1}{2}D_{KL}(A_{i}||A_{j})+\tfrac{1}{2}D_{KL}(A_{j}||A_{i})\), where \(A_{i}\), \(A_{j}\) are attention maps normalized to a sum of 1, \(i\) and \(j\) are generic indices, and \(D_{KL}(A_{i}||A_{j})=\sum_{pixels}A_{i}\log(A_{i}/A_{j})\).3 Figure 3: Evolution of cross-attention maps and latent representation along denoising steps, for the prompt “a red crown and a golden strawberry”. At first, the attention maps of all modifiers and entity-nouns are intertwined, regardless of the expected binding. During denoising, the attention maps gradually become separated, adhering to the syntactic bindings. The vertical line indicates that after 25 steps the intervention stops, but the attention maps remain separated. ### The workflow We use the above loss to intervene in the first 25 out of 50 diffusion steps. Empirically, a lower number of steps fails to address the issue of improper binding, while a larger number of steps leads to the generation of blurred images, as detailed in Appendix B. In each of these first 25 steps, a pretrained denoiser (U-Net) is first used to denoise the latent variable \(z_{t}\). Then, we obtain the cross-attention maps as in [15]. Next, we use the loss \(\mathcal{L}\) to update the latent representation with a gradient step \(z^{\prime}_{t}=z_{t}-\alpha\cdot\nabla_{z_{t}}\mathcal{L}\). Finally, the U-Net architecture denoises the updated latent variable \(z^{\prime}_{t}\) for the next timestep. ## 3 Experiments ### Compared baseline methods We compare SynGen with three baseline methods: (1) Stable Diffusion 1.4 (SD) [1]; (2) Structured Diffusion (StructDiff) [13], which extracts noun-phrases from the prompt and embeds them separately, to improve the mapping of the semantics in the cross-attention maps; and (3) Attend-and-Excite (A&E) [11], a method that, given a predetermined set of tokens, updates the latent for a certain number of timesteps to eventually incorporate these tokens in the generated image. To automate token selection in Attend-and-Excite, we follow the recommendation of the authors to select the nouns using a part-of-speech tagger. ### Datasets We evaluate our approach using two existing benchmark datasets, and one new dataset that we designed to challenge methods in this area. (1) ABC-6K [13]. This benchmark consists of 3.2K natural compositional prompts from MSCOCO [16], which were manually written by humans, using natural language and contain at least two color words modifying different noun-entities. In addition, the dataset contains 3.2K counterparts, where the positions of modifiers in the original prompts are swapped (e.g., "a _white_ bench in front of a _green_ bush" and "a _green_ bench in front of a _white_ bush"). We randomly sample 600 prompts. (2) Data from Attend-and-Excite [11]. This dataset was initially constructed to evaluate the Attend-and-Excite method, which was successful compared to previous methods in tackling the improper binding problem. The prompts in the dataset can be summarized into three categories: (1) "a {color} {in-animate object} and a {color} {in-animate object}"; (2) "a {color} {in-animate object} and an {animal}"; (3) "an {animal} and an {animal}". Following the split in Attend-and-Excite, we sample 33 prompts from type (1) and 144 prompts from type (2), but exclude type (3), as it does not contain modifiers. 
This is a very simple dataset, which we use to facilitate direct comparison with previous work. (3) Diverse Visual Modifier Prompts (DVMP).The datasets proposed by [11; 13] are basic in the number and types of modifiers, as well as the number of entity-nouns per prompt. To challenge our model, we design a dataset consisting of coordination sentences, in similar fashion to the dataset from Attend-and-Excite, but with strong emphasis on the number and types of modifiers per prompt. Specifically, we aim to compare the models with prompts that contain numerous and uncommon modifiers, creating sentences that would not usually be found in natural language or training data, such as "a _pink spotted_ panda". Key aspects of the DVMP dataset include: **Expanded modifiers:** We have extended the number of modifiers referring to an entity-noun from one to up to three. For instance, "a _blue furry spotted_ bird". We also added types of modifiers besides colors, including material patterns ("a _metal_ chair"), design patterns ("a _checkered_ shoe"), and even nouns modifying other noun-entities ("a _baby_ zebra"). **Visually verifiable and semantically coherent:** The modifiers selected for DVMP are visually verifiable, with a deliberate avoidance of nuanced modifiers. For instance, "big" is a relative modifier dependent on its spatial context, and emotional states, such as in the prompt "an _excited_ dog", are largely excluded due to their subjective visual interpretation. Simultaneously, DVMP maintains semantic coherence by appropriately matching modifiers to noun-entities, thereby preventing the creation of nonsensical prompts like "a sliced bowl" or "a curved zebra". In total, we have generated 600 prompts through random sampling. For a more comprehensive description of the dataset's creation, see Appendix F. ### Human Evaluation We evaluate image quality using the Amazon Mechanical Turk platform. Raters were provided with a multiple-choice task, consisting of a single text prompt and four images, each generated by our baselines and SynGen. Raters could also indicate that there is no clear winner, by selecting "equally good" or "equally bad". We provided each prompt to three raters, and report the majority decision. When no majority decision was reached, we treat that prompt as a single "no winner" response. We evaluate generated images in two main aspects: (1) _concept separation_ (sometimes known as editability [17]) and (2) visual appeal. Concept separation refers to the ability of the model to distinctly represent different concepts or objects in the generated image. The effectiveness of concept separation is assessed by asking raters, "Which image best matches the given description?". To asses visual quality, raters were asked "Which image is more visually appealing?". To maintain fairness and reduce biases, the order of images was randomized in each task. Full rater instructions and further details are provided in Appendix G.2 of the supplemental materials. ## 4 Results ### Quantitative Results Table 1, provides results of the human evaluation experiment. SynGen is consistently ranked first in all three datasets, and by a large margin, sometimes double the approval rate of the second ranked method, A&E. These results are observed for concept separation, which measures directly the semantic leak, and for visual appeal. The high number of "no winner" cases reflects the large difficulty of some of the prompts, for which no method provides good enough generated images. 
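A minimal sketch of the vote-aggregation rule described above (three raters per prompt; the labels and function name are ours):

```python
from collections import Counter

def aggregate_votes(votes):
    """votes: the three rater choices for one prompt, e.g. ["SynGen", "A&E", "SynGen"].
    Returns the majority choice, or "no winner" when no option gets at least two votes."""
    choice, count = Counter(votes).most_common(1)[0]
    return choice if count >= 2 else "no winner"

# aggregate_votes(["SynGen", "SynGen", "equally bad"]) -> "SynGen"
# aggregate_votes(["SynGen", "A&E", "equally bad"])    -> "no winner"
```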
Population results, before majority aggregation, are given in Appendix G.2 of the supplemental material. Comparisons with StableDiffusion are given in Fig. 18 of the supplemental. \begin{table} \begin{tabular}{l l c c} \hline \hline & & \multicolumn{2}{c}{Concept} & \multicolumn{1}{c}{Visual} \\ Dataset & Model & & \\ \hline \multirow{4}{*}{ABC-6K [13]} & SynGen (ours) & **28.0\%** & **18.3\%** \\ & A\&E [11] & \(11.1\%\) & \(10.0\%\) \\ & Structure Diffusion [13] & \(5.8\%\) & \(6.3\%\) \\ & Stable Diffusion [1] & \(4.8\%\) & \(7.8\%\) \\ & No winner & \(50.1\%\) & \(57.5\%\) \\ \hline \multirow{4}{*}{Attend \& Excite dataset [11]} & SynGen (ours) & **38.4\%** & **37.8\%** \\ & A\&E [11] & \(18.0\%\) & \(18.64\%\) \\ & Structure Diffusion [13] & \(4.5\%\) & \(4.5\%\) \\ & Stable Diffusion [1] & \(1.69\%\) & \(2.3\%\) \\ & No winner & \(37.3\%\) & \(36.7\%\) \\ \hline \multirow{4}{*}{DVMP (challenge set)} & SynGen (ours) & **24.8\%** & **16.0\%** \\ & A\&E [11] & \(13.3\%\) & \(12.1\%\) \\ \cline{1-1} & Structure Diffusion[13] & \(4.3\%\) & \(7.8\%\) \\ \cline{1-1} & Stable Diffusion [1] & \(3.8\%\) & \(7.1\%\) \\ \cline{1-1} & No winner & \(53.6\%\) & \(56.8\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: Human evaluation of all methods on the three datasets. The table reports scores for concept separation (how well the image matches the prompt) and visual appeal. Values are the fraction of majority vote of three raters, normalized to sum to 100. ### Qualitative Analysis Figures 4-6 provide qualitative examples from the three datasets, comparing SynGen with the two strongest baselines. The qualitative examples illustrate several failure modes of our baselines. First, _semantic leak in prompt_, occurs when a modifier of an entity-noun "leaks" onto a different entity-noun in the prompt, as shown in Fig. 6, for the prompt "a pink clock and a brown chair", in columns 3 and 4. In this case, all baselines incorrectly apply pink hues to the chair, despite the prompt explicitly defining it as brown. A more nuanced variant of this issue is _semantic leak out of prompt_, when a modifier is assigned to an entity-noun that is not mentioned in the prompt. For instance, the "spiky" attribute in "a spiky bowl and a green cat" leaks to a plant, which is not referred to in the prompt, or the green coloration in the background of the images generated by the baselines, as seen in Fig. 5, columns 5 and 6. Figure 4: Qualitative examples for ABC-6K prompts. For every prompt, the same three seeds are used for all methods. Figure 5: Qualitative comparison for prompts from the DVMP dataset. For every prompt, the same three seeds are used for all methods. _Attribute neglect_ occurs when a modifier from the prompt is absent from the generated image. As exhibited in Fig. 6, for "a frog and a brown apple", both baselines do not include a brown color at all. All methods, barring SynGen, grapple with _entity entanglement_[18; 19; 20], where some objects tend to strongly associate with their most common attribute (e.g., tomatoes are typically red). This is evident in columns 3 and 4 of Fig. 4, where other methods fail to correctly visually associate the blue attribute with the dog in "a blue and white dog sleeps in front of a black door". Instead, they resort to typical attributes of the objects, generating a black and white dog. _Entity casting_ is another failure type where a modifier is treated as a standalone entity, a phenomenon commonly observed with noun modifiers. 
For example, the prompt "a wooden crown and a furry _baby_ rabbit", (column 1 in Fig. 5) has all methods, apart from ours, generate human infants. Presumably, this occurs because "baby" is interpreted as a noun rather than as a modifier, leading other methods to treat it as a separate object due to the lack of syntactic context. Conversely, SynGen correctly interprets "baby" as a modifier and accurately binds it to the rabbit. Similarly, in the prompt "white fire hydrant sitting in a field next to a red building", "fire" is wrongly interpreted as an entity-noun, which leads to the unwarranted inclusion of a fire in the scene. ### Ablation study The importance of both positive and negative losses.We evaluated the relative importance of the two terms in our loss Eq. (3). The positive term \(\mathcal{L}_{pos}\), which encourages alignment of the attention map of an object and its modifier, and the negative loss term, \(\mathcal{L}_{neg}\), which discourages alignment with other modifiers and objects. We sampled 100 prompts from the DVMP dataset and generated images with and without each of the two loss terms. See example in Fig. 7. Then, a human rater was asked to select the best option out of four variants. Table 2 shows that raters preferred the variant that combined both the positive and the negative terms. More examples are given in the supplemental Appendix B. ## 5 Related Work **Semantic leakage**. [2] pointed out cases of semantic leakage in diffusion models, where properties of one entity-noun influence the depiction of another. [21] attributed this to a lack of understanding of syntax, specifically noting failures when processing texts requiring subtle syntactic binding comprehension. [6] also identified semantic leakage issues in DALL-E, where the properties of one entity-noun influence the way by which _other_ entity nouns are depicted. In this work, we pinpoint the leakage as a consequence of improper syntactic and visual binding mapping. Figure 6: Qualitative comparison for prompts from the Attend-and-Excite dataset. For every prompt, the same three seeds are used for all methods. **Attention-based interventions.**[15] demonstrated that the cross-attention mechanism determines the spatial layout of entities in generated images. This result suggested that cross-attention is causally involved in the aforementioned issues. The Attend-and-Excite method [11] addresses the problem of entity omission, where certain entities mentioned in the prompt do not appear in the generated image. They propose a loss function that encourages each noun token in the image to significantly attend to a corresponding image patch, thereby preventing its omission. Our approach is similar to theirs in that it intervenes in the latent generation process through attention guidance. Syntax-based generation was also explored in the Structured Diffusion method [13]. It aims to verify that the generated image represents all entities mentioned in the prompt and prevents semantic leakage of attributes. This is achieved by parsing the prompt, extracting phrases corresponding to nouns and modifiers, and encoding them separately. They also intervene in the attention patterns, ensuring that each individual phrase influences the attention patterns. We demonstrate that it is sufficient, and empirically better, to implicitly influence the attention patterns through our loss which we dynamically optimize. In contrast, their intervention remains fixed. 
Concurrent to this work, [22] proposed an alternative approach that combines syntactic control and attention-based optimization. They extract entity nouns from prompts and train a layout predictor to identify the corresponding pixels for each noun. Subsequently, they optimize the latents by encouraging the pixels corresponding to the objects to attend to CLIP representations of simple phrases containing those objects. While similar in spirit, this paper demonstrates intervention in the generation process solely based on syntax, without explicitly learning the correspondence between image entities and tokens. ## 6 Limitations Like previous methods, the performance of SynGen degrades with the number of attributes to be depicted (see supplemental Fig. 12). However, its decline is remarkably less pronounced. This decay in performance, can be attributed to two primary factors: (1) an image begins to lose its visual appeal when the negative loss term becomes excessively large; (2) an overly cluttered image poses challenges in crafting a cohesive "narrative" for all the concepts. Moreover, the effectiveness of our method is intrinsically tied to the efficiency of the parser. When the parser falls short in extracting the stipulated syntactic relations, our method essentially operates akin to SD. ## 7 Conclusions In this work, we target the improper binding problem, a common failure mode of text-conditioned diffusion models, where objects and their attributes incorrectly correspond to the entity-nouns and their modifiers in the prompt. To address it, we propose SynGen, an inference-time intervention method, with a loss function that encourages syntax-related modifiers and entity-nouns to have overlapping cross-attention maps, and discourages an overlap from cross-attention maps of other words in the prompt. We challenge our method with three datasets, including DVMP - a new \begin{table} \begin{tabular}{l c c} \hline \hline Loss & Concept & Visual \\ & Separation & Appeal \\ \hline Both losses \(\mathcal{L}_{pos}+\mathcal{L}_{neg}\) & **27\%** & 22\% \\ Positive only \(\mathcal{L}_{pos}\) & \(0\%\) & 11\% \\ Negative only \(\mathcal{L}_{neg}\) & \(3\%\) & **35\%** \\ Stable Diffusion & \(4\%\) & 28\% \\ No winner & \(66\%\) & 4\% \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation of loss components. Human evaluation Figure 7: **Ablation of loss components. Removing \(\mathcal{L}_{neg}\) results in semantic leakage (the bird is white) and entity neglect (there is no crown). Removing \(\mathcal{L}_{pos}\) also leads to semantic leakage (generating a bird and background with white parts), as well as failed attribution binding (generating a crown that is not white).** dataset that is specially-designed to draw out hard cases of improper-binding problem. Our method demonstrates improvement of over 100% across all three datasets over the previous state-of-the-art. Finally, our work highlights the importance of linguistic structure during denoising for attaining faithful text-to-image generation, suggesting promising avenues for future research. ## Acknowledgements We would like to thank Avi Caciularu and Asaf Yehudai for their valuable feedback. This study was funded by a grant to GC from the Israel Science Foundation (ISF 737/2018) and an equipment grant to GC and Bar-Ilan University from the Israel Science Foundation (ISF 2332/18). 
This project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). Shauli Ravfogel is grateful to be supported by the Bloomberg Data Science PhD Fellowship. ## References * [1] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022. * [2] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022. * [3] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022. * [4] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. cdiffi: Text-to-image diffusion models with an ensemble of expert denoisers. _arXiv preprint arXiv:2211.01324_, 2022. * [5] Colin Conwell and Tomer Ullman. Testing relational understanding in text-guided image generation. _arXiv preprint arXiv:2208.00005_, 2022. * [6] Royi Rassin, Shauli Ravfogel, and Yoav Goldberg. DALLE-2 is seeing double: Flaws in word-to-concept mapping in Text2Image models. In _Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP_, pages 335-345, Abu Dhabi, United Arab Emirates (Hybrid), December 2022. Association for Computational Linguistics. * [7] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023. * [8] Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In _ACM SIGGRAPH 2023 Conference Proceedings_, SIGGRAPH '23, 2023. * [9] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. * [10] Mert Yukeksgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, and James Zou. When and why vision-language models behave like bags-of-words, and what to do about it? In _The Eleventh International Conference on Learning Representations_, 2023. * [11] Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. _arXiv preprint arXiv:2301.13826_, 2023. * [12] Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. Compositional visual generation with composable diffusion models. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVII_, pages 423-439. Springer, 2022. * [13] Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang. Training-free structured diffusion guidance for compositional text-to-image synthesis. _arXiv preprint arXiv:2212.05032_, 2022. * [14] Matthew Honnibal and Ines Montani. 
spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017. * [15] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aherman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022. * [16] Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollar. Microsoft coco: Common objects in context, 2015. * [17] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022. * [18] Yuval Atzmon, Felix Kreuk, Uri Shalit, and Gal Chechik. A causal view of compositional zero-shot recognition. _Advances in Neural Information Processing Systems_, 33:1462-1473, 2020. * [19] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. _CoRR_, abs/1612.06890, 2016. * [20] Ishan Misra, Abhinav Gupta, and Martial Hebert. From red wine to red tomato: Composition with context. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 1160-1169, 2017. * [21] Evelina Leivada, Elliot Murphy, and Gary Marcus. Dall-e 2 fails to reliably capture common syntactic processes. _arXiv preprint arXiv:2210.12889_, 2022. * [22] Qiucheng Wu, Yujian Liu, Handong Zhao, Trung Bui, Zhe Lin, Yang Zhang, and Shiyu Chang. Harnessing the spatial-temporal attention of diffusion models for high-fidelity text-to-image synthesis. _arXiv preprint arXiv:2304.03869_, 2023. Supplementary Material ## Appendix A Implementation Details Computing resources.Experiments were run on an NVIDIA DGX Station with four v100-SXM2-32GB GPUs. The overall duration of all experiments in the paper was about two weeks. Hyperparameters.The hyperparameters we used consist of 50 diffusion steps, a guidance scale of 7.5, a scale-factor of 20, and 25 latent update steps. The choices of scale factor and latent update steps are described in Appendix B.2 and Appendix B.3 respectively. Parser.Throughout this project, we use the space parser with the out-of-the-box en_core_web_trf model. Attending word pieces.When a relatively uncommon word is encountered by the tokenizer of the text encoder, it is split to sub-words (i.e., word pieces). In the context of our loss function, when an entity (or modifier) is split into word pieces, we compute our distance function (the Symmetric-KL) for each word piece. Then, only the word piece that maximizes the distance is added to the loss. Cross-attention maps.We describe more formally the cross-attention maps on which we intervene. Let \(N\) be the number of tokens in the prompt, and let \(D^{2}\) be the dimensionality of the latent feature map in an intermediate denoising step. The denoising network defines a cross-attention map \(A^{\textit{patches}\to\textit{tokens}}\in\mathbb{R}^{D^{2}\times N}\) between each of \(D^{2}\) patches in the latent feature map and each token. Intuitively, the attention maps designates which tokens are relevant for generating each patch. 
Our goal is to derive an attention distribution \(A^{\textit{tokens}\to\textit{patches}}\in R^{N\times D^{2}}\) such that its \(i\)-th row \(A^{\textit{tokens}\to\textit{patches}}_{i}\) contains the attention distribution of token \(i\) over patches. For this goal, we define a score matrix \(S\) to be the transpose of \(A^{\textit{patches}\to\textit{tokens}}\), i.e., a matrix whose \(i^{th}\) row contains the attention scores from each patch to token \(i\). Since \(S\) is not normalized, we divide each row by its sum to get a distribution over patches. Unless stated otherwise, across the paper, we refer to \(A^{\textit{tokens}\to\textit{patches}}\in R^{N\times D^{2}}\) when mentioning the "cross-attention maps" \(A\), with its \(i^{th}\) row \(A_{i}\) corresponding to the attention map of the \(i^{th}\) token. ## Appendix B Additional Ablation Experiments ### Further Investigation of the Positive and Negative Loss Terms In Section 4.3, we investigate the importance of the positive and negative loss function terms using a human rater. Here, we accompany the rating with a qualitative analysis, to examine the effect of each term. To this end, we generate images for 15 randomly selected prompts, five from each dataset. Fig. 8 depicts a sample of the generated prompts. We find that proper binding necessitates both the positive and negative terms: excluding the negative term from the loss function results in two noteworthy observations. First, the number of missing objects increases, as evidenced by the missing crown, cat, metal chair, and tomato, in columns 1, 2, 4, and 5 in Fig. 8. One consequence of missing objects is the apparent improper binding, indicated by the red backpack and black shirt in columns 1 and 3. On the other hand, excluding the positive term results in fuzzier separation between objects. For instance, the cat is not completely formed, and is "merged" with the pillow; and while it appears that there is some green residue on the dog, it is not colored green. Moreover, the grass is green, which indicates a semantic leakage. Putting these insights together, we observe that to some extent, the effects of the two loss terms are complementary. In addition to the increase of objects and proper binding, the images are more coherent (fewer cases of objects mixed into each other, such as the cat in the only-negative loss or the elephant in the only-positive loss). ### Number of Timesteps for Intervention Recall that our method intervenes in the latent denoising generation. In this appendix, we study the effect of the hyperparameter determining the number of steps in which we intervene. To identify an ideal number of timesteps to intervene in, we experiment with 100 randomly selected prompts from the DVMP dataset, a fixed random seed, and a number of update steps from 5 to 50, in increments of 5. Examples of this experiment are shown in Fig. 9. We observe that when intervening in a small number of timesteps, our method fails to adequately mitigate semantic leakage, or the images are not completely formed. For instance, the apple in column 1 in the 15-steps example is cartoon-ish, while the dog is not. Conversely, intervening for the full 50 timesteps results in an increased rate of blurred images (potentially due to the significant modification of the latent, which shifts it away from the learned distribution). We conclude that the optimal number of timesteps for intervention is 25, as this allows for effective mitigation of improper binding, while still generating visually appealing images. 
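To make the ablated quantity concrete, the following is a schematic of the inference-time intervention from Section 2.3, with the number of intervention steps and the scale factor exposed as the hyperparameters studied in this appendix and the next. The names `unet`, `scheduler`, `get_cross_attention_maps` and `syngen_loss` stand in for the corresponding pieces of a Stable Diffusion pipeline; this is an illustrative sketch, not the released code, and classifier-free guidance is omitted for brevity.

```python
import torch

def generate(unet, scheduler, latents, text_emb, pairs, unrelated,
             num_steps=50, intervention_steps=25, scale_factor=20.0):
    for i, t in enumerate(scheduler.timesteps[:num_steps]):
        if i < intervention_steps:
            latents = latents.detach().requires_grad_(True)
            _ = unet(latents, t, encoder_hidden_states=text_emb)   # forward pass to populate attention
            A = get_cross_attention_maps()                          # (N, P) per-token maps
            loss = syngen_loss(A, pairs, unrelated)
            grad = torch.autograd.grad(loss, latents)[0]
            latents = (latents - scale_factor * grad).detach()      # z'_t = z_t - alpha * grad
        with torch.no_grad():
            noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
            latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```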
### Setting the Scale Factor The scale factor affects the update step size. Recall the update step stated in Section 2\(z^{\prime}t=z_{t}-\alpha\cdot\nabla z_{t}\mathcal{L}\). Here, \(\alpha\) is the scale-factor. To determine a good selection for the scale-factor, we generate 100 randomly sampled prompts from the DVMP dataset, with a scale-factor value from 1 to 40, in increments of 10. As can be seen in Fig. 10, we observe that merely updating the latent using a scale-factor of 1 yields relatively good results in terms of improper binding, which confirms the utility of our loss function. However, such a low scale-factor also consistently leads to missing objects. Interestingly, for greater scale-factor values, the generations become alike in their overall look, but are nonetheless very different. As an example, for both values, 10 and 30, the sliced banana is missing from the image in column 2, but the 30-value does result in a spotted teal skateboard. In column 3, values below 20 lead to images that contain two pandas (none of which are spotted), which indicates the proper binding process, and that the latent was not updated enough. On the other hand, a value greater than 20 leads to an image of a striped rabbit, instead of a spotted rabbit. Figure 8: We examine the effect of employing only one of the two losses instead of both. All images were generated using the same random seed. One interesting conclusion from this experiment is that the greater the scale-factor, the stronger the concept separation. However, this is only true to a point. For a great enough value, generations become too blurred or simply lose their visual appeal. ## Appendix C Additional Quantitative Analyses To study the efficacy of SynGen relative to the baselines in improper binding setting, we analyze the results under three perspectives. (1) as a function of repeating entities and modifiers; (2) as a function of the number of modifiers; and (3) degree of entanglement. Samples of generations are shown in Fig. 14. Number of repeating modifiers and entities.In this analysis, we examine the performance of all methods for prompts containing recurring modifiers (e.g., "a _sliced_ strawberry and a _sliced_ tomato) or entities (e.g., "a sliced _tomato_ and a skewered _tomato_"). Aggregated results are illustrated in Fig. 11. Our observations reveal a decrease in performance across all methods when modifiers are repeated. However, the relative success between SynGen and the baselines in performance remains the same. Moreover, there is no substantial decline in results when entities are repeated. Number of modifiers in prompt.We hypothesize that since our method is specifically designed to tackle improper binding, it handles prompts containing many modifiers with more success. This is confirmed in Fig. 12, which shows the gap between SynGen and the baselines widens as the number of modifiers in a prompt increases. Entangled entities.As we describe in Section 4.2, entangled entities are strongly associated with their most common attribute. For instance, a tomato is typically red, and thus, it is common for images to depict _red_ tomatoes. Figure 9: We experiment with varying number of diffusion steps and examine the effect of changing the number of diffusion steps for which we intervene with the cross attention maps. All images were generated using the same random seed. 
We categorize the prompts into three groups: (1) entangled prompts, which contain entangled entities with a modifier that overrides a common modifier (e.g., a _purple_ strawberry); (2) common entangled prompts, which contain entangled entities with their common modifiers; and (3) neutral prompts, which do not contain entangled entities at all. Performance as a function of these groups is demonstrated in Fig. 13.

## Appendix D Additional Qualitative Results

### Comparison to Spatial-Temporal Diffusion

As described in Section 5, concurrent to this work, [22] developed a method to optimize the latents. While they primarily attend to spatial and temporal relations, they too report on improper binding, namely, attribute mismatch. Thus, we extend the tables from Section 4 to include Spatial-Temporal Diffusion; see Fig. 15, Fig. 16, and Fig. 17. Based on these 18 images, we observe that Spatial-Temporal Diffusion consistently misses at least one entity from the prompt. As an example, see Fig. 15: the images in columns 1 and 2 miss a crown (but include "wooden" objects), and columns 3 and 4 miss a lion and exhibit semantic leakage. Elsewhere, we note many cases of semantic leakage in and out of the prompt. For instance, in Fig. 17, in column 2 the clock is brown and the wall is pink, and in column 3 the chair is pink.

Figure 10: Qualitative comparison between scale-factor values for SynGen. For every prompt, the same seeds are applied. We anecdotally show that our chosen scale-factor value (we use the value 20) provides superior results.

Figure 11: The performance of SynGen and the baselines in concept separation on prompts containing (a) repeating modifiers; and (b) repeating entities in the DVMP dataset.

Figure 12: Concept separation as a function of the number of modifiers in a prompt in the DVMP dataset, introduced in Section 3.2. Only the top-competing method (Attend-and-Excite) is plotted for readability.

Figure 15: Extended qualitative comparison for prompts from the DVMP dataset. SynGen and Spatial-Temporal Diffusion [22].

### Stable Diffusion and Structured Diffusion Comparison

A comparison between Stable Diffusion and Structured Diffusion is depicted in Fig. 18. The findings from the study by [11] suggest that the images generated by Structured Diffusion are often similar to those generated by Stable Diffusion, with limited improvements in addressing semantic flaws and enhancing image quality. This is further supported by the comparable results presented in our findings in Table 1. Therefore, while we include all baselines in our evaluations, our qualitative analysis only showcases images produced by the slightly superior Structured Diffusion.

Figure 14: Samples from the analyses in Appendix C. (a) a case of a recurring entity (strawberry); (b) a recurring modifier (black) and entity (apple); (c) and (d) contain entangled entities (a blue bear and a purple strawberry); (e), (f), (g) are examples of prompts with more than two modifiers.

Figure 13: The performance of SynGen and the baselines in concept separation when grouping the prompts with respect to entangled modifiers in the DVMP dataset.

## Appendix E SynGen Failures

We observe three recurring types of failure that SynGen displays (Fig. 19). First, when there are many modifiers and entities in the prompt, despite the results in Fig. 12, we note that sometimes the negative loss component becomes exceedingly large and thus pushes the latent out of the distribution the decoder was trained on.
Consequently, images become blurred, or contain concepts which are successfully separated, but are incoherent. This is likely because our method over-fixates on incorporating all elements described in the prompt. Figure 16: Extended qualitative comparison for prompts from the ABC6K dataset. SynGen and Spatial-Temporal Diffusion [22]. Figure 17: Extended qualitative comparison for prompts from the Attend-and-Excite dataset. SynGen and Spatial-Temporal Diffusion [22]. Figure 18: Side-by-side generations of StableDiffusion and StructureDiffusion. Second, while SynGen typically successfully addresses the possible error cases described in Section 4.2, at times it can neglect generating all objects, unify separate entities, or neglect generating attributes. We conjecture that it is because the cross-attention maps of the modifier and its corresponding entity do not overlap enough. We note it usually occurs when there are many modifiers that refer to the same entity. Finally, as common with many diffusion models, we report a recurring issue with faithfulness to the number of units specified in the prompt, for a certain entity. For instance, upon receiving prompts containing "a strawberry", SynGen generates images with multiple strawberries, instead of just one. One explanation to this problem is that the representation of a certain entity begins "scattered", and is never quite formed into a single cluster. Interestingly, the opposite problem, where multiple units are "merged" into one, occurs far less in the generations of SynGen. Possibly, because of the inherent objective function of our loss, which "pushes away" foreign concepts from one another. ## Appendix F The Diverse Multiple Modifiers Prompts (DVMP) dataset In Section 3.2 we describe DVMP, a new dataset containing rich and challenging combinations, for the purpose of evaluating improper binding. In total, DVMP has 18 types of objects, 16 types of animals, and 4 types of fruit. There are four animal modifiers, 7 object modifiers, two fruit modifiers, and 13 colors. A comprehensive account of the entities and their possible modifiers is shown in Table 3. ## Appendix G Extended Evaluation ### Phrases-to-Image Similarity A common approach to automatically assess text-based image generation is by computing the cosine similarity between an image and prompt, using a vision-language model like CLIP [9]. However, the very challenge we tackle here is rooted in CLIP's failure in establishing correct mapping between syntactic bindings and visual bindings, functioning like a bag-of-words model [10]. As an example, suppose we give CLIP the prompt "a blue room with a yellow window". If we present CLIP with an image of a yellow room with a blue window, it may yield a similar score to an image that accurately depicts a blue room with a yellow window. In an attempt to address this flaw, we segment prompts to phrases containing entity-nouns and their corresponding modifiers (e.g., "a blue room" and "a yellow window"), and compute the similarity between these segmented phrases and the image. We then aggregate the result to a single score by computing the mean. With this approach, we expect CLIP to properly associate the modifiers (e.g., "blue" and "yellow") with the correct entity-noun (i.e., "room" and "window") as there is only one Figure 19: Frequent failure modes in SynGen. (a) depicts a case of blurred image, (b) incoherent image which maintains concept separation. 
Both are a result of excessive updates to the latent, resulting from a large negative loss term. In example (c), the zebra and lion are merged into a single entity and (d) omits the sleepy lion. We conjecture (c) and (d) are a result of too little updates. (e) and (f) exhibit the well-known issue of flawed mapping between the number of units an entity is mentioned in the prompt to the generated image. entity-noun in each segment. Unfortunately, this metric achieves relatively low agreement with the majority selection of human evaluation, only 43.5% of the time, where 25% is random selection. Despite the low agreement, we note the overall trend of selections of this automatic metric is very similar to the human majority selection. Table 4 shows the results of our automatic evaluation. ### Additional Details on Human Evaluation Experiments In the manual evaluation procedure detailed in Section 3.3 the evaluator is tasked with comparing various image generations and selecting the optimal image based on multiple criteria. The guidelines and examples given to the evaluators are presented in Fig. 20 and Fig. 21. Fig. 22 provides a screenshot of the rating interface. The full results of the human evaluation are given in Table 5 Rater Compensation.Raters were selected based on their performance history, requiring a minimum of 5,000 approved HITs with an approval rate exceeding 98%. They were required to pass a qualification exam with a perfect score before given access to the task. The hourly compensation was $10, ensuring fair renumeration for their contributions. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & DVMP (ours) & ABC-6K [13] & A\&E dataset [11] \\ \hline SynGen (ours) & **47.33\%** & **41.33\%** & **44.63\%** \\ A\&E [11] & 27.66\% & 24.33\% & 27.11\% \\ Structured Diff. [13] & 12.84\% & 17.84\% & 11.87\% \\ Stable Diff. [1] & 12.17\% & 16.50\% & 16.39\% \\ \hline \hline \end{tabular} \end{table} Table 4: Automatic evaluation of all methods on the three datasets. The table reports scores for concept separation (how well the image matches the prompt) and visual appeal. Values are the fraction of majority vote of three raters, normalized to sum to 100. \begin{table} \begin{tabular}{l c c} \hline \hline **Category** & **Entities** & **Modifiers** \\ \hline General & backpack, crown, suitcase, chair, & modern, spotted, wooden, metal, \\ & balloon, bow, car, bowl, bench, clock, & curved, spiky, checkered \\ & camera, umbrella, guitar, shoe, hat, & \\ & surfboard, skateboard, bicycle & \\ \hline Fruit & apple, tomato, banana, strawberry & sliced, skewered \\ \hline Animals & cat, dog, bird, bear, lion, horse, & furry, baby, spotted, sleepy \\ & elephant, monkey, frog, turtle, rabbit, & \\ & mouse, panda, zebra, gorilla, penguin & \\ \hline \hline \end{tabular} \end{table} Table 3: List of entities and their modifiers in the DMVP dataset. Colors are not restricted to categories.
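As a concrete illustration of the phrase-to-image similarity metric described in Appendix G.1 above, the following is a minimal sketch. It assumes the Hugging Face CLIP implementation and uses spaCy noun chunks as a stand-in for the phrase segmentation step; the exact segmentation we use may differ, so the helper should be treated as illustrative rather than our actual evaluation code.

```python
import torch
import spacy
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
nlp = spacy.load("en_core_web_sm")

def split_into_phrases(prompt: str):
    # Noun chunks keep modifiers attached to their entity-noun, e.g.
    # "a blue room with a yellow window" -> ["a blue room", "a yellow window"].
    return [chunk.text for chunk in nlp(prompt).noun_chunks]

def phrase_to_image_similarity(image: Image.Image, prompt: str) -> float:
    phrases = split_into_phrases(prompt)
    inputs = processor(text=phrases, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Cosine similarity between the image embedding and each phrase embedding,
    # aggregated by taking the mean over phrases.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    cosine = (txt @ img.T).squeeze(-1)
    return cosine.mean().item()
```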
2305.11876
Challenges and Trends in User Trust Discourse in AI
The Internet revolution in 1990, followed by the data-driven and information revolution, has transformed the world as we know it. Nowadays, what seemed, 10 to 20 years ago, to be a science-fiction idea (i.e., machines dominating the world) is seen as possible. This revolution also brought a need for new regulatory practices in which the discourse on user trust and artificial intelligence (AI) has a central role. This work aims to clarify some misconceptions about user trust in AI discourse and to fight the tendency to design vulnerable interactions that lead to further breaches of trust, both real and perceived. Findings illustrate the lack of clarity in understanding user trust and its effects on computer science, especially in measuring user trust characteristics. It argues for clarifying those notions to avoid possible trust gaps and misinterpretations in AI adoption and appropriation.
Sonia Sousa, Jose Cravino, Paulo Martins
2023-05-05T07:02:57Z
http://arxiv.org/abs/2305.11876v2
Challenges and Trends in User Trust Discourse in AI ## Abstract The Internet revolution in 1990, followed by the data-driven and information revolution, has transformed the world as we know it. Nowadays, what seam to be 10 to 20 years ago, a science fiction idea (i.e., machines dominating the world) is seen as possible. This revolution also brought a need for new regulatory practices where user trust and artificial Intelligence (AI) discourse has a central role. This work aims to clarify some misconceptions about user trust in AI discourse and fight the tendency to design vulnerable interactions that lead to further breaches of trust, both real and perceived. Findings illustrate the lack of clarity in understanding user trust and its effects on computer science, especially in measuring user trust characteristics. It argues for clarifying those notions to avoid possible trust gaps and misinterpretations in AI adoption and appropriation. ## Introduction Current digital transformation events forced new digital interaction patterns that did not exist 10 or 20 years ago, impacting how we play, work, and build communities. As a result, society, organizations, and individuals must rapidly adapt and adjust to these new digital norms and behaviours. In the short term, the division between online and physical activities diminished, increasing the capacity to act in a larger digital market and society. Consequently, these digital transformation events forced us to become more vulnerable to the actions of digital parties without adequately understanding or being able to assess their risk, competency, and intentions. Digital social media platforms like Facebook (2004), Twitter (2006), and Youtube (2005) or messaging services such as WhatsApp (2009) have promoted this new era of communication that resulted in continuous attempts to subvert their original purposes (i.e., malicious acts). Examples of this can be the generation of mistrust against vaccines, the creation of content supporting climate denial theories or disinformation campaigns, or the surge in data breaches [6, 59, 75, 77, 83]. This attempt of '_science and engineering of making intelligent machines_,' as [52] conceptualized Artificial intelligence (AI), resulted in the spread of software that uses computer technology to generate outputs such as content, predictions, recommendations, or decisions influencing the environment they interact. Moreover, adding to this notion of AI shared by the European Commission's Artificial Intelligence Act1) is the fact that nowadays, we can find AI systems that mimic, think and act like humans [64]. What highlights the potential of those AI mechanisms, not just as information revolution tools but as well as data threats, take the example reported in books like _'Weapons of math destruction: How big data increases inequality and threatens democracy'_, or _'The AI delusion'_. Therefore, what once was seen as science fiction has become a reality, emphasising people's fears and mistrust of the possibility of machines dominating the world. Those fears have led to the surge of new reports, articles, and principles that seek more trustworthy AI (TAI) visions that provide a Human-centered vision of users' trust and socio-ethical characteristics towards AI [23, 38, 88, 85, 86, 88]. It also increased the discourse on AI and the need to complement existing data protection regulatory mechanisms (e.g., GDPR - ISO/IEC 27001)2). 
It also highlights the need for seeking new'responsible' AI ethical solutions that are less technical and profit-oriented. Footnote 2: [https://gdpr-info.eu/](https://gdpr-info.eu/) This Human-centered vision of AI, however, is hard to understand, especially if, like [14], we try to make accountable the service providers, in fact, to the detriment of the default, mainstream attention is given to it for AI providers, developers, and designers, it is unclear how to ensure that the AI system designed by humans can be reliable, safe, and trustworthy [68]. More, AI's complexity makes it challenging to guarantee that AI is not prone to incorrect decisions or malevolent surveillance practices. Like the GDPR, as AI's popularity increase, it also increases its potential to create opaque practices and harmful effects and AI's unpredictability, making it hard to be audited, externally verified, or question (i.e., black box) [89]. Additionally, AI's unpredictability makes it difficult to avoid unforeseen digital transformations, harmful practices, and risks. It also makes it hard to predict behaviour and identify errors that can lead to biased decisions, unforeseen risks, and fears [7, 26, 38, 69, 47]. In conclusion, the increase in AI's popularity also increased its complexity, the number of decentralized and distributed systems solutions, increased as well the AI's opacity and unpredictability. When mixed with poor design practices, these AI characteristics can produce vulnerable interactions that lead to further breaches of trust (both real and perceived). With this work, we aim to share our vision regarding the challenges and trends in user trust discourse in AI popularity from a Human-Computer Interaction (HCI) perspective. Results presented are supported by the author's work in mapping the trust implications in HCI during the past decade and situated in the context of three recent systematic literature reviews performed on trust and technology [76, 3, 9]. Hoping that this clarifies the nature of user trust in recent AI discourse (RQ) and also avoids designing vulnerable AI artefacts that build on trust without understanding its influence in the uptake and appropriation of those AI complex systems. This work's main contribution is to link the previous trust in technology discourse with recent AI popularity and user trust trends. Then, it illustrates the importance of providing an HCI perspective of user trust discourse. Finally, establish a link between past trust in technology practices, current thoughts, and research gaps. ### AI popularity and the discourse on users' trust The recent waves of technology innovations are marked by AI popularity, the social network revolution, distributed data, automated decision-making, and the ability to trace and persuade behaviours [13, 36, 31, 46, 47]. These AI information-driven revolutions recently resulted in the spread of AI complex and distributed software solutions that generate automated and unpredictable outputs that cannot guarantee that they are not prone to provide incorrect content, predictions, or recommendations or mislead people into incorrect decisions that can have potentially harmful consequences in environments they interact, like malevolent surveillance practices and disinformation practices [8, 17, 19, 47, 50, 60, 91]. This technological revolution wave also resulted, in an increased discourse toward trust in AI, seeking solutions to regulate, diminish people's fears, and guarantee a user trust approach to the topic. 
Take the example of the European Commission draft EU AI act3, the Organisation for Economic Co-operation and Development (OECD), the Business Machines Corporation (IBM) and their efforts to clarify the Trustworthy AI (TAI) principles and practices [58, 20, 85]. Footnote 3: [https://artificialintelligenceact.com/](https://artificialintelligenceact.com/) This increase and new TAI discourse challenge HCI practitioners, a need to establish new trust boundaries (e.g., regulations, socio-ethical practices, etc.) to ensure Humans' ability to monitor or control their actions [51, 69]. However, like AI, addressing trust in technology can be a complex subject for non-experts, as it acknowledges the deterministic models (that aggregate system technical characteristics) and the human-social characteristics that envision trust through a set of indirect parameters. This has raised another challenge to AI popularity, seeking solutions to trigger users' trust in AI [7, 26, 38, 47, 69]. However, with the increased popularity of AI software, it is unavoidable for society to be susceptible to its opaque practices, which can lead to further breaches of trust. Adding to this, the fast spread and dependency on AI prevent individuals from fully grasping the intricate complexity of those machines' capabilities, which can lead to potentially harmful consequences. Consequently, we believe that a new trend in user trust research will be revealed, similar to the rise of the Special Interest Group in 1982, to address the need for the design of Human-Computer Interactions (e.g., SIGCHI). This will lead to new international standards, expert groups, and international norms to tackle this problem. Take the example of the high-level expert group (AIHLEG) 4[20]. Or the Working group 3 -- trustworthiness referred to in the international standards for Artificial intelligence (ISO/IEC JTC 1/SC 42/WG 3) 5. Footnote 4: [https://digital-strategy.ee.europa.eu/en/policies/expert-group-ai](https://digital-strategy.ee.europa.eu/en/policies/expert-group-ai) Footnote 5: [https://www.iso.org/committee/6794475.html](https://www.iso.org/committee/6794475.html) The above initiatives attempt to defy the surveillance capitalism practices and fight the corporate approach to data as the new oil of this century, seeking short-term profits without considering the ethical consequences of their actions. However, their focus is on tackling the AI problem and not so much focus on understanding or mapping the influence or consequences of user trust in its adoption and appropriation practices. The recent EU's AI act6 shares a broader vision of this problem and represents an attempt to incorporate the notions of risk and trust within AI's characteristics like complexity, opacity, unpredictability, autonomy, and data-driven qualities. highlighting the need for finding new user trust solutions that foster feelings of safety, control, and trustworthiness in current AI solutions. Footnote 6: [https://artificialintelligenceact.com/](https://artificialintelligenceact.com/) In their regulatory scope (i.e. ethical guidelines for trustworthy AI), the EU encourage building public trust as an investment for ensuring AI innovation and respecting fundamental rights and European values. 
It also classifies trust as a need to be understood within four potential AI risks to health, safety, and fundamental rights from minimal or no risk, AI with specific transparency obligations (e.g., 'Impersonation' (bots)), High risk (e.g., recruitment, medical Devices), and an unacceptable risk (e.g., social scoring). Those demand different Trustworthy AI (TAI) frames to ensure public trust in AI computing speech or facial recognition techniques in applications like social chatbots, human-robot interaction, etc. For AI providers and non-expert in trust, however, it is challenging to fully understand the user trust influence in AI acceptance and appropriation, as current, trustworthy AI principles provide a set of requirements and obligations with an unclear trust notion, sometimes associated with notions of ethics and accountability. In sum, for now, the EU's AI act is a very recent regulatory framework, but those principles are likely to be extended to the world, similarly to the GDPR. If so, it becomes unavoidable to clarify the nature of user trust in recent AI discourse (RQ1). Including clarifying the link between past trust in technology practices, current thoughts, and research gaps. ### TAI conceptual challenges The above-described misconceptions and malevolent practices, followed by the EU AI Act draft and adopting a risk-based approach (unacceptable risk, high risk, & limited or minimal risk), raised the need for addressing the challenges and trends in user trust discourse in AI. As well as for providing further conceptual clarifications and strategies that demystify the discourse of trust and socio-ethical considerations, user characteristics when facing risk-based decisions, and design and technical implementations of trust [9, 39, 84]. Avoiding trust in AI solutions that are marrow framed from technical or single constructs like explainable AI (XAI), privacy, security, or computational trust. That eventually cannot guarantee that humans do not misinterpret the causality of complex systems with which they interact and lead to further breaches of trust and fears [17, 18]. Take, for instance, the following socio-ethical considerations design toolkits, guidelines, checklists, and frameworks whose goal is to bring more clarity to the problem. Like the IDEO toolkits to established by the entitled trust Catalyst Fund 7, and the IBM Trustworthy AI, a human-centered approach8, or [71], an agile framework for producing trustworthy AI was detailed in [45], and an article entitled Human-centered artificial intelligence: Reliable, safe & trustworthy was presented by Shneiderman (2020) [69]. Similarly the Assessment List for Trustworthy Artificial Intelligence (ALTAI)9, and the EU Ethics guidelines for trustworthy AI10. They neither offer clarity on trust notions nor explain how the proposed practices leverage user trust in AI. Footnote 7: [https://www.ideo.com/post/ai-ethics-collaborative-activities-for-designers](https://www.ideo.com/post/ai-ethics-collaborative-activities-for-designers) Footnote 8: [https://www.ibm.com/watson/trustworthy-ai](https://www.ibm.com/watson/trustworthy-ai) Footnote 9: [https://altai.insight-centre.org/](https://altai.insight-centre.org/) Footnote 10: [https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1](https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1) As [9] and [3] findings confirm, measuring trust remains challenging for researchers. 
Currently, there is more than one way to define and measure trust. According to [9], Out of the 23 empirical studies explored, only seven explicitly define trust. At the same time, eight conceptualize it, and the remaining nine provide neither. Therefore, user trust is still an underexplored topic. According to [3], there is still a lack of clarity on measuring trust in real-time, and few solutions provide stable and accurate ensemble models (results from 51 publications). Those that exist are narrow, subjective, and context-specific, leading to an oversupply of models lowering the adoption. Especially when using psychophysiological signals to measure trust in real time. This phenomenon happens despite computational trust research emerging in 2000 as a need to provide a technical-centred and automated way to tackle the trust factors in system design, i.e., to authenticate trustee's reputation and credibility [66]. Ultimately, and to avoid the past mistake of forward-push regulations, trust measures that ultimately are technically implemented without considering the tensions between the current state of creating new technical innovations, profit-oriented deployment, and its socio-technical implications across societies. Researchers need to look beyond the technical-centred vision of trust in computing and produce new user trust notions that help clarify the role of trust in these new technical profit-oriented AI innovations. As [63] (p.132) argues, to fully gauge AI potential benefits, trust needs to be established, both in the technology itself and in those who produce it, and to put such principles to work, we need robust implementation mechanisms. Yet, researchers need to ensure its proper application by providing frameworks that clarify its implementation and avoid misinterpretation and misguided implementations [26, 70]. Claiming once more for a shift from emphasis system's technical qualities toward human-centred qualities, similar to the move between usability and user experiences, i.e., from a focus on design features towards a focus on experiences of use [37, 44, 53]. ## Discussion The challenges mentioned above have shifted current literature discourse towards Human-centered design (HCD) as a strategy to address the socio-technical characteristics of design features and lessen misinterpretation gaps in regulatory practices. These needs are followed by a need to clarify the current trust lenses of analysis, as trust can be a key to lessening the risks to the development, deployment, and use of AI in the EU or when it will affect people in the EU. However, as seen in the literature, trust divergent notions can prevent non-experts from adequately addressing this need from an HCD perspective, which can lead to an increased risk of failure, increasing its misinterpretation gaps that can be more harmful than good [4]. Therefore, needs and challenges that were not recognized 10 to 20 years ago are now a reality, which can create gaps in IT education. Currently, few curricula contemplate this socio-technical view or Human-centered design vision nor the ethical focus on measuring the risks of their potential misuse. As a result, IT and AI specialists might not be equipped with the necessary skills to address the challenges mentioned above, let alone know how to deal with this topic's complexity and application challenges, i.e., the Trustworthy AI (TAI) risk-based approach promoted by the EU. 
In that regard, despite agreeing that HCI researchers can contribute to broadening this analysis and helping IT, specialists adopt more user-centred strategies to prevent building systems that can lead to risky, unsafe life or long-term threatening social ramifications like the examples presented above [69, 87]. They also need novel theories and validated strategies to address the socio-technical effects of trust in System complexity. Like in the past, the focus shifted from measuring the usability characteristics of a system (e.g., efficiency and effectiveness) towards or focusing on hedonic characteristics (e.g., emotion and satisfaction), and now to a risk-based approach where trust is part of users' experiences. However, this needs to be followed by clear notions of trust, psychologically validated analysis, and associated underlying theories in context [5, 37, 39, 63]. Trust, like satisfaction, is a human characteristic, not a machine characteristic. Past narrow views and assumptions on trust in technology might not fit in current Human-centered TAI applications [39]. A vision highlighted and shared in figure 1, based on a culmination of various works performed in the past ten years (e.g., literature reviews, participatory research, teaching, supervising, etc.) to understand the nature of user trust in Human-Computer Interaction (HCI) [25, 43, 48, 51, 78, 79, 81, 82]. ### The nature of trust research in HCI Trust in HCI, as illustrated in figure 1, is a social-technical quality that supports the interrelationship between a trustee (the entity being trusted) and the trustor (the person who trusts). Trusting is a will to be vulnerable. Note that vulnerability implies risk acceptance based on how well the 'trustor' can perceive the 'trustee' as trustworthy, as [51]. However, past views of trust tend to associate it with single constructs, like 'credibility, 'privacy,' and'security.' Associating user trust as a characteristic of disclosure of certain types of information (i.e., privacy) or preventing cybersecurity threats to ensure system trustworthiness [16]. Maybe a reason why trust notions and applications in computer science literature provide an oversupply of trust visions, solutions, and applications. Take the example of [76] findings (results from 69 publications) that reveal that trust notions and solutions can differ and depend on the application context. Mainly trust is addressed as a quality to influence technology adoption, data sharing credibility, and positively influencing user's intentions and behaviours. Take, for example, how trust is addressed within the privacy and security research topic. Herein, researchers see trust as avoiding potential misuse and access to personal data. It is sometimes mentioned as an information credibility attribute. Trust visions in e-Commerce, eHealth, or eGovernment are connected with 'risk,''reputation,' and'assurance' mechanisms to establish loyalty, minimize risk and uncertainty and support decision-making. Solutions range from data-driven trust models to observing the impact of trust in encouraging decision-making and encouraging technology adoption (e.g., commercial transactions, diagnostic tools, adoption of services, etc.). In social networks, trust emerged as a way to sustain interaction processes between members of actor networks in emerging scenarios and argue that trust contributes to promoting the regulation of interaction processes. 
Trust is also useful in creating sustainable computer support collaborative work to support interpersonal interactions online. Regarding its associated concepts, trust is associated with transparency, assurances, data credibility, technical and design feature, trustworthiness, users' predispositions to trust, explicability, etc. Mainly ways to reduce the uncertainty and risk of unpredictable and opaque systems, e.g., speech and facial recognition systems, crewless aerial vehicles (e.g., drones), IoT applications, or human-robot interactions (HRI). However, most trust studies present a narrow and simplified view, focusing on data-driven computational trust mechanisms to rate a system or a person as reliable. Presenting a view of trust as rational expectation, person, object, or good reliability or credibility when a first Figure 1: The nature of trust in HCI: Conceptualization encounter occurs and no trust has been established, i.e., establish trust between two strangers. Discarding more complex aspects o trusted relations through time, the Human-system relationship is established through various indirect attributes like risk perceptions, competency, and benevolence [28, 29, 30, 79, 80]. The above paragraph illustrates the pertinence of providing new user trust visions that can be adjusted to new digital AI regulations, behaviours, and innovations. It also illustrates the complexity of both subjects, AI and user trust. On the one hand, the new EU AI act sees public trust as a way to guarantee AI innovations, guaranteeing that it is not prone to high risks like leading users to incorrect decisions or malevolent surveillance practices. On another, the AI providers are not experts in trust in technology, which make it hard for them to acknowledge the deterministic models (that aggregate system technical characteristics) and the human-social characteristics that envision trust through a set of indirect parameters. In literature, for instance, trust is associated with narrow views like'reputation,' 'privacy,' and'security.' Literature on trust and computing also comes associated with computational trust model [62]. With the same regard to security and privacy measures and their role in fostering AI trustworthiness, recent malevolent use demonstrates that new visions need to be adjusted to prevent mistrust in technology. Just addressing trustworthy AI measures as a way of preventing intrusion, allowing the individual the right to preserve the confidentiality, integrity, and availability of information might not be enough within today's socio-technical complexity [34, 61]. Privacy refers to the individual's right to determine how, when, and to what extent they will disclose their data to another person or an organization [15]. As figure 1 illustrates, user trust considers Socio-ethical considerations, Technical artefact, Application context, and Trustee & trustor characteristics [9, 22, 33, 42, 65, 76]. A trustful relationship requires assessing the risks (i.e., gains and losses). Requires evaluates the tool's ability (e.g., competence, reliability, accuracy, etc.) to perform the desired outcomes; and assesses if an entity (i.e., company, organization, institution, etc.). Requires individuals exceptions that digital relationships follow expected social norms and ethical principles. For instance, trustworthiness is an attribute or quality of a system, a person, or an organizational structure. 
As the Oxford dictionary describes it, **Trustworthiness** is the quality or fact of being trustworthy (= able to be trusted)". Work like the Assessment List for Trustworthy Artificial Intelligence (ALTAI), or Trustworthy AI (TAI), is human-centred, or even Human-centered artificial intelligence: Reliable, safe & trustworthy do not address it or only address it from a shallow view. On the other hand, if **Trust** is to believe that someone is good and honest and will not harm you or that something is safe and reliable. **Trustworthy**, on the other hand, is the ability to be trusted, and trustworthiness is a judgment of trust. Trusting reflects the strength of a person's belief that someone else will help them to achieve an acceptable outcome [11, 35]. Trustworthiness and trustworthy are characteristics of trust, and in any complex construct, both (qualities and characteristics) are measured through indirect and direct interrelated factors TAI regulations are one example. Measuring the attribute or quality of a system (e.g., privacy or security) might not be enough to address it. Or, for instance, take the example of system explainability (XAI) or computational trust models called by some reputation mechanisms [1, 62]. As [17] claim, some technical-centred explanations might mislead individuals' understanding of the subject. As [18] work illustrates, in some cases, humans' limitations to understanding a complex subject might prevent them from misunderstanding their work. As [74] result revealed, the interrelations between trust and performance can be negative, i.e., the higher the trust, the lower the performance. Yet, some limited-risk applications do not prevent them from using and benefiting from these tools. I do not need to understand a car's or aeroplane's mechanics to trust and use it. Individuals already (successfully) interact with complex AI systems daily. But I should be aware of its potential threats to making knowledgeable decisions, especially when adopting an AI system leads to an unacceptable or high-risk approach as the EU act describes it. Thus, to successfully maintain trust in specific events, we should not look at it from a narrow technical perspective. User trust in AI (i.e. technical artefact) can also be influenced by users' cultural traits, past experiences, and applications context [39, 63]. Therefore, it is important to include both visions: **Trust as a personal trait**, understood as a perceived skill set or competencies of trustee characteristics (e.g., how teachers are perceived in a school system); **Trust as a social trait**, understood as the mutual "faithfulness" on which all social relationships ultimately depend. **Trust** reflects an attitude, or observable behaviour, i.e., the extent to which a trustee can be perceived in society. For instance, to want to do good, be honest - "beenvolence.' Follow privacy and security regulations. **Trust as reciprocal trait**, closely related with ethical and fair. For instance, the extent to which the trustee adheres to a set of principles that the trustor finds acceptable--for instance, an economic transaction. This led to another application challenge, trust measurements [9, 76]. Current trust Figure 2: The user trust influence in shaping the interaction context: conceptualization misconceptions lead to an oversupply of computational trust assessment models that can only be used in narrow AI applications. 
[32] recommendation trust model for P2P networks and [40] trust beyond security: an expanded trust model is an example of that. Some, however, measures of trust across gender stereotyping and self-esteem indicate that trust can be measured in a broader socio-technical perspective [57, 41]. The EU self-assessment mechanisms for Trustworthy Artificial Intelligence created by the AIHLEG expert group is another example [21] of broadening the view. The same regards the Human-Computer trust (HTC) psychometric scales proposed by [49], SHAPE Automation trust Index (SATI) [27]. We need new HCI mechanisms to measure potentially faulty TAI design practices, which can lead to risky, unsafe life or threatening social ramifications [69, 87]. If not, HCI researchers might continue looking for specific and narrow solutions that can fail when applied in broader contexts. An example is the latest pursuit of AI system explainability (XAI) or computational trust models might not be enough to foster trust in users. On the contrary, some technical-centred explanations might mislead individuals' understanding of the subject. Or, some computational trust models are so narrow in their application that they might successfully maintain the initial trust formation in e-commerce but are not valuable for e-health. Generally, looking at the above concepts, many researchers understand trust as a specific quality of the relationship between a trustee (the entity being trusted) and the trustor (the person who trusts). In other words, the trust dynamic contemplates a subtle decision based on the game's complexity that they find themselves playing, as [10] and [48] describe it. However above paragraph also represents the need for the trustor (i.e., human) to perceive the trustee as trustworthy. For instance, this system can have all the technical mechanisms to be secure, but if the trustor cannot see those who see these mechanisms in action, they might perceive that they are not in place. So, trustworthiness and being trustworthy are two complementary aspects of trust. Perceived trustworthiness is an individual's assessment of how much an object, a person, or an organization, can be trusted. People assess trustworthiness based on information they perceive and or receive influenced by cultural and past experiences, and both qualities can evolve through time. In conclusion, in the Socio-ethical AI context, trust notions still need further clarification to ensure that solutions foster public trust and fundamental rights for minimal or no risk in the AI data protection process and non-bias (see EU's AI act). Same regards how we connect trust with information credibility and ethical practices. As well as studying the socio-ethical AI implications (i.e., explainable AI) in the acceptance and use of the technology. Or even when using a trust to seek more control in automated and intelligent system predictions. Or, provide socio-ethical AI as transparency and responsibility solutions through trusted agencies and other audition mechanisms [2, 76]. ## Conclusions The first digital revolution, i.e. Internet revolution in 1990, has brought big changes to how we communicate and interact across-country. However, recent digital revolutions characterised by the data-driven and information revolutions transformed the world and society as we know it. AI systems enabled by the social network, followed by the ability to trace and persuade behaviours, have altered social democratic practices and applications. 
The challenge nowadays is finding ways to adjust current regulatory practices to these new digital practices. Including looking for ways to fight the advancement of potential AI malpractices and minimize the risk of malevolent use. The above findings reveal the importance of clarifying the user trust notions in recent AI discourse. This is to lessen possible misinterpretation of trust and notion gaps in these new ethical guidelines for trustworthy AI regulatory practices. This is to avoid misconceptions about user trust in AI discourse and fight the tendency to design vulnerable interactions that lead to further breaches of trust, both real and perceived. Provide also evidence of the lack of trust and understanding of computer science, especially in assessing trust user characteristics and user-centred perspectives [3, 9, 76]. Also, frame the term 'trustworthy computing' as critical for technology adoption and complex system appropriations. As [69] stresses, we need to conceive a more Human-Centered Artificial Intelligence (HCAI) paradigm where human-machine relationships are perceived as safe, reliable, and trustworthy. We are now acknowledging that despite the attention given to technical characteristics like 'privacy,''security,' or 'computational trust and reputation,' malevolent technological practices still prevail. Also, widespread AI-driven applications and their associated risks bring new challenges to distrust and fear across-country discourse. Take the examples of persuasive malevolent design, deceptive designs, unethical business decisions associated with increasing concerns on socio-ethical AI, technical and design features, and user characteristics that [9] work to mention. Another challenge addressed is the human-likeness that misguides users to misplace the attributes of a trusted human, human-to-human exchanges mediated through technology and their trust in a human-artefact relationship [72, 12, 54, 73]. Even though some researchers claim that 'people trust people, not technology, as technology does not possess moral agency and the ability to do right or wrong [24, 48, 56, 67, 90]. Researchers fail to acknowledge trust complexity and how its indirect measures affect users' trust perceptions in the system's adoption and appropriation. After two decades of investment and advances, we now change the discourse towards a human-centred view and the need to develop, deploy and measure the quality of trust perceptions from an HCD perspective. Addressing AI-related attributes like reliability, safety, security, privacy, availability, usability, accuracy, robustness, fairness, accountability, transparency, interpretability, explainability, ethics, and trustworthiness. ## Acknowledgments This study was partly funded by the Trust and Influence programme under grant agreement number 21IOE051, the European Office of Aerospace Research and Development (EOARD), and the U.S. Air Force Office of Scientific Research (AFOSR). This study was partly funded by AI-Mind, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 964220.
2308.11652
Accelerating Exact Combinatorial Optimization via RL-based Initialization -- A Case Study in Scheduling
Scheduling on dataflow graphs (also known as computation graphs) is an NP-hard problem. The traditional exact methods are limited by runtime complexity, while reinforcement learning (RL) and heuristic-based approaches struggle with determinism and solution quality. This research aims to develop an innovative approach that employs machine learning (ML) for addressing combinatorial optimization problems, using scheduling as a case study. The goal is to provide guarantees in optimality and determinism while maintaining the runtime cost of heuristic methods. Specifically, we introduce a novel two-phase RL-to-ILP scheduling framework, which includes three steps: 1) RL solver acts as coarse-grain scheduler, 2) solution relaxation and 3) exact solving via ILP. Our framework demonstrates the same scheduling performance compared with using exact scheduling methods while achieving up to 128 $\times$ speed improvements. This was conducted on actual EdgeTPU platforms, utilizing ImageNet DNN computation graphs as input. Additionally, the framework offers improved on-chip inference runtime and acceleration compared to the commercially available EdgeTPU compiler.
Jiaqi Yin, Cunxi Yu
2023-08-19T15:52:43Z
http://arxiv.org/abs/2308.11652v1
Accelerating Exact Combinatorial Optimization via RL-based Initialization - A Case Study in Scheduling ###### Abstract Scheduling on dataflow graphs (also known as computation graphs) is an NP-hard problem. The traditional exact methods are limited by runtime complexity, while reinforcement learning (RL) and heuristic-based approaches struggle with determinism and solution quality. This research aims to develop an innovative approach that employs machine learning (ML) for addressing combinatorial optimization problems, using scheduling as a case study. The goal is to provide guarantees in optimality and determinism while maintaining the runtime cost of heuristic methods. Specifically, we introduce a novel two-phase RL-to-ILP scheduling framework, which includes three steps: 1) RL solver acts as coarse-grain scheduler, 2) solution relaxation and 3) exact solving via ILP. Our framework demonstrates the same scheduling performance compared with using exact scheduling methods while achieving up to 128 \(\times\) speed improvements. This was conducted on actual EdgeTPU platforms, utilizing ImageNet DNN computation graphs as input. Additionally, the framework offers improved on-chip inference runtime and acceleration compared to the commercially available EdgeTPU compiler. ## I Introduction Combinatorial optimization (CO) is an optimization problem that finds the optimal solution from a vast search space, where graph-based scheduling is one of the classic CO problems: allocate the operators in computational graph into the given number of stages and minimize the on-cache memory, and communication cost simultaneously. Computational graph scheduling is fundamental to a wide range of hardware deployment domains such as FPGAs, CPU/GPU, and domain-specific accelerators. For example, the DNN model execution is computationally intensive, which requires many computational resources including on-cache memory, I/O-bandwidth, etc [1]. This problem becomes more critical for edge devices considering the small buffer size and fewer processing elements. More importantly, in the increasing uses of cloud-based domain-specific accelerators, enabling a fast runtime compilation (scheduling) is critical to the global management of cloud computing resources in efficiency and cloud utilization [2]. All together make scheduling an important task in pursuing both optimal quality of results and fast runtime. **Challenges of graph-level scheduling:** To clearly motivate this work, we provide a simple case study by evaluating conventional scheduling methods, including exact methods [3, 4], heuristic methods (commercial EdgeTPU compiler), and stand-alone RL/ML based scheduling methods. Specifically, we pick Google Edge TPU as the case study platform, with ImageNet model ResNet152 [5]. The properties of the three scheduling methods are presented in Table I. The key conclusions are summarized as follows: **1)** The advantage of exact methods [3, 4, 6] (e.g., ILP or SAT) is the optimality of scheduling results. ILP constructs the formulations and constraints to solve the scheduling problem, which guarantees solution optimality. However, due to the nature of the complexity, exact methods suffer from scalability issues with the highest solving runtime; **2)**. Heuristic methods [7, 8] have less solving runtime overhead but the scheduling results are sub-optimal. 
Similar to the EdgeTPU compiler, many vendor-specific libraries such as Nvidia cuBLAS, TVM, TensorFlow-Lite, and EdgeTPU compiler [9, 10, 11] are built upon the heuristic approach. **3)** Machine learning (ML) techniques have been applied to many combinatorial problems such as scheduling and other EDA problems [12, 13, 14, 15, 16, 17], which try to solve the given problem at inference runtime to accelerate the solving process, mostly aiming at generating near-optimal scheduling with inference runtime [18, 19, 20, 21, 19]. However, there are well known limitations in lacking determinism and guarantees for the performance and solving procedure. **Our solutions:** In this work, we use DNNs computation graph scheduling as a case study to illustrate the possibility of accelerating exact CO problem solving using a combination of reinforcement learning (RL) and exact solving techniques. Specifically, we propose Inc-ILP, a novel graph scheduling framework that combines both of me \begin{table} \begin{tabular}{l|l|l|l} \hline & Runtime (s) & QoR(MB) & Determinism \\ \hline Exact Methods & Highest (\(268\times\)) & Optimal & ✓ \\ \hline Heuristic & High (\(66\times\)) & Sub-optimal & ✓ \\ \hline Ours (RL only) & Lowest (\(1\times\)) & Sub-optimal & ✗ \\ \hline Ours (RL+ILP) & Low (\(5.5\times\)) & Optimal & ✓ \\ \hline \end{tabular} \end{table} TABLE I: Comparison of existing scheduling methodologies with motivating example of ResNet152 computation graph. ing and the exact method, where RL agent performs coarse-grain solving, and the exact method refines the coarse-grained initial solving and produces the optimal result. Thus, as shown in Table I, Inc-ILP generates deterministic and optimal scheduling results with significantly reduced solving runtime cost. The main contribution of this work can be summarized as follow: **(1)** we adopt an RL-based framework to imitate the behavior of existing exact methods, which provides initialized scheduling results as output. **(2)** To refine the RL solution for optimality and determinism, we perform local relaxation w.r.t RL scheduling solution and use exact method to obtain optimal scheduling results. **(3)** We select Google Edge TPU [22] as the backend and integrate Inc-ILP in an end-to-end compile-to-deploy framework to evaluate real-world performance. **(4)** Finally, we perform a comprehensive evaluation of Inc-ILP in optimality and solving runtime compared to commercial EdgeTPU compiler (heuristic) and pure exact method [4]. The results confirm the capability of Inc-ILP in generating optimal and deterministic results with significant speedups. As a result, we believe this work can offer some new intuitions for broadly improving determinism and (semi)optimality for ML in EDA and general combinatorial optimization. ## II Preliminary ### _Computational graph scheduling_ DNN models can be represented as computation graph \(G(V,E)\) with nodes \(V\) and edges \(E\). The nodes represent the operators in DNN models and the edges describe the dataflow dependency that connects all operators. Specifically, the problem of multi-stage DNN model scheduling can be formulated as follow: given the input of a computation graph and a set of scheduling constraints including the number of pipeline stages and resource constraint (e.g., on-cache memory resource), DNN model scheduling aims to generate the optimal scheduling solution \(S\) = \(s_{0}\),\(s_{1}\),...\(s_{n}\), where \(n\leq|V|\) and \(s_{n}\) represents the stage that node \(n\) scheduled. 
The operators in computation graphs will be deployed to given pipeline stages with the minimum (or maximum) optimization objectives like on-cache memory, communication cost, etc. Figure 2 displays a simple 3-stage pipelined Edge TPU scheduling outcome. Considering the left graph, the nodes for start, Input1 and Input2 are designated to stage-0. Stage-1 is composed of BatchNorm, ReLU, Conv1, and Conv2, while stage-2 incorporates Conv3, Zeropad, Avgpad, and end nodes. While hardware device pipelining can improve DNN execution performance by decreasing the size of off-cache memory, the runtime performance is highly sensitive to scheduling solutions, and finding the optimal one is critical due to limited computation resources including memory, and communication bandwidth. To demonstrate the impacts of scheduling solutions on the pipelining system, we select Google Edge TPU as backends and perform a comprehensive runtime performance analysis using 4-stage, 5-stage ImageNet ResNet152 [5] models pipelining, which is shown in Figure 1. We can observe that (1) scheduling solutions make significant differences in pipelined Edge TPU systems, which could result in \(2\times\) runtime difference; (2) many scheduling solutions might perform similarly but finding the best performance scheduling is statistically challenging. Hence, there is a great need of developing an near-optimal scheduling approach at low solving complexity. Note that for all two benchmarks, the performance of our proposed Inc-ILP scheduler is located in the left bottom corner which provides near-optimal runtime performance. The optimal scheduling solution should aim to minimize costs, such as peak memory usage and communication expenses. Figure 2 displays two distinct scheduling approaches for a synthetic DNN computation graph, with the number of parameters represented in the vertices. Although both schedules are valid, they differ in the assignment of the \(N_{Conv2}\) node. Fig. 1: Measurement system illustration of multi-stage pipelined Edge TPUs (physical system setup included in Figure 6). The performance of Inc-ILP scheduling results is located in the bottom-left corner with star markers. Fig. 2: Two distinct scheduling examples for a straightforward DNN computation graph differ solely in the allocation of the Conv2 node - the left scheduling results in reduced off-chip memory overhead, while the right scheduling minimizes device-to-device communication expenses. In the context of a three-stage pipelining system, the total execution time (\(T\)) of the computational graph is the sum of on-device execution runtime (\(T_{d}\)) and communication runtime (\(T_{c}\)), such that \(T=T_{d}+T_{c}\). Here, \(T_{c}\) is primarily influenced by off-chip memory usage, while \(T_{d}\) is affected by the communication expenses, i.e. size of tensors passed between stages. The scheduling solution with lower off-chip memory usage and communication overhead will likely have a faster execution time. It is important to note that in a pipeline scenario, \(T_{d}\) is dominated by the stage with the longest runtime. Among the two scheduling solutions, the left scheduling in Figure 2 has a lower memory upper bound (76347 in stage 3), resulting in an improved \(T_{c}\). On the other hand, the right scheduling ([2,2,1024]+[2,2,256]+[2,2,1024]) has a smaller device-to-device communication overhead between stages 2 and 3 compared to the left scheduling solution ([2,2,1024]+[2,2,512]+[2,2,1024]). 
As a result, the left scheduling has better memory usage and lower communication costs, leading to an optimized \(T_{d}\). In this work, we concentrate on Edge TPU backends where \(T_{c}\) has a greater impact than \(T_{d}\). Our aim is to first optimize the memory upper bound and then minimize device-to-device communication expenses. ### _Combinatorial problem solving methods_ Graph scheduling is a NP-hard combinatorial optimization problem and there is no optimal solution in polynomial solving runtime. There existing some traditional approaches including reinforce learning, heuristic, and exact approach, etc. Reinforcement learning [23, 24] has been applied to many combinatoraial optimization problem [16, 25, 26]. In [24], Bello et al. present an innovative approach to addressing combinatorial optimization challenges through the application of neural networks and reinforcement learning. The authors specifically target the traveling salesman problem (TSP) and develop a recurrent neural network capable of predicting a probability distribution of various city arrangements, given a collection of city coordinates. The method generates high-quality solutions for various combinatorial optimization problems, often surpassing the performance of traditional heuristics and metaheuristics. However, the solution based on reinforcement learning is non-deterministic and lacking optimal performance guarantees. Several renowned heuristic algorithms exist, such as list scheduling and the force-directed algorithm. The force-directed algorithm incorporates a "force" on each operation and aims to balance computation across all stages by minimizing this "force". List scheduling, in contrast, ranks operations based on specific metrics, scheduling the operation with the highest priority when available. Despite their capabilities, both heuristic algorithms share common limitations: they lack determinism and cannot guarantee the optimal quality of results. On the other hand, exact approaches [27, 28, 29, 30] include integer linear programming (ILP), satisfiability modulo theories (SMT), and integer difference constraints (SDC). Integer Linear Programming (ILP) and Satisfiability Modulo Theories (SMT) guarantee finding optimal solutions and serve as potent methods for tackling intricate optimization problems. However, they share the same scalability issues as the size and complexity of the problems they address increase. For ILP, the combinatorial nature of integer variables leads to a rapid growth in the number of potential solutions, making it difficult for ILP solvers to find optimal solutions efficiently. The scalability of ILP solvers is further affected by the presence of multiple objectives or constraints, which can significantly increase the search space. On the other hand, SMT solvers deal with the satisfiability of logical formulas with respect to various background theories. As the size and complexity of these formulas grow, the search space for satisfying assignments becomes increasingly large and challenging to navigate. Additionally, SMT solvers may struggle with certain types of problems, such as those involving non-linear arithmetic, which can exacerbate the scalability issue. ## III Approach ### _Overview_ The overall workflow of Inc-ILP including input, output is demonstrated in Figure 3. 
The Inc-ILP scheduling framework includes four phases: 1) RL pre-scheduling agent generates the coarse-grained scheduling results, 2) search space relaxation based on graph topological order, 3) Inc-ILP produces partially optimal results using exact solving, 4) scheduling results deployment on actual hardware devices. In phase 1, we adopt the RL model as pre-scheduling agent architecture. We take the benefits of the RL model which can mimic the behavior of exact methods with short inference runtime. Phase 2 performs search space relaxation based on the order of computational graph topological sorting. Then we exactly solve the refined search space and generate the partial optimal solution in phase 3. Specifically, we build ILP constraint formulation to solve the refined search space, which includes dependency constraints, parameter constraints, etc. Finally, in phase 4, we select Google Edge TPU as the backend to deploy scheduling results. The multi-stage pipelined Edge TPU system is shown in Figure 6. ### _Phase 1: RL based pre-scheduling_ As we mentioned before, RL pre-scheduling generates coarse-grained results, and the quality of results determines if we can achieve global optimality by search space relaxation. Figure 4 is the expanded illustration of the RL pre-scheduling agent specifications, which are composed of 4 steps (Phase 1.1 - Phase 1.4): embedding, encoder, decoder, and RL model training. **Computation graph embedding (Phase 1.1) -** The embedding of the computational graph extracts the key information and passes it to the RL model. The graph embedding consists of several components: (1) the absolute and relative coordinates of each node: for node \(N_{i}\), its absolute coordinate is the topological level of \(N_{i}\), and the relative coordinate will be the topological level of parent node \(N_{i}\). Note that we perform depth-first-search (DFS) algorithm to generate topological level, where each node is ordered to the closest position from the source node. For example, the topological order of node \(Concat\) in Figure 5 is 2. (2) node IDs: it is generated by hashing all operator names. For node \(N_{i}\), its node ID is \(N_{i}\). (3) memory consumption of each node and its output volume size: scheduling objectives of DNN model pipelining include minimizing peak-memory parameter caching and device-to-device communication. So, memory consumption and output volume size are the most critical attributes of each operator, which can be obtained from model attributes. **RL agent architecture (Phase 1.2, 1.3) -** As shown in Figure 4. The RL agent architecture is composed of an encoder and a decoder. Both the encoder and decoder are built with 256 Long Short-Term Memory (LSTM) cells [31]. **Encoder -** The encoder digests the graph embedding and transforms it into context information. For each LSTM cell in the encoder, it produces a sequence of latent (\(enc_{i}\)) memory states which record the encoding information and propagate along the LSTM dataflow. The last latent memory state will be passed to the decoder as input. **Decoder -** With the computation graph embedding and the latent memory state from the encoder, decoder produces a selection probability distribution over candidate computation nodes using the pointing mechanism. Firstly, similar to the encoder, decoder also generates another latent memory state (\(dec_{i}\)) at each LSTM cell. Then the latent memory will be passed to an attention network [32] to produce candidate node probability distribution. 
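The paper gives no code for the agent, but the encoder-decoder wiring just described can be sketched as follows. This PyTorch snippet is only one possible realization under assumed dimensions and names (256 hidden units per the text); how the pointed sequence is turned into a stage assignment is left out.

```python
# Hypothetical sketch of an LSTM encoder-decoder with attention-based "pointing"
# over candidate graph nodes. Names, dimensions, and wiring are assumptions.
import torch
import torch.nn as nn

class PointerAgent(nn.Module):
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(hidden, hidden)
        # attention parameters used to score (point at) candidate nodes
        self.W_enc = nn.Linear(hidden, hidden, bias=False)
        self.W_dec = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)
        self.dec_start = nn.Parameter(torch.zeros(hidden))  # trainable first decoder input

    def forward(self, node_feats):
        # node_feats: (batch, num_nodes, feat_dim) graph embedding
        enc_out, (h, c) = self.encoder(node_feats)           # (batch, n, hidden)
        batch, n, _ = enc_out.shape
        hx, cx = h[-1], c[-1]                                # last encoder latent state
        inp = self.dec_start.expand(batch, -1)
        dists = []
        for _ in range(n):                                   # one pointing step per node
            hx, cx = self.decoder(inp, (hx, cx))
            scores = self.v(torch.tanh(self.W_enc(enc_out) + self.W_dec(hx).unsqueeze(1)))
            probs = torch.softmax(scores.squeeze(-1), dim=-1)  # distribution over candidates
            dists.append(probs)
            inp = (probs.unsqueeze(-1) * enc_out).sum(dim=1)   # feed attended context back in
        return torch.stack(dists, dim=1)                     # (batch, n, n)
```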
We adopt the pointer network (PtrNet) [24] in the attention network, which has been applied to many graph combinatorial optimization problems. Finally, the candidate node probability distribution and latent memory state will be passed to the next LSTM cell. Note that the first decoding latent memory state is a trainable parameter. **RL model training (Phase 1.4) -** In this step, we iteratively optimize the parameter in the RL agent. We choose the cosine similarity as the basis for constructing the reward function. For each node i in computation graph G(V, E) (\(i\leq|V|\)), we use \(\eta(i)\) and \(\pi(i)\) to demonstrate the ground truth label and RL agent output sequence respectively. The cosine similarity can be shown in Equation 1. Here we introduce a constant \(\eta\) to ensure the validity of the equation. \[R=\ \frac{\sum(\pi(i)\cdot\eta(i))}{\max(\sqrt{\sum\pi(i)^{2}}\cdot\sqrt{ \sum\eta(i)^{2}},\epsilon)} \tag{1}\] The reward function for model parameter \(\theta\) optimization policy is the maximization of the _cosine similarity_. \[J(\theta|G)=\mathbb{E}_{\pi\sim p_{\theta}(\cdot|G)}(1-R(\pi|G)) \tag{2}\] For ground truth label \(\eta(i)\), we use exact methods to generate the global optimal scheduling results. RL pre-scheduling agent aims to imitate the behavior of the ground truth label. **Training dataset -** Collecting sufficient data for Reinforcement Learning (RL) model training is one of the most significant challenges. The DNN computational graphs is limited, which restricts the scalability of RL agents. This challenge is exacerbated by the need to balance dataset coverage and graph size. To overcome this issue, we employ randomly generated synthetic datasets to train our RL model, which Fig. 4: Four-steps Reinforcement Learning (RL) Agent: The encoder processes embedded data, converting it into contextual information, which is then relayed to the decoder. The decoder generates a predicted sequence, and a cosine similarity-based loss is computed to train the RL agent. Fig. 3: Inc-ILP overview including four-phases: RL pre-scheduling, search space relaxation, exact solving, and device deployment. offers better data coverage over graph complexity and size while avoiding the need for extensive data collection. To ensure the generated DAGs align with real-world distributions, we gather statistical data from real-world DNN models and analyze the key information including graph size and complexities (maximum in-degree for nodes in the computational graph), etc. Building on this insight, we introduce a DAG generator tailored for RL training. This generator consistently outputs computational graphs with a size of \(|V|=30\) and various graph complexities ranging from \(deg(V)\in{2,3,4,5,6}\), mirroring the distributions found in real-world scenarios. Our DAG generator provides a total of **1 million** computational graphs, with 200,000 graphs for each complexity \(deg(V)\) in \(2,3,4,5,6\). The synthetic dataset has superior generalizability with test datasets for two main reasons. Firstly, it mimics the graph structure and provides a similar degree of the test dataset, ensuring that the RL model is trained on synthetic data that closely resembles the test data. Secondly, the graph sampler generates a vast dataset of over 1 million synthetic graphs, providing extensive coverage of different types of test graphs. This large and diverse dataset improves the robustness and performance in real-world scenarios. 
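Both the reward of Eq. (1) and a toy stand-in for the synthetic DAG sampler are straightforward to write down. The NumPy sketch below is illustrative only: the `eps` guard plays the role of the small constant in the denominator of Eq. (1), and `random_dag` merely mimics the described graph sizes and in-degrees; it is not the authors' generator.

```python
# Illustrative NumPy sketch of the cosine-similarity reward of Eq. (1) and a
# toy random-DAG sampler in the spirit of the training-set generator above.
import numpy as np

def cosine_reward(pi, eta, eps=1e-8):
    """pi: stage sequence predicted by the RL agent; eta: ground-truth (exact) schedule."""
    pi, eta = np.asarray(pi, dtype=float), np.asarray(eta, dtype=float)
    denom = max(np.sqrt((pi**2).sum()) * np.sqrt((eta**2).sum()), eps)
    return (pi * eta).sum() / denom

# Training maximizes the expected reward, i.e. minimizes 1 - R as in Eq. (2):
# loss = 1.0 - cosine_reward(agent_schedule, ilp_schedule)

def random_dag(num_nodes=30, max_in_degree=3, rng=np.random.default_rng(0)):
    """Toy DAG sampler: each node picks up to max_in_degree parents among
    earlier nodes, so 0..num_nodes-1 is a topological order by construction."""
    edges = []
    for v in range(1, num_nodes):
        k = rng.integers(1, max_in_degree + 1)
        parents = rng.choice(v, size=min(int(k), v), replace=False)
        edges += [(int(u), v) for u in parents]
    return edges
```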
### _Phase 2: Search Space Relaxation_ The main drawback of coarse-grained scheduling results from RL pre-scheduling is lacking determinism and optimality. In this section, we perform relaxation to broaden the search space and obtain the partial optimum. Space relaxation enables the application of exact methods to produce partial optimums without considering about scalability issues. Figure 5 illustrates the search space relaxation of 2-stage pipelining with the relaxation level \(\gamma=1\). Firstly, RL pre-scheduling results pipeline the input model into 2 stages, where all boundary edges cut through stage-0 to stage-1. Second, we create an ASAP (As Soon As Possible) topological sort for the computation graph, positioning each node as near to the source node as feasible. The ASAP topological sorting can be seen on the left side of Figure 5. Thirdly, we find the earliest stage \(s_{i}\) of the source node and the latest stage \(s_{j}\) of the sink node in the boundary edges. Finally, we refine all nodes whose topological sorting order is located between \(s_{i}\) to \(s_{j}\). Using the computational graph in Figure 5 as an example, the boundary edges include {Relu_1, Conv_1}, {Conv_2, Add}, and {Concat, Add}. We use the star marker to identify the boundary edges. The earliest topological sorting level for source nodes is 2 (Relu_1, Concat). Similarly, the latest sorting level for sink nodes is 4 (Add). Because we assume \(\gamma=1\), all nodes whose sorting stage is located between 1 to 5 will be refined in the next step using exact methods. **While the exact methods based scheduling problem is an NP-hard problem, the relaxed problem on top of RL-based scheduling is significantly simplified due to drastic search space reduction.** ### _Phase 3: Exact Solving via Multi-Obj ILP_ Phase 3 utilizes SDC+ILP-based scheduling [4] to execute the final refinement solving, aiming for optimal solutions. According to the systematic design of our multi-stage Edge TPU pipelining system, we adopt the following three optimization objectives when constructing ILP formulations: (1) peak memory usage for parameter caching; (2) total off-cache memory across all stages; (3) maximum communication cost in the multi-stage pipelining system. This section discusses the development of the ILP formulation. **ILP formulation generation** - In order to reschedule the refined search space, we construct an Integer Linear Programming (ILP) formulation comprised of various constraints. This formulation includes data dependence constraints, parameter caching constraints, and communication cost constraints, among others. In this section, we provide a concise overview of these three representative constraints. _(1) Dependence constraint_ - Dependence constraint formulates the dependence correctness of operators in the computation graph. Specifically, for edge \((N_{i},N_{j})\), the dependence constraint requires the \(N_{i}\) to be executed before \(N_{j}\). We use \(s_{i}\) and \(s_{j}\) to represent the stage of node \(N_{i},N_{j}\). Then the dependence constraint [4] can be formulated as: \[s_{i}-s_{j}\leq 0 \tag{3}\] _(2) Parameter caching constraint_ - The most important optimization objective is the peak memory footprint \(m_{peak}\). We use \(m_{k}\) to denote the memory usage in stage k, which can be estimated by the sum of parameter sizes of operators that are scheduled to stage k. 
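As a modeling sketch of the constraints introduced so far, the snippet below assembles the binary stage-assignment variables, the dependence constraint of Eq. (3), and the peak memory footprint \(m_{peak}\) using the open-source PuLP modeler; the paper itself uses an SDC+ILP formulation solved with CPLEX, and the communication-cost terms are omitted here. Function and variable names are illustrative.

```python
# Illustrative PuLP sketch: x[v, k] = 1 iff node v is placed in stage k,
# dependence constraint of Eq. (3), per-stage memory m_k, and a minimized
# peak-memory bound. Not the paper's CPLEX formulation.
import pulp

def build_ilp(nodes, edges, params, num_stages):
    prob = pulp.LpProblem("pipeline_scheduling", pulp.LpMinimize)
    x = {(v, k): pulp.LpVariable(f"x_{v}_{k}", cat="Binary")
         for v in nodes for k in range(num_stages)}
    m_peak = pulp.LpVariable("m_peak", lowBound=0)

    # every node is assigned to exactly one stage
    for v in nodes:
        prob += pulp.lpSum(x[v, k] for k in range(num_stages)) == 1
    # integer stage of node v, expressed from the binaries
    stage = {v: pulp.lpSum(k * x[v, k] for k in range(num_stages)) for v in nodes}
    # dependence constraint, Eq. (3): s_i - s_j <= 0 for every edge (i, j)
    for (i, j) in edges:
        prob += stage[i] - stage[j] <= 0
    # per-stage memory m_k bounded by the peak parameter-caching footprint
    for k in range(num_stages):
        prob += pulp.lpSum(params[v] * x[v, k] for v in nodes) <= m_peak
    prob += 1 * m_peak   # objective: minimize the peak footprint
    return prob, x

# prob, x = build_ilp(nodes, edges, params, num_stages=3); prob.solve()
```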
We use the following expression to formulate the parameter caching constraint: \[m_{\text{k}}=\sum_{\forall v\in V}p_{v}\cdot s_{v}^{k} \tag{4}\] where variable \(p_{v}\) is the estimation of memory consumption for operator \(v\) based on parameter size, and \(s_{v}^{k}\) is binary variable to denote if operator \(v\) is scheduled to stage \(k\). \(m_{peak}\) is the highest memory footprint among all stages: \[\left\{\begin{array}{ll}\textbf{min}&m_{\text{peak}}\\ \wedge&m_{\text{k}}\leq m_{\text{peak}}\\ \wedge&\text{Equations (\ref{eq stages \(k\) and stage \(k+1\). For edge the \(e\) from node \(v_{i}\) to node \(v_{j}\), we build the following ILP formulations: \[\small{\mathit{com}_{k,k+1}}=\sum_{e}t_{\mathsf{e}}\cdot\alpha_{\mathsf{e}} \tag{6}\] where \(t_{\mathsf{e}}\) represents the tensor size between nodes \(v_{i}\) and \(v_{j}\), and \(\alpha_{\mathsf{e}}\) is a binary variable to denote if \((s_{i}==k)\wedge(s_{i}<s_{j})\). In the previous formulation, we employ logical operators to represent the variable \(\alpha_{\mathsf{e}}\). Consequently, we require an additional set of constraints to represent these logical operators. For instance, consider the logical AND operation. Given two binary variables \(x\) and \(y\), the logical AND ILP\({}_{\wedge}(x,y)\) can be represented as follows: \[\left\{\begin{array}{l}\text{ILP}_{\wedge}(x,y)\geq x+y-1\\ \text{ILP}_{\wedge}(x,y)\leq x\\ \text{ILP}_{\wedge}(x,y)\leq y\end{array}\right. \tag{7}\] ### _Phase 4: Deployment and system integration_ We build an on-device framework that integrates the Inc-ILP with multi-stage pipelined Edge TPU systems and the illustration of the integration framework is shown in Figure 6. The workflow includes the following several steps: * **Graph construction -** The Inc-ILP approach takes a graph, representing the dataflow of deep neural networks (DNNs), as its initial input. Our framework initially extracts the computation graph from TensorFlow Lite (TFLite), which includes details such as operator type, parameter dimensions, and output volume size, among others. * **Scheduling via Inc-ILP -** With the computational graph, Inc-ILP solves the formulations generated in the prior step and extracts solutions corresponding to the original model(s) TensorFlow frozen graph. Subsequently, our framework employs the TFLite TOCO converter to generate model subgraphs, which will be assigned to particular Edge TPU devices based on the obtained solution. * **Deployment -** In the final step, our framework utilizes the Edge TPU Compiler to transform the subgraph operations into Edge TPU-specific operations and deploy the subgraphs onto Edge TPUs without any additional optimizations. As all Edge TPU models undergo INT8 quantization prior to deployment, our scheduling framework factors in this quantization when generating communication and memory caching constraints. ## IV Experiment In this section, we aim to experimentally demonstrate the results and effectiveness of Inc-ILP results. The comparison baselines include exact method (ILP) and the heuristic method (commercial EdgeTPU compiler). We demonstrate the performance of our work from three aspects: quality of results w.r.t. parameter caching, solving runtime speedup, and on-device runtime performance. ### _Experiment Setup_ We select Google Edge TPU as Inc-ILP backends and choose 10 ImageNet classification models as benchmarks to verify the effectiveness of Inc-ILP which is shown in Table II. 
During the RL model training process, we use Nvidia 2080 Ti with Intel Xeon Gold 6230 x20 CPUs. For RL training setup, we select \(10^{-4}\) as the learning rate with 300 epochs and batch size 128 using Adam optimizer. As we mentioned before, the RL encoder and decoder are built with 256 LSTM blocks. Besides, we adopt the IBM ILOG CPLEX 1 as the incremental ILP solver. Finally, we select the Edge TPU compiler as the baseline which is a heuristic-based DNN scheduling commercial tool for Edge TPU devices. We build a multi-stage Edge TPU system for on-device evaluation which is shown in Figure 5(b). Footnote 1: [https://www.ibm.com/analytics/cplex-optimizer](https://www.ibm.com/analytics/cplex-optimizer) ### _Quality of results and solving runtime analysis_ The main goal of Inc-ILP is to achieve optimal solutions at significantly reduced runtime. Thus, we thoroughly examine its capacity to achieve optimal solutions and analyze the runtime expenses involved in the process. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \begin{tabular}{c} Xception \\ [33] \\ \end{tabular} & \begin{tabular}{c} ResNet50 \\ [5] \\ \end{tabular} & \begin{tabular}{c} ResNet101 \\ [5] \\ \end{tabular} & \begin{tabular}{c} ResNet152 \\ [5] \\ \end{tabular} \\ \hline \(|\mathbf{V}|\) & 134 & 177 & 347 & 57 \\ \hline \(\mathbf{deg}(\mathbf{V})\) & 2 & 2 & 2 & 2 \\ \hline \(\mathbf{Depth}\) & 125 & 168 & 338 & 508 \\ \hline \multirow{3}{*}{\(|\mathbf{V}|\)} & DenseNet211 & ResNet1012\(\times\)2 & ResNet1522\(\times\)2 & DenseNet169 \\ & [34] & [35] & [35] & [34] \\ \cline{1-1} \cline{2-5} & 429 & 379 & 566 & 597 \\ \hline \(\mathbf{deg}(\mathbf{V})\) & 2 & 2 & 2 & 2 \\ \hline \multirow{3}{*}{\(|\mathbf{Depth}|\)} & 428 & 371 & 558 & 596 \\ & [34] & [36] & & \\ \hline \(|\mathbf{V}|\) & 709 & 782 & & \\ \hline \(\mathbf{deg}(\mathbf{V})\) & 2 & 4 & & \\ \hline \(\mathbf{Depth}\) & 708 & 571 & & \\ \hline \end{tabular} \end{table} TABLE II: Statistics of DNN computational graph benchmarks Fig. 6: Inc-ILP end-to-end overflow and multi-stage pipeline Edge TPUs and energy measurement system. First, we compare the solving runtime speedups of Inc-ILP with two baselines: ILP-based exact method and commercial Edge TPU compiler, while the quality-of-result metrics if the percentage of DNNs graphs are optimally scheduled (Figure 7). The vertical axis is the Inc-ILP average solving runtime speedup over all ten benchmarks in Table II, where the percentage of benchmarks that achieve global optimum are presented w.r.t the color-bar. As shown in Figure 7, Inc-ILP offers significant solving runtime speedup over exact method with identical optimal solutions, where full optimality (100%) is reached at relaxation level \(\gamma=10\). Specifically, Inc-ILP provides \(54\times\), \(128\times\), and \(115\times\) speedups for 4-stage, 5-stage, and 6-stage scheduling, respectively (\(\gamma=10\)), with full optimality achieved. This confirms that RL-based initialization successfully narrows the search space, which alleviates the scalability issue of exact methods and makes finding the optimal solution within short solving runtime possible. Secondly, Inc-ILP offers \(27\times\), \(22\times\), and \(19\times\) speedups (\(\gamma=10\)) over EdgeTPU compiler (heuristic), while offering guarantees in optimality. In conclusion, Inc-ILP not only provides the solving runtime improvement but also generates the optimal scheduling solution. We can also see that relaxation level \(\gamma\) is an important factor that drives the Fig. 
8: Convergence analysis w.r.t relaxation depth using parameter caching metric as example. All three objectives reach optimal after the convergence point. Fig. 7: Inc-ILP solving runtime speedup over two baselines. According to the colorbar, the percentage of benchmarks reaching optimum is shown in marker color. Inc-ILP provides global optimal scheduling results with significant solving runtime speedup. trade-off between optimality and runtime. To understand the impacts of the relaxation level \(\gamma\), we perform a case-by-case analysis for each benchmarks, shown in Figure 8. While we use the peak parameter caching as the objective metric for this evaluation, the three metrics all reach optimal points after careful search space relaxation. In this figure, we compare the size of peak memory usage between Inc-ILP and exact methods where the \(x\)-axis and \(y\)-axis represent the relaxation level \(\gamma\) and the parameter caching results of Inc-ILP and the optimum. Three important conclusions can be derived from Figure 8. First, RL-based pre-scheduling provides a great initialization result for later exact ILP solving. For example, three benchmarks achieve the global optimum without ILP refinement using RL-based scheduling (\(\gamma=0\)). As for all benchmarks, the scheduling results from the RL pre-scheduling agent have peak parameter caching gaps of 1.88%, 1.12%, and 4.50% to optimum, in 4-stage, 5-stage, and 6-stage pipelining, respectively. The quality of the initialization confirms the great confidence in including the global optimal solution in the relaxed search space. Second, most benchmarks achieve optimal scheduling results with small relaxation levels. For example, ResNet152 depth is 508, while the optimal convergence point is reached by Inc-ILP with only \(\gamma=8\). We observe that for all graphs tested, the 10-level relaxation is sufficient to generate global optimal results for all cases. Increasing the relaxation level beyond 10 brings marginal benefits but could reduce the advantages of Inc-ILP in solving runtime speedups. Finally, although the graph size of DNN computation graph benchmarks is vary from 134 to 782, the relaxation level to achieve optimal peak-memory results remains stable for different benchmarks, which proves the scalability and generalizability of Inc-ILP. For example, both DenseNet121 and DenseNet201 achieve optimal results when \(\gamma=6\), but they differ in graph size (\(V\)=429 for DenseNet121 and \(V\)=709 for DenseNet201). ### _On-device inference runtime analysis_ Finally, we compare the runtime performance between Inc-ILP, exact methods, and commercial Edge TPU compiler. The test system is shown in Figure 6 and the pipelined Edge TPU devices are connected to stable 5V supplying voltage for all tests. The on-chip inference runtime results are shown in Figure 9. The vertical axis represents the relative runtime normalized to the Edge TPU compiler. Note that as we mentioned before, Inc-ILP has identical scheduling results with exact methods. Thus we merge the scheduling results of Inc-ILP and exact methods into one column. As shown in Figure 9, Inc-ILP has significant runtime performance improvement over the Edge TPU compiler. Inc-ILP benefits from the ILP refinement and guarantees partial optimality. Overall, Inc-ILP provides \(1.065\times\), \(1.068\times\), and \(2.056\times\) execution runtime performance speedup in 4-stage, 5-stage, and 6-stage pipelining. 
For example, Inc-ILP provides a \(4.07\times\) runtime speedup for ResNet152 in 6-stage pipelining. We also observe a more substantial performance improvement in 6-stage pipelining. The main reason is that the quality of the heuristic scheduling results degrades as the scheduling complexity grows with a larger number of pipeline stages.

## V Conclusion

This work proposes a novel approach for enhancing ML-based optimization in determinism and optimality. The key idea is to apply ML-based solving at an early stage, where it is coarse but cheap in runtime, and then deploy exact solving to refine the coarse solution to optimality over a significantly reduced search space. Specifically, we propose Inc-ILP, a DNN model scheduling framework that combines exact methods with RL-based scheduling. The RL pre-scheduling agent narrows the search space, and Inc-ILP strategically refines the RL initialization with the exact method. Inc-ILP thus benefits from the scalability of RL-based scheduling as well as the optimality guarantees of exact methods. The experimental results confirm the proposed concept: Inc-ILP obtains the exact optimal scheduling with significant speedups over both the exact method and the heuristic-based commercial Edge TPU compiler, which produces non-optimal scheduling.

Fig. 9: On-device runtime comparison (Inc-ILP matches the performance of the exact method). Inc-ILP performs better than the Edge TPU compiler for all benchmarks.
2308.00987
Percolation in higher order networks via mapping to chygraphs
Percolation theory investigates systems of interconnected units, their resilience to damage and their propensity to propagation. For random networks we can solve the percolation problems analytically using the generating function formalism. Yet, with the introduction of higher order networks, the generating function calculations are becoming difficult to perform and harder to validate. Here, I illustrate the mapping of percolation in higher order networks to percolation in chygraphs. Chygraphs are defined as a set of complexes where complexes are hypergraphs with vertex sets in the set of complexes. In a previous work I reported the generating function formalism to percolation in chygraphs and obtained an analytical equation for the order parameter. Taking advantage of this result, I recapitulate analytical results for percolation problems in higher order networks and report extensions to more complex scenarios using symbolic calculations. The code for symbolic calculations can be found at https://github.com/av2atgh/chygraph.
Alexei Vazquez
2023-08-02T07:43:01Z
http://arxiv.org/abs/2308.00987v2
# Percolation in higher order networks via mapping to chygraphs ###### Abstract Percolation theory investigates systems of interconnected units, their resilience to damage and their propensity to propagation. For random networks we can solve the percolation problems analytically using the generating function formalism. Yet, with the introduction of higher order networks, the generating function calculations are becoming difficult to perform and harder to validate. Here, I illustrate the mapping of percolation in higher order networks to percolation in chygraphs. Chygraphs are defined as a set of complexes where complexes are hypergraphs with vertex sets in the set of complexes. In a previous work I reported the generating function formalism to percolation in chygraphs and obtained an analytical equation for the order parameter. Taking advantage of this result, I recapitulate analytical results for percolation problems in higher order networks and report extensions to more complex scenarios using symbolic calculations. ## I Introduction The field of network science has matured and diversified. In recent years there has been an explosion of work on higher order networks, including multiplex networks, interacting networks, interdependent networks, their extensions to hypergraphs and simplicial complexes [1; 2; 3]. These mathematical constructions capture more elements of the real systems they represent. In turn, there is a significant growth in the number of independent calculations that are hard to grasp by any individual researcher. Should the field continue in that direction its risks to become fragmented. To consolidate the work on higher order networks, I introduced complex hypergraphs (chygraphs), a combinatorial construction defined as a set of complexes where complexes are hypergraphs with vertex sets in the set of complexes [4]. My goal was to apply results on chygraphs to all higher order networks in its class. In particular, I have obtained an analytical solution to the emergence of a giant component on chygraphs [4]. With this result at hand, solving percolation problems in higher order networks is reduced to plugging in the parameters specific to the structure under consideration and getting the result via symbolic calculations. Here I illustrate the mapping of higher order networks to chygraphs to solve percolation problems. The report is organized as follows. In Sec. II I reintroduce the definition of chygraphs [4] and illustrate the mapping of graphs, hypergraphs and higher order networks to chygraphs. In Sec. III I go over the main result of the generating function calculation in Ref. [4], an analytical expression for the order parameter of the phase transition from disconnected clusters to the emergence of a giant component. For the first time, I report the function to perform the symbolic calculation of the order parameter. Then, in Secs. IV-VI I illustrate the application of the symbolic calculation to specific examples. Finally, I end with the conclusions in Sec. VII. ## II Chygraphs Combinatorial constructions help to abstract complex systems into their basic units and their interactions. A _graph_ or network \(G(V,E)\) is a set vertices \(V\) and a set of edges \(E\), where edges are pairs of vertices. The vertices (nodes) represent the system basic units. The edges (links) represent the interactions between nodes. In some systems the associations between the basic units goes beyond pairwise interactions. 
A _hypergraph_\(H(V,E)\) is a set vertices \(V\) and a set of hyperedges \(E\), where hyperedges are subsets of \(V\) (\(E=\{e_{i}\subset V,i=1,\ldots,m\}\)). These constructions can be extended to higher order networks by taking into account the existence of layers with different properties and/or interacting between them [1; 2; 3]. My key point here is that many of these higher order network definitions can be encapsulated in the following combinatorial construction [4] **Definition 1**: _A complex hypergraph (chygraph) is a set of complexes where complexes are hypergraphs with a vertex set in the set of complexes [4]._ This concise definition spells two key properties of complex systems: self-reference and fine-grained structure. Self-reference: Complexes are build from other complexes and they are the building blocks for other complexes as well. Fine-grained structure: The complexes within a complex are organized as a hypergraph. The complexes are characterized by different properties. We say a complex A includes a complex B when B is in the complex set of A. In turn, we say B is included in A. The _chy-degree_ of a complex A is the number of other complexes including A. The _cardinality_ of a complex is the number of complexes it includes. An _atom_ is a complex that does not includes any complexes. A chygraph where the complexes are hyperedges (hypergraphs with one hyperedge) is called an _ubergraph_, as originally defined by Joslyn and Nowak[5]. Finally, a chygraph may contain _layers_ setting apart different types of complexes. Let us unravel these definitions with some examples. A graph is represented by a two layer chygraph (Fig. 1a). One layer of atoms representing the graph nodes and another for the graph links. Nodes are included in any number of links and links include exactly two nodes. A hypergraph is similar to a graph where links (hyperedges) include any number of nodes (Fig. 1b). A multiplex graph is a chygraph with a layer of atoms representing the nodes and layers of complexes corresponding to different link types (Fig. 1c). Two interacting hypergraphs are represented by two hypergraphs mapped to chygraphs, plus an additional layer of complexes representing the hyperedges containing nodes from both hypergraphs (Fig. 2). This construction can be extended to any number of hypergraphs introducing one additional layer for each pair of interacting hypergraphs. ## III Order parameter of the percolation transition Given a complex we can reach the complexes it includes and the complexes it is included in. Repeating this process recursively we will reach a subset of the set of complexes, which in some instances may be the whole set of complexes. That subset of complexes is called a _component_. The size of a component is the number of complexes it contains. A giant component is a component with a size of the order to the number of complexes. A chygraph with several complexes and a few complex-to-complex inclusions has many components of small size. As the number of complex-to-complex inclusions increases there will be a point where a giant component will emerge. Using the generating function formalism I have characterized the transition from finite size components to the emergence of a giant component [4]. 
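Before turning to that formalism, the basic objects are simple to prototype. The following dict-based sketch is purely illustrative (the toy complexes and their labels are invented): each complex maps to the set of complexes it includes, and a component is obtained by a breadth-first search that follows inclusions in both directions.

```python
# Illustrative sketch: a chygraph as a dict mapping each complex to the set of
# complexes it includes; atoms include nothing. Components follow both the
# "includes" and "is included in" directions. Labels are invented.
from collections import deque

chygraph = {            # toy example: two atoms (nodes) and one edge complex
    "node_a": set(),
    "node_b": set(),
    "edge_ab": {"node_a", "node_b"},
}

def included_in(chygraph, c):
    return {d for d, members in chygraph.items() if c in members}

def chy_degree(chygraph, c):       # number of complexes including c
    return len(included_in(chygraph, c))

def cardinality(chygraph, c):      # number of complexes c includes
    return len(chygraph[c])

def component(chygraph, start):
    """Breadth-first search over both inclusion directions."""
    seen, queue = {start}, deque([start])
    while queue:
        c = queue.popleft()
        for d in chygraph[c] | included_in(chygraph, c):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

print(component(chygraph, "node_a"))   # all three complexes form one component
```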
The central object is the 6 dimensional tensor \[(A_{--})_{nk}^{ml} = \delta_{mn}\delta_{lk}-\langle\kappa\rangle_{nk}\delta_{nl},\] \[(A_{-+})_{nk}^{ml} = -\left[\langle\bar{s}\rangle_{nk}\delta_{mk}+\langle s\rangle_{ nk}(1-\delta_{mk})\right]\delta_{nl},\] \[(A_{+-})_{nk}^{ml} = -\left[\langle\bar{\kappa}\rangle_{nk}\delta_{mk}+\langle\kappa \rangle_{nk}(1-\delta_{mk})\right]\delta_{nl},\] \[(A_{++})_{nk}^{ml} = \delta_{mn}\delta_{lk}-\langle s\rangle_{nk}\delta_{nl}, \tag{1}\] where the upper bar denotes the excess averages \[\langle\bar{\kappa}\rangle_{nk}=\frac{\langle\kappa(\kappa-1)\rangle_{nk}}{ \langle\kappa\rangle_{nk}}, \tag{2}\] \[\langle\bar{s}\rangle_{nk}=\frac{\langle s(s-1)\rangle_{nk}}{\langle s\rangle _{nk}}. \tag{3}\] The interpretation of \(A\) is as follows. The indexes \(m\), \(l\) and \(k\) label layers from 0 to \(L-1\), where \(L\) is the number of layers. \((A_{ij})_{lk}^{ml}\) is the expected component size that can be reached coming from a complex at layer \(m\) that is included (\(i=-\)) or includes (\(i=+\)) a complex C in layer \(l\) and then recursively following the complexes that C includes from layer \(k\) (\(j=-\)) and the complexes in layer \(k\) that include C (\(j=+\)). The matrix \(\langle\kappa\rangle_{nk}\) is the expected number of inclusions of a given complex in layer \(n\) into complexes in layer \(k\) Figure 1: Chygraph representation of a graph, hypergraph and multiplex hypergraph. Nodes are represented by circles and edges/hyperedges by squares. Figure 2: Chygraph representation of two interacting hypergraphs. Nodes are represented by circles and hyperedges by squares. One hypergraph is colored black and the other white. The hyperedges containing nodes in both hypergraphs are colored gray. The matrix \(\langle s\rangle_{nk}\) is the expected number of inclusions of complexes in layer \(k\) by a complex in layer \(n\). The matrix \(\langle\bar{\kappa}\rangle_{nk}\) is the excess chy-degree. When \(m=k\), there is a bias proportional to the chy-degree of layer \(n\) complexes to layer \(m\) and the inclusion used to come from layer \(m\) into \(n\) is subtracted when coming back to layer \(m\). \(\langle\bar{s}\rangle_{nk}\) is the excess inclusion size. When \(m=k\), there is a bias proportional to the number of inclusions of layer \(n\) complexes into layer \(m\) complexes and the inclusion used to come from layer \(m\) into \(n\) is subtracted when coming back to layer \(m\). The order parameter \(\Lambda\) is the maximum eigenvalue of the matrix \(-\mathrm{vec}^{2}A\) \[\Lambda=\max_{\lambda\in\mathrm{eigenvals}(-\mathrm{vec}^{2}A)}\lambda \tag{4}\] where \(\mathrm{vec}^{2}A\) is defined by \[\mathrm{vec}^{2}A=\begin{bmatrix}\mathrm{vec}(A_{--})&\mathrm{vec}(A_{-+}) \\ \mathrm{vec}(A_{+-})&\mathrm{vec}(A_{++})\end{bmatrix}, \tag{5}\] and vec is the vectorization operator \((\mathrm{vec}A)_{mL+l,nL+k}=A_{nk}^{ml}\). When \(\Lambda<0\) the chygraph is made of finite size components, while when \(\Lambda>0\) there is a giant component. Since \(\Lambda(\mathrm{vec}^{2}A)=0\) implies \(\mathrm{det}(\mathrm{vec}^{2}A)=0\), we define the pseudo-order parameter \[\theta=-\det(\mathrm{vec}^{2}A). \tag{6}\] ### Symbolic calculation The specificities associated with different chygraphs are encoded in the matrices \(\langle\kappa\rangle\), \(\langle\bar{\kappa}\rangle\), \(\langle s\rangle\) and \(\langle\bar{s}\rangle\). Once those matrices have been specified the order parameter \(\Lambda\) and \(\theta\) are calculated using Eqs. (1)-(6). 
In general such calculations are cumbersome and a symbolic calculator is recommended. For example, in symbolic python we can calculate \(\mathrm{vec2A}(\langle\kappa\rangle,\langle\bar{\kappa}\rangle,\langle s\rangle,\langle\bar{s}\rangle)\) using the python class

```python
from sympy import *

class vec2A:
    def __init__(self, k, K, s, S):
        L = k.shape[0]
        Lv = L**2
        A = zeros(2*Lv, 2*Lv)

        def delta(i, j):
            return int(i == j)

        for i in range(2):
            for j in range(2):
                for m in range(L):
                    for l in range(L):
                        for n in range(L):
                            for o in range(L):
                                if i == 0 and j == 0:
                                    # (A--), Eq. (1)
                                    A[i*Lv + m*L + l, j*Lv + n*L + o] = (
                                        delta(m, n)*delta(l, o) - k[n, o]*delta(n, l))
                                elif i == 0 and j == 1:
                                    # (A-+), Eq. (1)
                                    A[i*Lv + m*L + l, j*Lv + n*L + o] = (
                                        -(S[n, o]*delta(m, o)
                                          + (1 - delta(m, o))*s[n, o])*delta(n, l))
                                elif i == 1 and j == 0:
                                    # (A+-), Eq. (1)
                                    A[i*Lv + m*L + l, j*Lv + n*L + o] = (
                                        -(K[n, o]*delta(m, o)
                                          + (1 - delta(m, o))*k[n, o])*delta(n, l))
                                else:
                                    # (A++), Eq. (1)
                                    A[i*Lv + m*L + l, j*Lv + n*L + o] = (
                                        delta(m, n)*delta(l, o) - s[n, o]*delta(n, l))
        self.A = A

    def theta(self):
        return -self.A.det()

    def eigenvals(self):
        return list((-self.A).eigenvals().keys())
```

where \(k\), \(K\), \(s\) and \(S\) stand for \(\langle\kappa\rangle\), \(\langle\bar{\kappa}\rangle\), \(\langle s\rangle\) and \(\langle\bar{s}\rangle\), respectively (See Supplemental Material at [URL will be inserted by publisher]).

## IV Graphs and Hypergraphs

The best way to illustrate the applicability of the function vec2A is by example. We start with known results for graphs and hypergraphs. Consider a hypergraph with arbitrary vertex degree and hyperedge cardinality distributions. Let \(\langle k\rangle\) and \(\langle k(k-1)\rangle/\langle k\rangle\) be the average degree and excess node degree, respectively. Let \(\langle c\rangle\) and \(\langle c(c-1)\rangle/\langle c\rangle\) be the average hyperedge cardinality and excess cardinality, respectively. The hypergraph nodes and hyperedges are said to be present with probability \(p\) and \(q\), respectively. A hypergraph is represented by a chygraph with a layer for nodes and a layer for hyperedges (Fig. 1b). Let us label the nodes by layer 0 and the hyperedges by layer 1. Then

\[\langle\kappa\rangle=\begin{bmatrix}0&p\langle k\rangle\\ 0&0\end{bmatrix} \tag{7}\]

\[\langle\bar{\kappa}\rangle=\begin{bmatrix}0&p\langle\bar{k}\rangle\\ 0&0\end{bmatrix}, \tag{8}\]

\[\langle s\rangle=\begin{bmatrix}0&0\\ q\langle c\rangle&0\end{bmatrix}, \tag{9}\]

\[\langle\bar{s}\rangle=\begin{bmatrix}0&0\\ q\langle\bar{c}\rangle&0\end{bmatrix}. \tag{10}\]

Using vec2A we obtain the order parameter (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 1-3)

\[\theta_{H}=pq\langle\bar{k}\rangle\langle\bar{c}\rangle-1, \tag{11}\]

\[\Lambda_{H}=\sqrt{\theta_{H}+1}-1. \tag{12}\]

There is a giant component when \(\theta_{H}\geq 0\), in agreement with previous reports [6; 7; 8]. A graph is a restricted version of a hypergraph where the edges contain exactly two nodes. In this case \(\langle\kappa\rangle\) and \(\langle\bar{\kappa}\rangle\) are given by Eqs. (7) and (8), while

\[\langle s\rangle=\begin{bmatrix}0&0\\ 2q&0\end{bmatrix}, \tag{13}\]

\[\langle\bar{s}\rangle=\begin{bmatrix}0&0\\ q&0\end{bmatrix}. \tag{14}\]

Using vec2A we obtain the order parameter (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 4-6)

\[\theta_{G}=pq\langle\bar{k}\rangle-1, \tag{15}\]

\[\Lambda_{G}=\sqrt{\theta_{G}+1}-1, \tag{16}\]

as reported in 1998 by Molloy & Reed [9] for the case \(p=q=1\) and in 2000 by Callaway _et al._ [10] for any \(p\) and \(q\).
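As a quick usage example of the class above, the snippet below builds the four matrices of Eqs. (7)-(10) with sympy symbols standing in for the averages and asks for \(\theta\); according to Eq. (11), the result should reduce to \(pq\langle\bar{k}\rangle\langle\bar{c}\rangle-1\).

```python
# Illustrative driver for the vec2A class above: the hypergraph case of
# Eqs. (7)-(10). Symbol names are placeholders for the averages in the text.
from sympy import Matrix, symbols, simplify

p, q, kbar, k, cbar, c = symbols('p q kbar k cbar c', positive=True)

kappa  = Matrix([[0, p*k],    [0, 0]])      # Eq. (7)
kappaE = Matrix([[0, p*kbar], [0, 0]])      # Eq. (8), excess degree
s      = Matrix([[0, 0], [q*c,    0]])      # Eq. (9)
sE     = Matrix([[0, 0], [q*cbar, 0]])      # Eq. (10), excess cardinality

theta = simplify(vec2A(kappa, kappaE, s, sE).theta())
print(theta)   # the paper reports theta_H = p*q*kbar*cbar - 1, Eq. (11)
```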
## V Multiplex A multiplex hypergraph with \(L\) hyperedge types is mapped to a chygraph with a layer of atoms representing nodes (layer 0) and \(L\) layers representing hyperedge types (Fig. 1c). The hypergraph nodes and hyperedges are said to be present with probability \(p\) and \(q_{l}\), \(l=1\ldots,L\), respectively. I solved the problem of bond percolation (\(p=1\)) in multiplex networks using methods from multi-type branching processes [11]. Later on Allard _et al_ solved the problem using the more standard generating function formalism for multiplex networks [12]. Their generating function calculations contain the precursors to my vectorization technique in Ref. [4]. Here we recapitulate these results and extend them to the case of multiplex hypergraphs. For random multiplex hypergraphs \(\langle\kappa\rangle\), \(\langle\bar{\kappa}\rangle\), \(\langle s\rangle\) and \(\langle\bar{s}\rangle\) are \((L+1)\times(L+1)\) matrices and the only non-zero matrix elements are \[\langle\kappa\rangle_{0l}=p\langle k\rangle_{l}, \tag{17}\] \[\langle\bar{\kappa}\rangle_{0l}=p\langle\bar{k}\rangle_{l}, \tag{18}\] \[\langle s\rangle_{l0}=q_{l}\langle c\rangle_{l}, \tag{19}\] \[\langle\bar{s}\rangle_{l0}=q_{l}\langle\bar{c}\rangle_{l}, \tag{20}\] for \(l=1,\ldots,L\). The cases \(L=2\) and \(L=3\) are reported in the Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 7-10. For \(L=2\) both \(\theta\) and \(\Lambda\) contain several terms. For \(L=3\) the expression of \(\theta\) is a very long polynomial and the equation for \(\Lambda\) is not worth displaying. It is best to handle these equations with a symbolic calculator and evaluate them numerically for specific parameter sets. ### Poisson multiplex One may question why \(\theta\) gets so complicated. The answer is excess degrees. To illustrate the point, consider a multiplex hypergraph with type dependent Poisson distributions of nodes degrees and hyperedges cardinalities. If \(x\) is a random variable with a Poisson distribution then \(\langle x(x-1)\rangle/\langle x\rangle=\langle x\rangle\). Applying this equality for both degrees and cardinalities in Eqs. (18) and (20) yields \[\langle\bar{k}\rangle_{0l}=p\langle k\rangle_{l},\ \ \ \ \langle\bar{s} \rangle_{l0}=q_{l}\langle c\rangle_{l}, \tag{21}\] for \(l=1,\ldots,L\). Using vec2A we then obtain (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 11-13) \[\theta_{MP}=p\sum_{l=1}^{L}q_{l}\langle k\rangle_{l}\langle c\rangle_{l}-1, \tag{22}\] \[\Lambda_{MP}=\sqrt{\theta_{MP}+1}-1. \tag{23}\] For Poisson multiplex hypergraphs the contribution of hyperedge types becomes additive. This result was previously reported by Sun and Bianconi [7] (Eq. 33). Comparing the simplicity of Eq. (22) to that for arbitrary distributions of degrees and cardinalities (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 7-10) we arrive to the conclusion that excess averages are the cause of the increased polynomial terms in the expression for \(\theta\). ### Network motifs Real networks contain motifs, subgraphs that are at higher abundance than what expected by chance [13; 14]. If we interpret the node degree as the number of links where the node is included, then we define the type-\(l\) degree as the numbers of type-\(l\) motifs where the node is included. This interpretation was introduced by Mann _et al_ to tackle the generating function formalism given the type degree distributions [15]. 
The combinatorial construction by Mann _et al_ can be mapped to a multiplex chygraph, with a layer representing nodes and \(L\) additional layers representing each network motif under consideration (links, triangles,...). The case of network motifs introduces additional complexity with respect to the canonical multiplex hypergraphs discussed above. This complexity is best illustrated by the problem of bond percolation. When links are present with some probability \(q\) (bond percolation), then we need to characterize the connectivity within the motifs. This is an example where complexes have fine-grained structure. I'll expand on this point by solving the problem of bond percolation in a graph with overrepresented triangles. When we reach a triangle via one of its nodes, we can reach different intra-complex component excess sizes (Fig. 3). Thus, labeling the nodes, links and triangles layers by 0, 1 and 2 respectively we obtain the average and excess average matrices \[\langle\kappa\rangle=\begin{bmatrix}0&\langle k\rangle_{|}&\langle k\rangle_ {\triangle}\\ 0&0&0\\ 0&0&0\end{bmatrix} \tag{24}\] \[\langle\bar{\kappa}\rangle=\begin{bmatrix}0&\langle\bar{k}\rangle_{|}& \langle\bar{k}\rangle_{\triangle}\\ 0&0&0\\ 0&0&0\end{bmatrix} \tag{25}\] \[\langle s\rangle=\begin{bmatrix}0&0&0\\ s_{(}q)&0&0\\ s_{\triangle}(q)&0&0\end{bmatrix} \tag{26}\] \[\langle\bar{s}\rangle=\begin{bmatrix}0&0&0\\ \bar{s}_{(}q)&0&0\\ \bar{s}_{\triangle}(q)&0&0\end{bmatrix}, \tag{27}\] where \[s_{|}(q)=1+q, \tag{28}\] \[\bar{s}_{|}(q)=q, \tag{29}\] \[s_{\triangle}(q)=3(q^{3}+3q^{2}(1-q))+\frac{3}{2}3q(1-q)^{2}+(1-q)^{3}, \tag{30}\] \[\bar{s}_{\triangle}(q)=2q(1+q-q^{2}). \tag{31}\] Substituting the above matrices into vec\({}^{2}A\) we obtain the pseudo-order parameter (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 14-16) \[\theta_{NM} = \sum_{l=|,\triangle}\bar{s}_{l}(q)\langle\bar{k}\rangle_{l}-1 \tag{32}\] \[+ \bar{s}_{(}q)\bar{s}_{\triangle}(q)(\langle k\rangle_{|}\langle k \rangle_{\triangle}-\langle\bar{k}\rangle_{|}\langle\bar{k}\rangle_{ \triangle}),\] Figure 3: Possible intra-complex components for a triangle with links present with probability \(q\). The filled circle represents the node from where we arrived to the triangle. a) Excess component size \(\bar{s}=2\) with probability \(q^{3}+3q^{2}(1-q)\). b) Excess component size \(\bar{s}=1\) with probability \(2q(1-q)^{2}\). c) Excess component size \(\bar{s}=0\) with probability \((1-q)^{3}+p(1-p)^{2}\). while the expression for \(\Lambda\) is too long to be displayed here. Let's go over some particular cases to verify this result. When there are no triangles (\(\langle\bar{k}\rangle_{\triangle}=\langle k\rangle_{\triangle}=0\)) we recover the order parameter of bond percolation on graphs with arbitrary degree distribution (Eq. (16), \(p=1\)). When the distributions of nodes participation in links and triangles are Poisson \(\langle\bar{k}\rangle_{\triangle}=\langle k\rangle_{\langle}k\rangle_{\triangle}\) and Eq. (32) is reduced to (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 17-19) \[\theta_{\rm NMP}=\sum_{l=|,\triangle}\bar{s}_{l}(q)\langle\bar{k}\rangle_{l}-1. \tag{33}\] \[\Lambda_{NMP}=\sqrt{\theta_{NMP}+1}-1. \tag{34}\] For Poisson distributions the contribution of links and triangles is additive, as it is for multiplex hypergraphs with Poisson distributions Eq. (22). 
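The same machinery can be driven numerically for the links-plus-triangles case. The snippet below is an illustrative sanity check with arbitrary parameter values (it assumes the vec2A class of Sec. III is in scope); the closed form of Eq. (32) is the reference against which the printed value can be compared.

```python
# Numerical sanity check (illustrative) of the links-plus-triangles case,
# Eqs. (24)-(31), using the vec2A class from Sec. III. Parameter values are arbitrary.
from sympy import Matrix, Rational

q = Rational(1, 2)
k_l, K_l = Rational(3), Rational(3)      # mean / excess number of links per node
k_t, K_t = Rational(1), Rational(1)      # mean / excess number of triangles per node

s_l, S_l = 1 + q, q                                                             # Eqs. (28)-(29)
s_t = 3*(q**3 + 3*q**2*(1 - q)) + Rational(3, 2)*3*q*(1 - q)**2 + (1 - q)**3    # Eq. (30)
S_t = 2*q*(1 + q - q**2)                                                        # Eq. (31)

kappa  = Matrix([[0, k_l, k_t], [0, 0, 0], [0, 0, 0]])                          # Eq. (24)
kappaE = Matrix([[0, K_l, K_t], [0, 0, 0], [0, 0, 0]])                          # Eq. (25)
s      = Matrix([[0, 0, 0], [s_l, 0, 0], [s_t, 0, 0]])                          # Eq. (26)
sE     = Matrix([[0, 0, 0], [S_l, 0, 0], [S_t, 0, 0]])                          # Eq. (27)

theta = vec2A(kappa, kappaE, s, sE).theta()
print(theta)   # to be compared against the closed form theta_NM of Eq. (32)
```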
An interesting question is when is the network more resilient, when links are distributed independently of or within triangles, provided the total number of links is the same. The number of links independent of and within triangles are \(E_{|}=\langle k\rangle_{|}/2\) and \(E_{|}=3E_{\triangle}=\langle k\rangle_{\triangle}\), respectively. To preserve the total number of links, the Poisson network with parameters (\(\langle k\rangle_{|},\langle k\rangle_{\triangle}\)) should be compared to the network without triangles (\(\langle k\rangle_{|}+2\langle k\rangle_{\triangle},0\)). Using Eq. (33) we obtain \[\theta_{\rm P}(\langle k\rangle_{|},\langle k\rangle_{\triangle})-\theta_{\rm P }(\langle k\rangle_{|}+2\langle k\rangle_{\triangle},0)=2q^{2}(1-q)\langle k \rangle_{\triangle}. \tag{35}\] This difference is greater than \(0\) for \(q>0\), as it is illustrated in Fig. 4 for a specific set of parameters. With a fixed number of links, random networks where links are distributed as triangles are more robust than those where links are not distributed as triangles. Since spreading of an infectious agent in a network is equivalent to bond percolation [16], the latter means that having links linked to triangles increases the propensity for spreading. This coincides with the earlier observation by Newman [17]. ## VI Interacting graphs and hypergraphs Interacting networks are layers of different type networks connected by inter-network links. Interacting hypergraphs are layers of different type hypergraphs connected by inter-hypergraphs hyperedges. Figure 2 shows the mapping of two interacting hypergraphs to a chograph with 5 layers. Two layers of nodes and hyperedges for each hypergraphs, plus one layer of hyperedges containing nodes from both hypergraphs. For two interacting hypergraphs we can assign labels from \(0\) to \(4\) to the different layers in Fig. 2. For \(g\gg 1\) interacting hypergraphs we better use some clever indexing. Here I restrict my attention two hyperedges containing nodes of at most two distinct hypergraphs. I choose to label the hypergraphs \(H_{l},l=0,\ldots,g-1\) and assign their nodes to layers \(l=0,\ldots,g-1\), respectively. The hyperedges containing nodes from \(H_{l}\) and \(H_{m}\), \(l\leq m\), are assigned to the layer indexed by \[i_{lm} = g+\sum_{m=0}^{l-1}(g-m)+m-l \tag{36}\] \[= g+\frac{2g-(l-1)}{2}l+m-l\] For the case of two interacting hypergraphs, the hyperedges of \(H_{0}\) are assigned to layer \(i_{00}=2\), those of \(H_{1}\) to \(i_{11}=4\) and the interacting hyperedges to \(i_{01}=3\). With this notation we obtain the mean and excess mean Figure 4: The \(q\) dependency of a) the pseudo-order parameter \(\theta\) and b) the order parameter \(\Lambda\) for the case of two Poisson networks with the same number of links. One with triangles (solid line) and one with no triangles (dashed-dotted line). 
matrices of \[\langle\kappa\rangle=\begin{bmatrix}0&0&\langle k\rangle_{02}&\langle k\rangle_{03 }&0\\ 0&0&0&\langle k\rangle_{13}&\langle k\rangle_{14}\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix} \tag{37}\] \[\langle\bar{\kappa}\rangle=\begin{bmatrix}0&0&\langle\bar{k}\rangle_{02}& \langle\bar{k}\rangle_{03}&0\\ 0&0&0&\langle\bar{k}\rangle_{13}&\langle\bar{k}\rangle_{14}\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix} \tag{38}\] \[\langle s\rangle=\begin{bmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ \langle c\rangle_{20}&0&0&0&0\\ \langle c\rangle_{30}&\langle c\rangle_{31}&0&0&0\\ 0&\langle c\rangle_{41}&0&0&0\end{bmatrix} \tag{39}\] \[\langle\bar{s}\rangle=\begin{bmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ \langle\bar{c}\rangle_{20}&0&0&0&0\\ \langle\bar{c}\rangle_{30}&\langle\bar{c}\rangle_{31}&0&0&0\\ 0&\langle\bar{c}\rangle_{41}&0&0&0\end{bmatrix} \tag{40}\] We can go ahead and plug in these matrices in vec2A to obtain \(\theta\) and \(\Lambda\)(See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 20-22). For the general case we obtain a polynomial with several terms. ### Interacting Poisson hypergraphs and graphs For hypergraphs with Poisson distributions of degrees and cardinalities \(\langle\bar{\kappa}\rangle=\langle\kappa\rangle\), \(\langle\bar{s}\rangle=\langle s\rangle\). Theequations for \(\theta\) and \(\Lambda\) are simplified (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 23-25), but they are still too long to display here. For two interacting graphs with Poisson degree distributions the average and excess average matrices are \[\langle\kappa\rangle=\begin{bmatrix}0&0&\langle k\rangle_{02}&\langle k \rangle_{03}&0\\ 0&0&0&\langle k\rangle_{13}&\langle k\rangle_{14}\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix}, \tag{41}\] \[\langle\bar{\kappa}\rangle=\langle\kappa\rangle, \tag{42}\] \[\langle s\rangle=\begin{bmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ 2&0&0&0&0\\ 1&1&0&0&0\\ 0&2&0&0&0\end{bmatrix}, \tag{43}\] \[\langle\bar{s}\rangle=\begin{bmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ 1&0&0&0&0\\ 0&0&0&0&0\\ 0&1&0&0&0\end{bmatrix}. \tag{44}\] Notice that \(\langle s\rangle_{40}=\langle s\rangle_{40}=1\). The complexes at layer 4, representing the inter-graph links, include only one node from each of the two interacting graphs. Furthermore, \(\langle\bar{s}\rangle_{40}=\langle\bar{s}\rangle_{40}=0\). That means, if you come from one graph using the interacting links you cannot go back to the same graph. Plugging in Eqs. (41)-(44) we obtain (See Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 23-25) \[\theta_{IGP}=\langle k\rangle_{04}\langle k\rangle_{24}-(\langle k\rangle_{01 }-1)(\langle k\rangle_{23}-1), \tag{45}\] \[\Lambda_{IGP} = \frac{1}{\sqrt{2}}\bigg{(}\langle k\rangle_{02}+\langle k \rangle_{14} \tag{46}\] \[+ \sqrt{(\langle k\rangle_{02}-\langle k\rangle_{14})^{2}+4\langle k \rangle_{03}\langle k\rangle_{13}}\bigg{)}^{1/2}\] \[- 1.\] The result for \(\theta_{IGP}\) was previously obtained by Leicht and Souza [18]. You can crosscheck that \(\Lambda_{IGP}=0\) is equivalent to \(\theta_{IGP}=0\). The extension to interacting graphs with arbitrary degree distributions and any number of layers is straightforward with the symbolic calculator, albeit the results are very long equations. The case of two interacting graphs with arbitrary degree distributions is reported in the Supplemental Material at [URL will be inserted by publisher], Supp. Eqs. 23-25. 
This result recapitulates the calculations of Bianconi for percolation in multilayer interacting graphs [19]. ## VII Conclusions Chygraphs are a general combinatorial construction that includes as special cases many of the higher order structures defined so far. When that mapping is possible, we can use the generating function calculation for the order parameter characterizing the emergence of a giant component in chygraphs. This is done by the symbolic calculation reported in this work. When the degree of participation in complexes follow Poisson distributions there is some simplification in the analytical expressions. In many instances \(\Lambda=\sqrt{\theta}-1\), where \(\Lambda\) and \(\theta\) are the largest eigenvalue and the determinant of the generalized average excess component tensor \(-A\). However, there are exceptions, as shown for the case of two interacting graphs with Poisson degree distributions. The example of bond percolation in graphs with over-represented triangles illustrates the relevance of having complexes with a fine-structure. It helps to decompose the problem into the large scale connectivity and the local connectivity within the complex. The recapitulation of previous results demonstrates the range of applicability of chygraphs and the convenience of the symbolic calculations. When we tackle a problem in chygraphs we're tackling the problem for a wide range of higher order networks. This is the way to move forward.
2307.09698
A Unified Diode Equation for Organic Photovoltaic Devices
Organic photovoltaics (OPVs) are promising candidates for future sustainable technologies, including applications within the renewable energy sector, such as solar cells and indoor light recycling, and photodetection. However, the performance of OPVs is still inferior compared to established technologies, partially due to the intrinsically low charge carrier mobilities and large recombination losses in the low-permittivity, molecular organic semiconductors. To better understand these losses, accurate analytical diode models capable of capturing the underlying device physics are imperative. However, previously proposed analytical models have neglected the effects of injected charge carriers, which is the predominant source for bimolecular recombination in thin-film systems with ohmic contacts. In this work, we derive a unified diode equation for current in OPVs, which accounts for the interplay between charge carrier extraction, injection, and bimolecular recombination. To this end, we use a regional approximation approach to solve the coupled charge transport equations in sandwich-type thin film devices. The diode model is further validated by numerical simulations and experimental data. The derived theoretical framework not only provides vital insights into the underlying device physics of OPVs but is generally applicable to sandwich-type thin-film photovoltaic device based on semiconductors with low charge carrier mobilities.
Oskar J Sandberg, Ardalan Armin
2023-07-19T01:03:27Z
http://arxiv.org/abs/2307.09698v1
## A Unified Diode Equation for Organic Photovoltaic Devices ## Abstract Organic photovoltaics (OPVs) are promising candidates for future sustainable technologies, including applications within the renewable energy sector, such as solar cells and indoor light recycling, and photodetection. However, the performance of OPVs is still inferior compared to established solar cell technologies, partially due to the intrinsically low charge carrier mobilities and large recombination losses of low-permittivity, molecular organic semiconductor systems. To better understand these losses, analytical diode models capable of capturing the underlying device physics are imperative. However, previously proposed analytical models have neglected the effects of injected charge carriers, which is the predominant source for bimolecular recombination in thin-film systems with ohmic contacts. In this work, we derive a unified diode equation for OPVs. Based on a regional approximation approach, we derive an analytical model for the current accounting for the interplay between charge carrier extraction, injection, and bimolecular recombination in organic solar cells. The analytical model is validated by numerical simulations and experimental data. Our findings provide key insights into the mechanisms driving and limiting charge collection, and ultimately the power conversion efficiency, in OPV devices. The presented framework is material-agnostic and generally applicable to sandwich-type thin-film photovoltaic devices, including photodiodes and indoor light-harvesting cells. ## 1 Introduction Organic semiconductor-based photovoltaics (OPVs) have shown great promise as an emerging and sustainable solar cell technology [1, 2]. In addition, OPVs are suitable for low-light-intensity applications such as indoor light harvesting and photodetection [3, 4]. However, OPVs suffer from low charge carrier mobilities and relatively large non-radiative recombination losses. This generally limits the magnitude of the collected photo-induced current and, ultimately, the power conversion efficiency (PCE) of OPV-based solar cells [2, 5]. To enhance the device performance of OPVs, all loss mechanisms limiting the current need to be understood and minimized, necessitating a comprehensive understanding of the underlying device physics. In this regard, quantitative analytical diode models which relate the current to key material parameters are invaluable. However, owing to the convoluted and non-linear nature of the physical processes underpinning charge collection in OPVs, an analytical diode model that accurately captures the relevant device physics in these systems has remained elusive. The typical OPV structure constitutes a thin active layer (often on the order of 100 nm), sandwiched between hole-collecting anode and electron-collecting cathode contact layers. In state-of-the-art OPV cells, the active layer is composed of a blend of electron-accepting (acceptor) and electron-donating (donor) organic semiconductors [2, 6]. Under illumination, free electrons and holes are generated in the acceptor and donor phase, respectively, being the result of photo-induced excitons dissociating at donor-acceptor interfaces. The electrons (holes) are then driven toward the cathode (anode) contact by the built-in asymmetry induced by the work function difference of the electrodes. 
On the way to the respective charge collecting contact, however, a free electron (hole) has a finite probability to encounter a free hole (electron) at a donor-acceptor interface and recombine. This recombination process is generally of a bimolecular nature [7, 8], involving the reformation and the subsequent recombination of charge-transfer states or excitons. This latter process is believed to be the leading source of non-radiative recombination in organic solar cells [9, 10, 11]. Because of the low carrier mobilities, the current-voltage (_J-V_) relationship in OPVs critically depends on the competition between the processes of charge extraction and bimolecular recombination of photogenerated electrons and holes [12, 13, 14]. This competition is further complicated by the inevitable presence of injected charge carriers diffused into the active layer from the contacts [15, 16]. Injected carriers are characterized by highly non-uniform, voltage-dependent carrier profiles and have been found to cause both energy level bending, screening the electric field inside the active layer [17, 18], and first-order recombination losses, manifesting as non-ideal Fill Factors (FFs) regardless of the light intensity regime [19, 20]. While these effects are reproduced by numerical simulations, it has been challenging to understand and model them analytically due to the multi-dimensional parameter spaces and the non-linear nature of the underpinning kinetic processes [21]. Nonetheless, analytical models for the _J-V_ behaviour of OPVs have been proposed in the past [22, 23, 24]. However, these approximations typically assume simplified situations which neglect the effects of injected carriers. To our knowledge, an explicit diode equation fully describing the interplay between charge carrier extraction, injection, and bimolecular recombination in low-mobility OPVs has yet to be established. In this work, we derive a diode equation describing the _J-V_ characteristics of thin-film OPV devices. Using a regional approximation approach, expressions for the current are obtained which, for the first time, fully account for the interplay between charge carrier extraction, injection, and bimolecular recombination in sandwich-type thin-film devices. The derived approximations are validated by numerical drift-diffusion simulations and applied to experimental results. Further, the obtained findings provide valuable insight into the mechanisms driving and limiting charge collection in OPVs. Additionally, the presented analytical framework provides a figure-of-merit which parametrises _J-V_ curves more accurately than previous models, useful for fitting experimental data to extract key material properties. Such parametrisation may also find applications in labelling input _J-V_ data for training machine-learning models used for device optimisation [25, 26]. The developed theoretical framework is material-agnostic and applicable to thin-film photovoltaic devices based on low-mobility semiconductors in general. ## II. Theoretical background For a given applied voltage \(V\) across the active layer, the total current density \(J\) of a thin-film diode device is determined by the flow of electrons and holes within the device. Under steady-state conditions, \(J=J_{n}(x)+J_{p}(x)\), where \(J_{n}(x)\) and \(J_{p}(x)\) are the electron and hole current densities, respectively, at any position \(x\) inside the active layer. 
The electron and hole current densities are related to the photogeneration rate \(G\) and recombination rate \(R\) of charge carriers within the active layer via the carrier continuity equations: \[-\frac{1}{q}\frac{\partial J_{n}}{\partial x}=\frac{1}{q}\frac{\partial J_{p }}{\partial x}=G(x)-R(x), \tag{1}\] where \(q\) is the elementary charge and \(0<x<d\), with \(d\) being the active layer thickness, assuming an active layer sandwiched between an anode (\(x=0\)) and a cathode (\(x=d\)) contact. Further, assuming an effective medium description, \(J_{n}(x)\) and \(J_{p}(x)\) depend on their respective electron and hole densities \(n(x)\) and \(p(x)\) through the drift-diffusion relations [27, 28]: \[J_{n}(x)=\mu_{n}n(x)\,\frac{\partial E_{Fn}}{\partial x}=\mu_{n}\,\Big{[}qn(x)F(x)+ kT\,\frac{\partial n}{\partial x}\Big{]}, \tag{2}\] \[J_{p}(x)=\mu_{p}p(x)\,\frac{\partial E_{Ep}}{\partial x}=\mu_{p}\,\Big{[}qp(x)F (x)-kT\,\frac{\partial p}{\partial x}\Big{]}. \tag{3}\] Here, \(\mu_{n}\,(\mu_{p})\) is the electron (hole) mobility, \(E_{Fn}\,(E_{Ep})\) the quasi-Fermi level of electrons (holes), \(F\) the electric field, \(k\) the Boltzmann constant, and \(T\) the absolute temperature. The classical Einstein relation for the electron (hole) diffusion constant, \(D_{n(p)}=\mu_{n(p)}kT/q\), has been assumed in the second equality of Eq. (2) (Eq. (3)). The electric field inside the active layer depends on the densities of free charge carriers through the Poisson equation. In case of an undoped and trap-free active layer, the Poisson equation is given by \[\frac{\partial F}{\partial x}=\frac{q}{\varepsilon\varepsilon_{0}}\,[p(x)-n( x)], \tag{4}\] with \(\varepsilon\) being the relative permittivity of the active layer, and \(\varepsilon_{0}\) the vacuum permittivity. The electric field is related to the electrical potential \(\phi(x)\) inside the active layer via \(F(x)=-\,\partial\phi/\partial x\); in this work we define \(\phi(x)=-\int_{0}^{x}F(x^{\prime})\,dx^{\prime}\) with \(\phi(0)=0\) as the reference potential level. Note that \(\phi(x)\) is proportional to the effective conduction level \(E_{c}(x)\) in the acceptor and valence level \(E_{p}(x)\) in the donor as \(q[\phi(x)-\phi(0)]=E_{v}(0)-E_{v}(x)\). Furthermore, the bulk recombination of charge carriers is assumed to be bimolecular, with the net recombination rate being of the form \[R(x)=\beta n(x)p(x)-\beta n_{l}^{2}, \tag{5}\] where \(\beta\) is the bimolecular recombination rate coefficient. Here, the term \(\beta n_{l}^{2}\) defines the thermal (dark) generation rate of free charge carriers within the active layer, where \(n_{l}^{2}=N_{c}N_{v}\,\exp\big{(}-E_{g}/kT\big{)}\) with \(N_{c}\,(N_{v})\) being the effective density of conduction (valence) level and \(E_{g}=E_{c}-E_{v}\) the effective transport level gap. Since the generation and recombination of free charge carriers in OPVs take place via CT states and excitons [2, 29, 30, 31, 32], the corresponding charge carrier rate \(G\) and recombination coefficient \(\beta\) in Eq. (1) and Eq. (5), respectively, depend on exciton and CT state kinetics [33]. In this picture, \(\beta=(1-P_{\rm CT})\beta_{0}\) where \(\beta_{0}\) is the rate constant for free electrons and holes to encounter and form bound CT states at D-A interfaces in the bulk [34], and \(1-P_{\rm CT}\) represents the subsequent probability for CT states to decay to the ground state either directly or indirectly (e.g. after back-transfer to excitons) [33]. 
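As a minimal numerical illustration of the recombination model in Eq. (5), the sketch below evaluates the thermal carrier density and the net bimolecular recombination rate. All parameter values are assumptions chosen only to indicate orders of magnitude; they are of the same kind as the simulation settings quoted later in the text:

```python
import numpy as np

q = 1.602176634e-19   # C
kT = 0.02585          # eV, thermal energy at ~300 K

# Assumed, representative parameter values (illustration only)
Nc = Nv = 1e21        # cm^-3, effective densities of states
Eg = 1.42             # eV, effective transport gap
beta = 1e-10          # cm^3/s, bimolecular recombination coefficient

ni2 = Nc * Nv * np.exp(-Eg / kT)           # n_i^2 as defined in the text
print(f"n_i  = {np.sqrt(ni2):.2e} cm^-3")  # of order 1e9 cm^-3

# Net recombination rate R = beta*(n*p - n_i^2), Eq. (5), for assumed
# steady-state carrier densities under illumination
n = p = 1e16          # cm^-3 (assumed)
R = beta * (n * p - ni2)
print(f"R    = {R:.2e} cm^-3 s^-1")
```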
\(\beta\) is also commonly compared to the Langevin recombination constant \(\beta_{\rm L}=q[\mu_{n}+\mu_{p}]/(\varepsilon\varepsilon_{0})\)[35], considered to be the upper limit for \(\beta\) in OPV systems. Under these conditions, the total current density can be expressed as \[J=-J_{\rm gen}+q\beta\int_{0}^{d}\bigl{[}n(x)p(x)-n_{i}^{2}\bigr{]}\,dx+J_{n}(0)+J_ {p}(d), \tag{6}\] where \(J_{\rm gen}\equiv q\int_{0}^{d}G(x)\,dx\) is the saturated photogeneration current density. In this work, we consider contacts that are ideally selective for the extraction of majority carriers (electrons at cathode, holes at anode). In other words, majority carriers are assumed to remain at thermal equilibrium at the contacts, such that \(p(0)=p_{an}\) and \(n(d)=n_{cat}\), while \(J_{n}(0)=J_{p}(d)=0\) (no surface recombination of minority carriers). Here, \(p_{an}\) (\(n_{cat}\)) is the corresponding thermal equilibrium density for holes (electrons) at the anode (cathode). Finally, \(V\) is related to \(F(x)\) via \[V-V_{bi,0}=\int_{0}^{d}F(x)\,dx, \tag{7}\] where \(qV_{bi,0}=kT\ln\bigl{(}p_{an}n_{cat}/n_{i}^{2}\bigr{)}\) corresponds to the work function difference between the anode and cathode contact. To evaluate \(J\) for a given \(V\), the set of coupled differential equations Eqs. (1)-(5) needs to be solved for \(\phi(x)\), \(n(x)\) and \(p(x)\). In the next section, we will present an analytical framework for \(J\) based on approximative solutions of \(\phi(x)\), \(n(x)\) and \(p(x)\). To validate the obtained approximate expressions, we will compare them with the results of numerical simulations representing the "exact" solutions. For this purpose, we use a previously established numerical device model (see Ref. [36]), based on the Scharfetter-Gummel discretization scheme and Gummel's iteration method [37, 38, 39]. In the simulations, we consider an active layer having \(d=100\) nm, \(N_{v}=N_{c}=10^{21}\) cm\({}^{-3}\), and an energy level gap of \(E_{g}=1.42\) eV. Further, we assume a uniform photogeneration rate (\(G(x)=G\)) that scales linearly with light intensity and takes a value of \(6.24\times 10^{21}\) cm\({}^{-3}\)s\({}^{-1}\) under 1 sun conditions. Unless otherwise stated, balanced mobilities (\(\mu=\mu_{n}=\mu_{p}\)) will be assumed in the simulations. Finally, we will assume that the contacts are ohmic, corresponding to injecting contacts at which the carrier density is high enough (negligible resistance) not to limit the majority-carrier current. To ensure this condition, we assume \(p_{an}=N_{v}\) and \(n_{cat}=N_{c}\). ## III Results and Discussion In Fig. 1 the effect of the interplay between charge collection and bimolecular recombination on the \(J\)-\(V\) characteristics is demonstrated. A photovoltaic device with \(\mu=10^{-4}\) cm\({}^{2}\)/Vs is simulated and compared to the idealized case with perfect charge collection (\(\mu\rightarrow\infty\)), assuming a fixed \(\beta=10^{-10}\) cm\({}^{3}\)/s. Fig. 1(a) and (b) shows the normalized currents \(J/J_{\rm gen}\) under 1 sun (solar cell) and 0.001 suns (relevant for indoor PV) illumination conditions, respectively. Independent of the light intensity, decreasing \(\mu\) generally reduces both the short-circuit current density (\(J_{\rm SC}\)) and the FF. This suggests that, even at low light intensities, there can be a substantial charge collection loss caused by bimolecular recombination in case of low mobilities. 
Conversely, in the limit of high mobilities, both \(J_{\mathrm{SC}}\) and FF saturate to their maximum values (set by the prevailing intensity) as \(J_{\mathrm{SC}}\to J_{\mathrm{gen}}\). Note that the open-circuit voltage (\(V_{\mathrm{OC}}\)) is independent of \(\mu\) in this case (since \(\beta\) is fixed) and only depends on the light intensity. For an analytical model of the \(J\)-\(V\) characteristics to be accurate, it is imperative that it reproduces the mobility and intensity dependent behaviours in Fig. 1. In the following, approximations relating the current and bimolecular recombination in thin-film OPV devices with ohmic contacts will be derived. ### Analytical approach To obtain approximate solutions for the charge carrier transport equations, we use the following ansatz for the electron and hole density: \[n(x)=n_{0}(x)+\Delta n(x), \tag{8}\] \[p(x)=p_{0}(x)+\Delta p(x). \tag{9}\] Figure 1: Simulated \(J\)-\(V\) characteristics of an OPV device under (a) 1 sun and (b) 0.001 sun s illumination conditions demonstrating the effect of finite mobilities on the current density. The case of balanced mobilities with \(\mu=10^{-4}\) cm\({}^{2}\)/Vs (red line) is shown and compared to the idealized case of \(\mu\rightarrow\infty\) (blue line). A fixed bimolecular recombination coefficient \(\beta=10^{-10}\) cm\({}^{3}\)/s is assumed for both cases. The predictions based on the ideal diode equation Eq. (17) are indicated by black dashed lines. Here, \(n_{0}(x)\) and \(p_{0}(x)\) are defined as the respective ideal carrier densities expected in case of flat quasi-Fermi levels (\(\partial E_{Fn}/\partial x=\partial E_{Fp}/\partial x=0\)) separated by \(qV\). In accordance with Eq. (2) and (3), we have by definition that \[qn_{0}(x)F(x) =-kT\frac{\partial n_{0}(x)}{\partial x}\,, \tag{10}\] \[qp_{0}(x)F(x) =kT\frac{\partial p_{0}(x)}{\partial x}\,, \tag{11}\] for every \(x\) inside the active layer. Noting that \(J=J_{n}(x)=J_{p}(x)=0\) at open circuit, it directly follows that \(n(x)=n_{0}(x)\) and \(p(x)=p_{0}(x)\) at \(V=V_{\text{OC}}\). In other words, \(n_{0}(x)\) and \(p_{0}(x)\) at any given applied voltage, \(V\) in the dark, are identical to the respective \(n(x)\) and \(p(x)\) obtained at open circuit under a light intensity that results in a \(V_{\text{OC}}\) equal to \(V\). Conversely, \(\Delta n(x)\) and \(\Delta p(x)\) represent the corresponding deviations of \(n(x)\) and \(p(x)\) from their ideal carrier densities. \(\Delta n(x)\) and \(\Delta p(x)\) are determined by the charge carrier transport properties inside the device. Making use of Eq. (10) and (11), in conjunction with Eq. (2) and (3), in Eq. (1) the electron and hole continuity equations can be expressed as \[\mu_{n}F\frac{\partial\Delta n}{\partial x}+\mu_{n}\Delta n\frac{\partial F}{ \partial x}+\mu_{n}\frac{kT}{q}\frac{\partial^{2}\Delta n}{\partial x^{2}}=-G +\beta\big{[}p_{0}n_{0}-n_{l}^{2}+p_{0}\Delta n+n_{0}\Delta p+\Delta n\Delta p \big{]}, \tag{12}\] \[\mu_{p}F\frac{\partial\Delta p}{\partial x}+\mu_{p}\Delta p\frac{\partial F}{ \partial x}-\mu_{p}\frac{kT}{q}\frac{\partial^{2}\Delta p}{\partial x^{2}}=G -\beta\big{[}p_{0}n_{0}-n_{l}^{2}+p_{0}\Delta n+n_{0}\Delta p+\Delta n\Delta p \big{]}. \tag{13}\] While general analytically tractable solutions to Eq. (12) and (13) cannot be found, approximate expressions can be obtained depending on the dominant recombination terms. Note that \(\Delta n\) and \(\Delta p\) should primarily be considered mathematical tools which may take positive or negative values. 
In regions where \(\Delta n\gg n_{0}\) (\(\Delta p\gg p_{0}\)) holds, however, \(\Delta n\) (\(\Delta p\)) may be identified as the excess density of photogenerated electrons (holes). #### iii.2.1 Ideal charge collection We first consider the idealized zeroth order case, when \(n(x)\to n_{0}(x)\) and \(p(x)\to p_{0}(x)\). Excluding open-circuit conditions, this corresponds to the limit of infinite mobilities (or ideal conductivities). This follows from the fact that for \(J_{n}(x)\) and \(J_{p}(x)\) to remain finite as \(\mu_{n}\) and \(\mu_{p}\) approach infinity, we must generally have \(\frac{\partial E_{Fn}}{\partial x}=\frac{\partial E_{Fp}}{\partial x}\to 0\). Fig. 2(a) and (b) show the carrier densities inside the active layer (under 1 sun illumination conditions) for different mobilities at \(V=0\) and \(V=0.5\) V, respectively. The corresponding energy level diagrams are depicted in Fig. 2(c) and (d). As expected, \(E_{Fn}\) and \(E_{Fp}\) in the limit of high mobilities are flat throughout the active layer and separated by \(qV\); in other words, the free carriers are in electrochemical equilibrium with their collecting electrodes. As \(n(x)\to n_{0}(x)\) and \(p(x)\to p_{0}(x)\), the recombination rate is dominated by the \(\beta p_{0}(x)n_{0}(x)\) term in Eq. (12) and (13) (i.e., the zeroth order term with respect to photogenerated carriers). This term can be directly evaluated noting that \(n_{0}(x)\) and \(p_{0}(x)\), as per Eq. (10) and (11), can be equivalently expressed as \[n_{0}(x)=n_{cat}\exp\Big{(}\frac{q[V-V_{bi,0}+\phi(x)]}{kT}\Big{)}, \tag{14}\] \[p_{0}(x)=p_{an}\exp\Big{(}-\frac{q\phi(x)}{kT}\Big{)}. \tag{15}\] As a result, in this limit \(n_{0}(x)p_{0}(x)\), and thus the bimolecular recombination rate, is independent of the position inside the active layer. Making use of the definition of \(V_{bi,0}\), the generalised mass action law is further obtained as: \[n_{0}(x)p_{0}(x)=n_{i}^{2}\exp\Big{(}\frac{qV}{kT}\Big{)}. \tag{16}\] Hence, for selective contacts (\(J_{n}(0)=J_{p}(d)=0\)), the total current density [Eq. (6)] in the limit of ideal charge collection (\(\mu_{n,p}\rightarrow\infty\)) is given by \[J_{\mathrm{ideal}}(V)=-J_{\mathrm{gen}}+J_{0}\left[\exp\Big{(}\frac{qV}{kT}\Big{)}-1\right], \tag{17}\] where \(J_{0}=q\beta n_{i}^{2}d\) is the dark saturation current density induced by thermal generation of charge carriers in the active layer. As expected, in the high-mobility limit, \(J(V)\) is of an identical form to the ideal diode equation. Indeed, Eq. (17) (indicated by the dashed lines) precisely reproduces the simulated \(J\)-\(V\) curves when \(\mu\rightarrow\infty\) in Fig. 1. #### ii.2.2 Regional approximation Based on Eq. (14) and (15), expressions for the ideal carrier densities can be derived by making use of a regional approximation. In the ideal limit, the densities of electrons and holes dominate at the cathode and anode contact, respectively, while decreasing exponentially away from the contact (see Fig. 2a,b). Subsequently, the active layer can be divided into a hole-dominated (\(0<x<x^{*}\)) and an electron-dominated region (\(x^{*}<x<d\)), where \(x^{*}\) is the width of the hole-dominated region. Assuming that \(p_{0}\gg n_{0}\) in the hole-dominated region, analytical approximations for \(p_{0}(x)\) can be obtained [40, 41]. For \(x<x^{*}\), Eq. (4) then simplifies as \[\frac{\partial F(x)}{\partial x}=-\frac{\partial^{2}\phi(x)}{\partial x^{2}}\approx\frac{qp_{0}(x)}{\varepsilon\varepsilon_{0}}. \tag{18}\] Substituting Eq. (15) into Eq. 
(18) and solving for \(\phi(x)\) yields \[\phi(x)=\frac{2kT}{q}\ln\left(\frac{2kT}{qF_{0}\lambda_{\mathrm{an}}}\sinh \left[\frac{qF_{0}x}{2kT}+\sinh^{-1}\left(\frac{qF_{0}\lambda_{\mathrm{an}}}{2 kT}\right)\right]\right), \tag{19}\] for \(F_{0}\gg 2kT/qd\), where \(\lambda_{\mathrm{an}}=\sqrt{2\varepsilon\varepsilon_{0}kT/[q^{2}p_{\mathrm{an }}]}\) represents the Debye screening length for holes at the anode contact and \(F_{0}\) is an integration constant of the electric field. Thus, the electric field in this region is obtained as \[F(x)=-F_{0}\coth\left[\frac{qF_{0}x}{2kT}+\sinh^{-1}\left(\frac{qF_{0}\lambda_ {\mathrm{an}}}{2kT}\right)\right]. \tag{20}\] Note that sufficiently far away from the contact, \(F\rightarrow-F_{0}\) in Eq. (20). Concomitantly, \(F_{0}\) represents the magnitude of the electric field deep inside the active layer. A completely analogous treatment applies for \(x>x^{*}\) (assuming \(n_{0}\gg p_{0}\)) [42]. Demanding that \(F(x)\) is continuous at \(x=x^{*}\), it can be shown that \(x^{*}\approx d/2\) for ohmic contacts, corresponding to \(\lambda_{\mathrm{an}},\lambda_{\mathrm{cat}}\ll d\), where \(\lambda_{cat}\) is the corresponding Debye screening length for electrons at the cathode contact. Finally, an expression for \(F_{0}\) can be obtained after matching \(\phi(x)\) at \(x=x^{*}\) and applying the boundary condition Eq. (7). In general, \(F_{0}\) depends on the contact properties (represented by \(\lambda_{\mathrm{an}}\), \(\lambda_{\mathrm{cat}}\) and \(V_{bi,0}\)) and the applied voltage through a transcendental dependence. In case of ohmic contacts (\(\lambda_{\mathrm{an}}\), \(\lambda_{\mathrm{cat}}\ll d\)), however, \(F_{0}\) may be approximated as [42] \[F_{0}\approx\frac{1}{d}\left[V_{bi}-V\times\left(1+\frac{\ln[2]}{\frac{qV_{bi }}{qkT}-1}\right)\right], \tag{21}\] for \(V_{bi}-V>8\ln(2)\,kT/q\), where \(V_{bi}\) is the effective built-in voltage inside the active layer at \(V=0\), determined via \[V_{bi}=V_{bi,0}-\frac{2kT}{q}\ln\left(\frac{q^{2}d^{2}\sqrt{p_{\mathrm{an}}n_ {\mathrm{cat}}}}{2\varepsilon\varepsilon_{0}kT}\right)+\frac{4kT}{q}\ln\left( \frac{qV_{bi}}{kT}\right). \tag{22}\] The final term within the round brackets on the right-hand side of Eq. (21) may be viewed as a correction term accounting for the fact that the effective built-in voltage inside the active layer depends weakly on the applied voltage. In Fig. 2, \(p_{0}(x)\) predicted by Eq. (15) (dashed lines), with \(\phi(x)\) given by Eq. (19), are compared with the simulated \(p(x)\) (solid lines). Indeed, Eq. (15) reproduces the simulated \(p(x)\) in the limit \(\mu_{p}\rightarrow\infty\). Inside the active layer, Eq. (19) predicts \(\phi(x)\) to be linearly dependent on \(x\), translating into exponentially varying carrier densities in this limit. Near the contacts, however, the accumulation of majority carriers induces significant energy level bending resulting in non-exponential behaviour within these regions. It should be noted that, although initially derived for \(x<x^{*}\), the approximation based on Eq. (19) generally applies beyond this region; in fact, except for the electron-accumulation region near the cathode contact, Eq. (19) remains a good approximation for \(\phi(x)\) throughout the entire active layer. Figure 2: Simulated carrier density profiles of the OPV device from Fig. 1 at 1 sun for (a) \(V=0\) and (b) \(V=0.5\) V. The solid and dotted lines indicate the case of \(\mu\rightarrow\infty\) and \(\mu=10^{-4}\) cm\({}^{2}\)/Vs, respectively. 
In (c) and (d), the corresponding energy level diagrams for \(V=0\) and \(V=0.5\) V are shown, respectively. The conduction and valence levels are indicated by blue and red solid lines, respectively, while the electron and hole quasi-Fermi levels are indicated by blue and red dashed lines. For comparison, the analytical approximations based on Eq. (15) and Eq. (19) are depicted by the black dashed lines. In case of finite mobilities, Eq. (15) is also seen to accurately describe \(p(x)\) within the anode contact region; this is a consequence of the high hole density resulting in a virtually flat \(E_{Ep}\) [cf. Eq. (3)], and thus \(p(x)=p_{0}(x)\), within this region. Subsequently, the majority carrier densities within the contact regions are independent of charge transport properties, consistent with the defining characteristic of ohmic contacts. Finally, the \(E_{\nu}(x)\) predicted based on Eq. (19) (dashed line) are compared with the simulated \(E_{\nu}(x)\) (solid lines) in Fig. 2(c) and (d); an excellent agreement is obtained. It should be noted that the simulated \(E_{\nu}(x)\) at the two different mobility cases in Fig. 2(c) and (d) coalesce, suggesting that \(\phi(x)\) is independent of mobilities in this case. ### First-order approximation In case of finite mobilities \(n(x)\approx n_{0}(x)\) and \(p(x)\approx p_{0}(x)\) only within the high-density region near the cathode and anode contact, respectively, while the carrier densities strongly deviate from the ideal limit well inside the active layer, as shown in Fig. 2(a) and (b). The deviations \(\Delta n(x)\) and \(\Delta p(x)\) depend on both the light intensity and the charge transport properties. In the low-intensity limit, however, when the second-order recombination term \(\beta\Delta n\Delta p\) is negligible (the recombination rate is only determined by the \(\mathbf{1}^{\mathrm{st}}\) order terms) and the space charge induced by \(\Delta n\) and \(\Delta p\) is insignificant (i.e., \(F(x)\) is given by Eq. (20)), approximative solutions to Eq. (12) and (13) can be obtained. #### ii.2.1 Drift-only solutions for \(\Delta n(x)\) We consider the drift-only case, assuming the diffusion component of \(\Delta n(x)\) to be negligible. For electrons sufficiently far away from the cathode contact, the term \(n_{0}(x)\Delta p(x)\) can be neglected. Then, noting that \(p_{0}(x)\) is related to \(F(x)\) via Eq. (18), in this region, Eq. (12) reduces to \[-\mu_{n}F(x)\frac{\partial\Delta n}{\partial x}+[z_{n}-1]\mu_{n}\frac{ \partial F}{\partial x}\Delta n(x)=G_{R}, \tag{23}\] where \(G_{R}=G-\beta n_{i}^{2}\left[\exp(qV/kT)-1\right]\) and \(z_{n}=\beta/\beta_{n}\), with \(\beta_{n}\equiv q\mu_{n}/(\varepsilon\varepsilon_{0})\) being an equivalent electron-only Langevin rate constant. The solution for Eq. (23) is of the form: \[\Delta n(x)=\frac{G_{R}}{\mu_{n}F_{0}}\times\frac{\int_{0}^{x}\tanh^{2n}\left[ \frac{qF_{0}x^{\prime}}{2kT}+\sinh^{-1}\left(\frac{qF_{0}\Delta n}{2kT}\right) \right]dx^{\prime}}{\tanh^{2n-1}\left[\frac{qF_{0}x^{\prime}}{2kT}+\sinh^{-1} \left(\frac{qF_{0}\Delta n}{2kT}\right)\right]}, \tag{24}\] with \(\Delta n(0)=0\) and using Eq. (20) for \(F(x)\). An analogous expression can be obtained for \(\Delta p(x)\) sufficiently far away from the anode contact. 
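Eq. (23) can also be integrated numerically for a prescribed field profile, which makes the behaviour encoded in Eq. (24) concrete. The sketch below assumes fixed values of \(F_{0}\), \(\lambda_{\mathrm{an}}\) and \(z_{n}\) (all of them illustrative assumptions, not values taken from the simulations) and uses Eq. (20) for \(F(x)\):

```python
import numpy as np
from scipy.integrate import solve_ivp

q, kT = 1.602e-19, 0.0259 * 1.602e-19   # C, J (room temperature)
mu_n = 1e-8       # m^2/Vs, electron mobility (assumed, 10^-4 cm^2/Vs)
G_R = 6.24e27     # m^-3 s^-1, net generation rate (assumed uniform)
F0 = 1e7          # V/m, field deep in the bulk (assumed, ~1 V over 100 nm)
lam_an = 1e-9     # m, Debye screening length at the ohmic anode (assumed)
z_n = 2.0         # beta / beta_n (assumed)
d = 100e-9        # m, active layer thickness

u = lambda x: q * F0 * x / (2 * kT) + np.arcsinh(q * F0 * lam_an / (2 * kT))
F = lambda x: -F0 / np.tanh(u(x))                          # Eq. (20)
dFdx = lambda x: q * F0**2 / (2 * kT * np.sinh(u(x))**2)   # derivative of Eq. (20)

def rhs(x, dn):
    # Eq. (23): -mu_n*F*dDn/dx + (z_n - 1)*mu_n*(dF/dx)*Dn = G_R
    return [(G_R - (z_n - 1.0) * mu_n * dFdx(x) * dn[0]) / (-mu_n * F(x))]

sol = solve_ivp(rhs, (0.0, d), [0.0], method="LSODA",
                dense_output=True, rtol=1e-6, atol=1e10)
for x in (20e-9, 50e-9, 80e-9):
    Jn = q * mu_n * sol.sol(x)[0] * F(x)   # drift component, dominant far from the anode
    print(f"x = {1e9 * x:3.0f} nm   J_n = {0.1 * Jn:7.2f} mA/cm^2")
# Far from the anode, J_n(x) grows linearly with slope ~ -q*G_R, the
# behaviour captured by the closed-form limit derived next.
```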
For the case of perfect ohmic contacts (corresponding to \(p_{an}\rightarrow\infty\) and \(\lambda_{an}\to 0\)), \(\Delta n(x)\) far away from both contacts simplifies as \[J_{n}(x)\approx-q\mu_{n}F_{0}\Delta n(x)\approx-qG_{R}[x-x_{0n}], \tag{25}\] where \(x_{0n}=(kT/qF_{0})\times f_{0}(\beta/\beta_{n})\) with \(f_{0}(z)\equiv 2\times\int_{0}^{\infty}1-\tanh^{z}(u)\,du\). The integral function \(f_{0}(z)\) can be readily evaluated for special cases (e.g., \(f_{0}(2)=2\), \(f_{0}(1)=\ln(4)\)); for a general \(z\), we find \[f_{0}(z)=\psi\left(\frac{z+1}{2}\right)+\gamma+\ln(4), \tag{26}\] where \(\psi\) is the digamma function and \(\gamma\approx 0.5772\) is the Euler constant. For \(z\ll 1\) we expect \(f_{0}(z)\rightarrow\pi^{2}z/4\), while \(f_{0}(z)\approx\ln[2z\exp(\gamma)]\) in the limit of large \(z\). For practical purposes, however, it is convenient to assume the following numerical approximation for Eq. (26): \[f_{0}(z)\approx\ln\Big{(}\sqrt{1+B^{2}z^{2}}+\frac{\pi^{2}z}{4}\Big{)}, \tag{27}\] with \(B=[8\exp(\gamma)-\pi^{2}]/4\approx 1.0947\), which reproduces the two extreme cases. Figure 3: Normalized electron current density profiles simulated at (a) short circuit and (b) \(V=0.5\) V for the low-mobility device from Fig. 1(b) (\(\mu=10^{-3}\) cm\({}^{2}\)/Vs and \(\beta=\beta_{L}\)), indicated by the red solid lines. The corresponding approximations Eq. (25) and Eq. (29) for the electron current densities well inside the active layer are indicated by the solid blue lines and dashed lines, respectively. Accounting for diffusion The above analysis culminating in Eq. (24) predicts that \(J_{n}(x)\) is given by Eq. (25) far away from the anode contact (\(x\gg x_{0n}\)). On the other hand, for \(x<x_{0n}\), Eq. (24) approximates as \[\Delta n(x)\approx\frac{qG_{R}x^{2}}{2\mu_{n}kT[1+\beta/\beta_{n}]}, \tag{28}\] for perfect ohmic contacts (\(\lambda_{an}\to 0\)). However, this treatment neglects diffusion of \(\Delta n(x)\), assuming the drift component \(\Delta J_{n,\mathrm{drift}}=q\mu_{n}\Delta nF\) to be much larger than the corresponding diffusion component, \(\Delta J_{n,\mathrm{diff}}=\mu_{n}kT(\partial\Delta n/\partial x)\). While this is a good approximation for \(x\gg x_{0n}\), the assumption is generally not valid for electrons close to the anode contact. Using Eq. (28), the corresponding \(\Delta J_{n,\mathrm{diff}}\) is approximately equal but opposite to the drift component (\(\Delta J_{n,\mathrm{diff}}\approx-\Delta J_{n,\mathrm{drift}}\)), suggesting the actual total electron current \(U_{n}=\Delta J_{n,\mathrm{drift}}+\Delta J_{n,\mathrm{diff}}\)) for \(x<x_{0n}\) to be close to zero. Therefore, to a first approximation, the drift-only analysis underestimates the magnitude of \(J_{n}(x)\) by \(\Delta J_{n,\mathrm{drift}}(x_{0n})\) for \(x\gg x_{0n}\). Based on Eq. (28), we expect this correction to be given by \(\Delta J_{n,\mathrm{drift}}(x_{0n})\approx-qG_{R}x_{0n}/(z_{n}+1)\). A completely analogous consideration applies for \(\Delta p(x)\). Based on these considerations, the electron and hole current densities well inside the active layer can then be approximated as \[J_{n}(x) =-qG_{R}[x-x_{Rn}], \tag{29}\] \[J_{p}(x) =qG_{R}\big{[}x-d+x_{Rp}\big{]}. \tag{30}\] where \(x_{Rn}=\frac{kT}{qF_{0}}\times f_{R}(\beta/\beta_{n})\) and \(x_{Rp}=\frac{kT}{qF_{0}}\times f_{R}\big{(}\beta/\beta_{p}\big{)}\), with \(\beta_{p}=q\mu_{p}/(\varepsilon\varepsilon_{0})\) being an equivalent hole-only Langevin rate constant. Further, after approximating \(f_{0}(z)\) by Eq. 
(27), \(f_{R}(z)\) is given by \[f_{R}(z)=\frac{z+2}{z+1}f_{0}(z)\approx\frac{z+2}{z+1}\ln\Big{(}\sqrt{1+B^{2 }z^{2}}+\frac{\pi^{2}z}{4}\Big{)}. \tag{31}\] Fig. 3 shows the simulated \(J_{n}(x)\) of the low-mobility device in Fig. 1(b) at low illumination conditions (where the first-order recombination channel dominates). Comparing the simulations with Eq. (29) a yields a good agreement between the two inside the active layer. 3 Approximate expression for the current density As is implied in Fig. 3, the effect of injected carriers is to induce recombination zones for minority carriers near the contacts. In accordance with Eq. (29) and (30), \(x_{Rn}\) and \(x_{Rp}\) can be viewed as the corresponding effective widths within which all minority carriers recombine near the anode and cathode contact, respectively. Further, these recombination zones are separated by a recombination-free charge collection zone of width \[d_{\rm eff}=d-x_{Rn}-x_{Rp} \tag{32}\] inside the active layer, as illustrated in the schematic energy level diagram in Fig. 4(a). Note that the widths of the recombination zones depend on the ratios between the mobilities and recombination coefficient and grow with increasing applied voltage. As a result, the recombination-free charge collection zone is reduced with increasing voltage giving rise to an increased charge collection loss. Conversely, in the limit of large reverse-bias voltages, high mobilities, or small \(\beta\), we expect \(d_{\rm eff}\to d\), as the ideal high-mobility limit is approached. Finally, combining Eq. (29) and (30), the total current density \(J=J_{n}(x)+J_{p}(x)\), evaluated inside the active layer, can be approximated as \[J(V)=\Big{(}1-\frac{\theta kT}{qF_{0}(V)d}\Big{)}\times J_{\rm ideal}(V), \tag{33}\] where \(J_{\rm ideal}(V)\) is given by Eq. (17) and \[\theta=f_{R}(\beta/\beta_{n})+f_{R}\big{(}\beta/\beta_{p}\big{)}. \tag{34}\] with \(f_{R}\) defined through Eq. (31). Subsequently, the effect of a finite mobility is to reduce the magnitude of the current density, relative to the idealized high-mobility limit [Eq. (17)], by a prefactor given by \(d_{\rm eff}/d\). In addition to the applied voltage and the (effective) built-in voltage (via \(F_{0}\)), this current reduction factor only depends on \(\beta/\beta_{n}\) and \(\beta/\beta_{p}\) in the active layer (through \(\theta\)). Eq. (33) generally describes the current density under low intensity conditions when first-order recombination with injected carriers dominates. In Fig. 4 the light intensity dependence of \(J_{\rm SC}/J_{\rm gen}\) of the low-mobility case from Fig. 1 (\(\mu=10^{-4}\) cm\({}^{2}\)/Vs), along with the corresponding \(J\)-\(V\) curves at 0.001 and 1 sun conditions, is simulated and compared to the analytical prediction Eq. (33). Indeed, a nearly perfect agreement between Eq. (33) and the simulations is obtained at low intensities (0.001 suns). Note that for the case \(\mu\to\infty\), Eq. (33) reduces back to the ideal diode equation \(J_{\rm ideal}(V)\). Figure 4: In (a) a schematic energy level diagram indicating the recombination zones near the contacts, where first-order recombination between injected and photogenerated carriers dominate, and charge collection zone of width \(d_{\rm eff}\) where first-order recombination is negligible. In (b), the simulated intensity dependence of \(J_{\rm SC}/J_{\rm gen}\) (red solid line) of the low-mobility device (\(\mu=10^{-4}\) cm\({}^{2}\)/Vs) from Fig. 
1 is shown alongside the corresponding analytical approximations based on Eq. (33) (blue dashed line) and Eq. (41) (black dotted line). In (c) and (d), the corresponding simulated _J-V_ curves at 0.001 suns [from Fig. 1(b)] and 1 sun [from Fig. 1(a)] are shown, respectively, and indicated by red solid lines. The analytical approximations based on Eq. (33) (accounting for 1\({}^{\rm st}\) order recombination with injected carriers) and Eq. (41) (also including effects of 2\({}^{\rm nd}\) order recombination among photogenerated carriers) are indicated by blue dashed and black dotted lines, respectively. The associated contributions from 1\({}^{\rm st}\) and 2\({}^{\rm nd}\) order recombination to the total charge collection losses, relative to the ideal limit [Eq. (17), dark-cyan solid lines], are highlighted by purple- and blue-shaded areas, respectively. ### Accounting for second-order recombination between photogenerated carriers As evident from Fig. 4, a deviation between the simulations and the prediction based on Eq. (33) eventually appears at high enough intensities as second-order recombination between photogenerated electrons and holes become important. Under these conditions, the \(\beta\Delta n\Delta p\) term in Eq. (12) can no longer be neglected. Subsequently, there will be a competition between charge extraction and second-order recombination taking place within the bulk of the active layer, with \(J(V)\) subsequently deviating from Eq. (33). However, a correction to Eq. (33) can be obtained by accounting for second-order recombination within the charge collection zone, \(x_{Rn}<x<d-x_{RP}\). Since \(\Delta n\gg n_{0}\) and \(\Delta p\gg p_{0}\), within this region, \(\Delta n\) and \(\Delta p\) represent the actual excess densities of electrons and holes induced by photogeneration. First-order recombination (dominating near the contacts) is accounted for by assuming that \(J_{n}(x_{Rn})=0\) and \(J_{p}\big{(}d-x_{RP}\big{)}=0\). In other words, the device is effectively treated as having a thickness \(d_{\text{eff}}\), where all electrons (holes) photogenerated within \(0<x<x_{Rn}\) (\(d-x_{RP}<x<d\)) are lost due to recombination. #### iii.3.1 Effect of second-order recombination inside the collection zone We consider the effect of second-order recombination at small voltages when \(V_{\text{OC}}-V\gg kT/q\), corresponding to \(G_{R}\approx G\). Then, assuming that the transport of photogenerated carriers is dominated by drift and that space charge effects are negligible, within the charge collection zone, Eq. (12) simplifies as \[-\mu_{n}F\,\frac{\partial\Delta n}{\partial x}=G-\beta\Delta n(x)\Delta p(x)=G -\frac{\beta J}{q\mu_{p}F}\Delta n(x)+\frac{\mu_{n}\beta}{\mu_{p}}\Delta n(x)^ {2}, \tag{35}\] where \(J=q\big{[}\mu_{n}\Delta n(x)+\mu_{p}\Delta p(x)\big{]}F\) was used in the last step to eliminate \(\Delta p(x)\). Assuming that \(\Delta n(x_{Rn})=0\), while noting that the total current \(J\) is independent of \(x\), Eq. (35) can be readily solved for \(\Delta n(x)\) with the result: \[\Delta n(x)=\sqrt{\frac{\mu_{p}G}{\mu_{n}\beta}}\sqrt{1-\Big{(}\frac{J}{J_{ \beta}}\Big{)}^{2}}\tan\left(2\sqrt{1-\Big{(}\frac{J}{J_{\beta}}\Big{)}^{2}} \frac{qG[x-x_{Rn}]}{J_{\beta}}+\sin^{-1}\Big{(}\frac{J}{J_{\beta}}\Big{)} \right)+\frac{J}{2q\mu_{n}F}\,, \tag{36}\] for \(x_{Rn}<x<d-x_{RP}\), where \(J_{\beta}=2q\sqrt{\mu_{n}\mu_{p}}\sqrt{(G/\beta)}|F|\). 
An analogous treatment applies for photogenerated holes within the collection zone; in fact, assuming that \(\Delta p=0\) at \(x=d-x_{RP}\), it can be shown that \(\Delta p(x)=\big{(}\mu_{n}/\mu_{p}\big{)}\times\Delta n\big{(}d-x_{RP}-x\big{)}\). Unfortunately, an analytically tractable explicit solution of Eq. (36) in terms of the total current density \(J\) cannot be established [24]. However, noting that \(\mu_{n}\Delta n=\mu_{p}\Delta p\) (and thus \(J_{n}=J_{p}=J/2\)) at the midplane of the charge collection zone, the following implicit relation is obtained: \[\sin^{-1}\left(\frac{J}{J_{\beta}}\right)=-\frac{J_{\rm gen}^{*}}{J_{\beta}} \sqrt{1-\left(\frac{J}{J_{\beta}}\right)^{2}}. \tag{37}\] Here, \(J_{\rm gen}^{*}\) is defined as \(J_{\rm gen}^{*}=qGd_{\rm eff}\) and corresponds to the magnitude of the current density obtained when all carriers photogenerated within the collection zone are extracted. We note that the derivation of Eq. (36) assumes the electric field to be uniform in the charge collection zone. At small voltages, the electric field within this region is well approximated by \(F\approx-F_{0}\). However, the derivation of Eq. (36) also neglects the effect of diffusion of photogenerated carriers induced by super-linear density profiles in the collection zone, resulting in Eq. (37) overestimating the magnitude of the actual \(J\). The deviation may be estimated from the difference between the expected diffusion components (\(\mu_{n}kT\,\partial\Delta n(x)/\partial x\)) of Eq. (36) evaluated at the edge (\(x=x_{Rn}\)) and midplane of the collection zone. It turns out that this effect can be corrected for in Eq. (37) by modifying \(J_{\beta}\) as [43] \[J_{\beta}\approx 2q\sqrt{\mu_{n}\mu_{p}}\sqrt{\frac{G}{\beta}}F_{0}\times \left[1+\frac{2kT}{qF_{0}d_{\rm eff}}\right]^{-1/2}\,. \tag{38}\] where the final term on the right-hand side of Eq. (38) represents a second-order correction induced by a diffusion of \(\Delta n(x)\) in the collection zone. In accordance with Eq. (37) the current density depends on the ratio \(J_{\rm gen}^{*}/J_{\beta}\), as shown in Fig. 5. As expected, \(J\) approaches \(-J_{\rm gen}^{*}\) in limit when \(J_{\rm gen}^{*}\ll J_{\beta}\), corresponding to the low-intensity limit when second-order recombination within the charge collection zone is negligible. At high intensities when \(J_{\rm gen}^{*}\gg J_{\beta}\), in turn, Eq. (37) predicts that \(J\to-J_{\beta}\); in this regime, the current is strongly limited by second-order bimolecular recombination between photogenerated electrons and holes inside the collection zone. For the general case, an explicit approximation for \(J\) can be obtained based on expanding the left-hand-side of Eq. (37) as \(\sin^{-1}(J/J_{\beta})\approx J/J_{\beta}\). Subsequently, the following approximative expression was found: \[J\approx-\frac{J_{\rm gen}^{*}}{\sqrt{1+K(V)}}\,, \tag{39}\] with \[K(V)=\left(\frac{J_{\rm gen}^{*}}{J_{\beta}}\right)^{2}+\kappa\,\frac{J_{\rm gen }^{*}}{J_{\beta}}, \tag{40}\] where \(\kappa\) is a correction factor. Here, the case \(\kappa=0\) corresponds to the first-order approximation \(\sin^{-1}\bigl{(}J/J_{\beta}\bigr{)}\approx J/J_{\beta}\) in Eq. (37). However, by introducing an additional correction term (with non-zero \(\kappa\)), the accuracy between the exact value for \(J\) based on Eq. (37) and Eq. (39) can be further improved. As shown in Fig. 5, a good agreement is obtained for \(\kappa=0.173\). #### iii.2.2 Revised approximation for the current density Based on Eq. (39), Eq. 
(33) can be extended to account for the effect of second-order recombination among photogenerated charge carriers. Noting that \(J_{\rm gen}^{*}\) is equal to Eq. (33) for \(V_{\rm OC}-V\gg kT/q\), it becomes evident that the effect of second-order recombination is to reduce the current density by a factor \(1/\sqrt{1+K(V)}\). Hence, the current density can be approximated as \[J(V)=\frac{1}{\sqrt{1+K(V)}}\times\left(1-\frac{\theta kT}{qF_{0}(V)d}\right) \times J_{\rm ideal}(V). \tag{41}\] In other words, apart from the reduction induced by recombination with injected carriers at the contacts, the magnitude of the current density is further reduced, relative to \(J_{\rm ideal}(V)\), by an Figure 5: Calculated \(J/J_{\beta}\) as a function of \(J_{\rm gen}^{*}/J_{\beta}\). The solid line corresponds to the exact (implicit) solution to Eq. (37). The approximations based on Eq. (39) with \(\kappa=0\) and \(\kappa=0.173\) are indicated by dotted and short-dashed lines, respectively. The case \(J=J_{\rm gen}^{*}\) (long-dashed lines) neglecting second-order recombination between photogenerated carriers has been included for comparison. additional reduction factor induced by second-order recombination among photogenerated carriers within the charge collection zone. In accordance with Eq. (41), the current loss induced by second-order recombination among photogenerated carriers depends on the generation rate (or \(J_{\rm gen}\)) with the magnitude of \(J/J_{\rm gen}\) generally decreasing with increasing light intensity. Indeed, Eq. (41) reproduces both the simulated intensity dependence of \(J_{\rm SC}/J_{\rm gen}\) and \(J\)-\(V\) curve at 1 sun as shown in Fig. 4(b) and (d), respectively. The degree of second-order recombination is critically determined by \(K\) as per Eq. (40); at short-circuit, Eq. (40) can be approximated by \[K\approx\frac{\beta J_{\rm gen}d^{3}}{4\mu_{n}\mu_{p}\nu_{bi}^{2}}\times\Big{(} 1-\frac{\theta kT}{qV_{bi}}\Big{)}^{2}. \tag{42}\] Apart from the photogeneration current (\(J_{\rm gen}\)), the degree of the second-order recombination also depends on the recombination coefficient and the product of the electron and hole mobility. When \(K\ll 1\), corresponding low intensities, small \(\beta\), or high enough mobilities, Eq. (41) reduces to Eq. (33) as the second-order recombination within the collection zone is negligible. Conversely, in the high intensity regime, when \(K\gg 1\), second-order recombination dominates over charge extraction of the photogenerated charge carriers. In this regime, \(J\) approaches a linear \(V\) dependence while displaying a sublinear intensity dependence of the form \(J\propto G^{1/2}\) (or \(J/J_{\rm gen}\propto G^{-1/2}\)), consistent with previous reports [24, 44, 45]. ### Validation of the analytical model To substantiate the presented analytical framework for the current density, we next compare the obtained diode equation (Eq. (41)) against numerical simulations of different cases. Figure 6 shows the numerically simulated \(J\)-\(V\) characteristics for varying mobilities, \(\beta\) and mobility imbalance \(\mu_{p}/\mu_{n}\) under 1 sun and 0.001 suns illumination conditions. Upon comparing the numerically simulated \(J\)-\(V\) curves (coloured solid lines) with the predictions based on Eq. (41) (black dashed lines) an excellent agreement is obtained for voltages sufficiently below \(V_{bi}\), in case of balanced mobilities. 
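To make the assembled result concrete, the sketch below evaluates Eq. (41) numerically by combining Eqs. (17), (27), (31), (34), (38) and (40). The layer thickness, densities of states, gap, generation rate, mobilities and \(\beta\) mirror the simulation settings quoted earlier, while \(\varepsilon\), \(V_{bi}\) and the use of the leading-order estimate \(F_{0}\approx(V_{bi}-V)/d\) in place of Eq. (21) are simplifying assumptions made only for this illustration:

```python
import numpy as np

# Physical constants and device parameters (SI units); d, Nc, Nv, Eg, G,
# the mobilities and beta follow the simulation settings quoted in the text,
# whereas eps and Vbi are assumed values for this illustration.
q, kT, eps0 = 1.602e-19, 0.0259 * 1.602e-19, 8.854e-12
d = 100e-9            # m, active layer thickness
Nc = Nv = 1e27        # m^-3 (10^21 cm^-3)
Eg = 1.42 * q         # J, effective transport gap
G = 6.24e27           # m^-3 s^-1, generation rate at 1 sun
mu_n = mu_p = 1e-8    # m^2/Vs (10^-4 cm^2/Vs)
beta = 1e-16          # m^3/s (10^-10 cm^3/s)
eps = 3.5             # assumed relative permittivity
Vbi = 1.1             # V, assumed effective built-in voltage
kappa, B = 0.173, 1.0947

ni2 = Nc * Nv * np.exp(-Eg / kT)   # thermal carrier density squared
J0 = q * beta * ni2 * d            # dark saturation current density, Eq. (17)
J_gen = q * G * d                  # saturated photocurrent, ~10 mA/cm^2
# With these inputs, (kT/q)*ln(1 + J_gen/J0) comes out near 0.81 V.

f0 = lambda z: np.log(np.sqrt(1 + B**2 * z**2) + np.pi**2 * z / 4)  # Eq. (27)
fR = lambda z: (z + 2) / (z + 1) * f0(z)                            # Eq. (31)
beta_n, beta_p = q * mu_n / (eps * eps0), q * mu_p / (eps * eps0)
theta = fR(beta / beta_n) + fR(beta / beta_p)                       # Eq. (34)

def J(V):
    F0 = (Vbi - V) / d                       # simplified stand-in for Eq. (21)
    J_ideal = -J_gen + J0 * (np.exp(q * V / kT) - 1.0)              # Eq. (17)
    prefac = 1.0 - theta * kT / (q * F0 * d)     # first-order (injected) loss
    d_eff = prefac * d                           # Eq. (32)
    J_beta = (2 * q * np.sqrt(mu_n * mu_p) * np.sqrt(G / beta) * F0
              / np.sqrt(1.0 + 2 * kT / (q * F0 * d_eff)))           # Eq. (38)
    K = (q * G * d_eff / J_beta)**2 + kappa * q * G * d_eff / J_beta  # Eq. (40)
    return prefac * J_ideal / np.sqrt(1.0 + K)                      # Eq. (41)

for V in (0.0, 0.4, 0.6, 0.8):
    print(f"V = {V:.1f} V   J = {0.1 * J(V):8.3f} mA/cm^2")  # A/m^2 -> mA/cm^2
```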
The agreement is particularly good for the low-intensity cases, where the analytical and simulated curves are virtually indistinguishable across the board. At higher forward-bias voltages, however, a deviation is obtained for systems with low mobilities or large \(V_{\rm OC}\). This can be attributed to the fact that the approximation for \(F_{0}\) [Eq. (21)] in Eq. (41) is no longer valid at high voltages (and that Eq. (41) diverges for \(d_{\rm eff}\to 0\)). Note that Eq. (41) is essentially indistinguishable from Eq. (33) at 0.001 suns, but also at 1 sun for mobilities \(>10^{-3}\) cm\({}^{2}\)/Vs in Fig. 4. A good agreement is also obtained for imbalanced mobilities, especially at low intensities. At large enough mobility imbalances, however, a deviation is eventually observed between the analytically predicted and the simulated _J-V_ curves. This deviation is attributed to the inevitable space charge build-up of the slower photogenerated carriers at high intensities, which screens the electric field inside the active layer, resulting in strongly space-charge-limited photocurrents [46, 47, 48, 49]. The developed analytical framework can be used to gain insights about loss mechanisms in organic solar cells and photodiodes. In addition, the derived approximations can be used to extract material parameters from experimental data, for example using a global fitting procedure, in systems where bimolecular recombination is the dominant loss mechanism of photogenerated charge carriers. In this regard, to further validate the analytical model, we applied our diode equation to experimental data of the OPV model system PM6:ITIC [45, 50]. The experimental _J-V_ curves of a 110 nm thick PM6:ITIC device under 1 sun illumination are depicted in Fig. 7(a). The corresponding normalized \(J_{\text{SC}}/J_{\text{gen}}\) is shown as a function of light intensity in Fig. 7(b), assuming \(J_{\text{gen}}\) to be proportional to the light intensity. The details of the experiments and device fabrication can be found elsewhere [45, 50]. The carrier mobilities for this system were previously found to be balanced and estimated to be \(\mu=2\times 10^{-4}\) cm\({}^{2}\)/Vs [45]. As evident from the pronounced intensity dependence of \(J_{\text{SC}}/J_{\text{gen}}\), the PM6:ITIC device suffers from second-order bimolecular recombination at the higher intensities (above 1 sun). The solid lines in Fig. 7 indicate the qualitative fit based on Eq. (41) using \(\beta\) and \(V_{bi}\) as fitting parameters. A good qualitative fit is obtained with \(\beta\approx 4\times 10^{-11}\) cm\({}^{3}\)/s and \(V_{bi}\approx 1.2\) V. The obtained value for \(\beta\) is about 7 times smaller than the Langevin recombination coefficient, consistent with the moderate efficiency of this system [50]. The deviation between the experimental data and the fit at the highest intensities in Fig. 7(b) is attributed to additional losses induced by the series resistance of the external circuit (cf. Section III.E). Figure 6: Comparison between the numerical simulations (coloured solid lines) and the analytical approximations based on Eq. (41) (corresponding dashed lines) under 1 sun (left-hand-side panels) and 0.001 suns (right-hand-side panels) illumination conditions. In (a) and (b), the simulated \(J\)-\(V\) characteristics for the case of varying \(\beta/\beta_{L}\) are shown for \(\mu_{n}=\mu_{p}=10^{-3}\) cm\({}^{2}\)/Vs. 
In (c) and (d), the simulated \(J\)-\(V\) characteristics at different but balanced mobilities are shown for \(\beta=10^{-11}\) cm\({}^{3}\)/s. In (e) and (f), the simulated \(J\)-\(V\) characteristics at increasing mobility imbalance \(\mu_{p}/\mu_{n}\) are shown for a fixed \(\mu_{n}=10^{-3}\) cm\({}^{2}\)/Vs and constant \(\beta=10^{-10}\) cm\({}^{3}\)/s. ### Implications for photovoltaic device performance The presented theoretical framework implies that the charge collection loss induced by bimolecular recombination can generally be partitioned into zeroth, first and second order contributions. The zeroth order case [Eq. (17)] represents perfect charge collection of photogenerated carriers, containing only fundamental losses associated with the \(V_{\text{OC}}\). Conversely, first- and second-order losses directly reduce the number of collected photogenerated carriers. The corresponding contributions from first- and second-order losses are highlighted by shaded areas in Fig. 4(c) and (d) for the case \(\mu=10^{-4}\) cm\({}^{2}\)/Vs. These results suggest that even at 1 sun the predominant charge collection loss due to bimolecular recombination in OPVs (with ohmic contacts) is first-order, while second-order losses only show up at low mobilities which may not be relevant to state-of-the-art OPVs. The implications of first-order charge collection losses for photovoltaic devices are discussed below. #### ii.5.1 Fill Factor and PCE The presence of first-order current losses, which depend on the voltage in accordance with Eq. (33), has important implications for the FF, and ultimately the maximal achievable PCE in OPVs. Figure 7: (a) Experimental \(J\)-\(V\) curves (symbols) of an organic solar cell device based on PM6:ITIC. In (b), the normalized \(J_{\text{SC}}/J_{\text{gen}}\) as a function of light intensity for PM6:ITIC (symbols), measured under short-circuit conditions, is shown; the star-shaped symbol indicates the value at 1 sun. The solid lines in (a) and (b) indicate the corresponding qualitative fits using Eq. (41). For a solar cell under a given light intensity (\(I_{L}\)), the PCE and FF are defined through \(\mathrm{PCE}=J_{\mathrm{mp}}V_{\mathrm{mp}}/I_{L}=\mathrm{FF}\times J_{\mathrm{SC}}V_{\mathrm{OC}}/I_{L}\), where \(V_{\mathrm{mp}}\) (\(J_{\mathrm{mp}}\)) is the voltage (current density) at which the output power of the device is maximal. In the ideal diode limit, when \(J(V)\) is given by Eq. (17), the \(\mathrm{FF}\) can be approximated as [28] \[\mathrm{FF}_{\mathrm{ideal}}=\frac{\frac{qV_{\mathrm{OC}}}{kT}-\ln\left(1+\frac{qV_{\mathrm{OC}}}{kT}\right)}{1+\frac{qV_{\mathrm{OC}}}{kT}}, \tag{43}\] with the corresponding maximum power point voltage given by \[V_{\mathrm{mp}}^{\mathrm{ideal}}=V_{\mathrm{OC}}-\frac{kT}{q}\ln\left(1+\frac{qV_{\mathrm{OC}}}{kT}\right). \tag{44}\] Eq. (43) strictly applies for the ideal-diode regime, corresponding to the limit of large mobilities. However, an upper limit for the \(\mathrm{FF}\) in case of finite mobilities can be estimated assuming that the charge collection is only limited by first-order recombination with injected carriers. Then, noting that the principal voltage dependence in Eq. (33) is determined by the \(J_{\mathrm{ideal}}(V)\) component, while the prefactor varies relatively slowly with voltage, we may take \(V_{\mathrm{mp}}\approx V_{\mathrm{mp}}^{\mathrm{ideal}}\) as an upper limit estimate for \(V_{\mathrm{mp}}\). 
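For orientation, Eqs. (43) and (44) are straightforward to evaluate; a small sketch (assuming room temperature and using the two \(V_{\rm OC}\) values that appear later in Fig. 8) gives the ideal-limit numbers against which the reduced FF is compared below:

```python
import numpy as np

kT_q = 0.02585  # V, thermal voltage at ~300 K (assumed)

def ff_ideal(Voc):
    v = Voc / kT_q                             # normalized open-circuit voltage
    return (v - np.log(1 + v)) / (1 + v)       # Eq. (43)

def vmp_ideal(Voc):
    return Voc - kT_q * np.log(1 + Voc / kT_q) # Eq. (44)

for Voc in (0.81, 0.63):
    print(f"V_OC = {Voc:.2f} V  ->  FF_ideal = {ff_ideal(Voc):.3f}, "
          f"V_mp_ideal = {vmp_ideal(Voc):.3f} V")
```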
Subsequently, the \(\mathrm{FF}\) is reduced relative to the ideal limit by a factor corresponding to the ratio between the prefactor at \(V=V_{\mathrm{mp}}\) and at short-circuit as: \[\mathrm{FF}\approx\frac{\mathrm{FF}_{\mathrm{ideal}}}{\left(1-\frac{\theta kT }{qV_{\mathrm{pl}}}\right)}\times\left(1-\frac{\theta kT}{qF_{0}\left(V_{ \mathrm{mp}}^{\mathrm{ideal}}\right)d}\right), \tag{45}\] where \(\mathrm{FF}_{\mathrm{ideal}}\) and \(V_{\mathrm{mp}}^{\mathrm{ideal}}\) is given by Eq. (43) and Eq. (44), respectively. In accordance with Eq. (45), the \(\mathrm{FF}\) is expected to depend on the ratios between the bimolecular recombination coefficient and the mobilities, i.e., \(\beta/\beta_{n}\) and \(\beta/\beta_{p}\). Fig. 8 shows the \(\mathrm{FF}\) as a function of the sum of the ratios, \(\beta/\beta_{R}\), where \(\beta_{R}=\left(\beta_{p}^{-1}+\beta_{n}^{-1}\right)^{-1}\). The symbols in Fig. 8(a) and (b) indicate numerically simulated FFs from a wide set of different mobilities, bimolecular recombination coefficients, mobility imbalances, and generation rates, while keeping the \(V_{\mathrm{OC}}\) fixed at 0.81 V and 0.63 V, respectively. Note that \(\beta\) in these simulations have been restricted to \(\beta/\beta_{L}\leq 1\). The approximation based on Eq. (45) for the special case \(\beta_{n}\gg\beta_{p}\) (solid lines) and \(\beta_{n}=\beta_{p}\) (dashed lines), corresponding to the expected upper and lower bounds for Eq. (45) have been included for comparison. Eq. (45) provides a good estimate of the simulated FFs at small \(\beta/\beta_{R}\) (large mobilities and/or small \(\beta\)) and low \(V_{\mathrm{OC}}\), in particular. At larger \(\beta/\beta_{R}\) and higher \(V_{\mathrm{OC}}\), on the other hand, non-uniform electric fields and space charge effects in the active layer become increasingly important. Eventually, as \(\beta/\beta_{R}\gg 1\), corresponding to highly imbalanced mobilities, the \(\mathrm{FF}\) at high \(V_{\mathrm{OC}}\) approaches a value close to 0.385 as the simulated devices become severely space-charge-limited. Figure 8: The Fill Factor as a function of \(\beta/\beta_{R}\) as extracted from a large set of numerically simulated devices (open symbols) with different \(\mu_{n}\), \(\mu_{p}\), \(\beta/\beta_{L}\) and \(G\) at fixed open-circuit voltages of (a) \(V_{\rm OC}=0.81\) V and (b) \(V_{\rm OC}=0.63\) V. In the simulations, \(\beta/\beta_{L}\) spans between 0.001 and 1, while \(\mu_{n}\) and \(\mu_{p}\) are allowed to independently vary between \(10^{-6}\) and \(10^{2}\) cm\({}^{2}\)/Vs. For comparison, the approximations based on Eq. (45) for \(\beta_{n}\gg\beta_{p}\) (solid lines) and \(\beta_{n}=\beta_{p}\) (blue dashed lines), where the former case corresponds to the estimated upper limit of the FF when the charge collection is limited by first-order recombination with injected carriers. The black dotted line in (a) corresponds to FF = 0.385 expected for space-charge limited photocurrents (SCLC). Finally, combining Eq. (45) with the corresponding \(J_{\rm SC}\) predicted by Eq. (33), an upper limit estimate of the PCE for low-mobility thin-film devices with ohmic contacts can be established. The corresponding PCE takes the form \[{\rm PCE}={\rm PCE}_{\rm ideal}\times\left(1-\frac{\theta kT}{qF_{0}(V_{\rm mp }^{\rm ideal})d}\right), \tag{46}\] where \({\rm PCE}_{\rm ideal}\) is the PCE expected in case of perfect charge collection. The parameter \(\theta\) may thus be taken as an associated figure-of-merit for charge collection. Fig. 
9 shows the \({\rm PCE}/{\rm PCE}_{\rm ideal}\) of the numerically simulated devices from Fig. 8 but plotted as a function of \(\theta\), where \(\theta\) is obtained from Eq. (34) using \(\beta/\beta_{n}\) and \(\beta/\beta_{p}\) as input. As shown, for a given \(V_{\rm OC}\) (and \(V_{bi}\)) the simulated data set coalesces into a common trend which only depends on \(\theta\). Indeed, comparing the simulated trend with the upper limit estimate Eq. (46), a very good overall agreement can be obtained. As a result, Eq. (46) can be used to separately estimate the loss in the PCE associated with charge collection in OPV devices with selective ohmic contacts. These results can also be used to facilitate machine-learning for PCE predictions. Finally, we note that \(\theta\) [Eq. (34)] is markedly different to previously proposed figures-of Figure 9: The \({\rm PCE}/{\rm PCE}_{\rm ideal}\) of the numerically simulated devices from Fig. 8 are shown as a function of the figure-of-merit \(\theta\) at \(V_{\rm OC}=0.81\) V, as indicated by open symbols. The corresponding case for \(V_{\rm OC}=0.63\) V is shown in the inset. The upper limit estimate of the PCE based on Eq. (46), expected in the presence of first-order recombination with injected carriers, is depicted by the solid lines. merit which are proportional to \(\frac{\beta G}{\mu_{n}\mu_{p}}\)[13, 14, 22]. The fact that the spread in the numerically simulated data (from Fig. 8) is drastically reduced when parametrized against \(\theta\) (see Fig. 9) elevates \(\theta\) as a more accurate figure-of-merit for charge collection in thin-film OPVs. #### iii.2.2 The dark current and the reciprocity between charge injection and extraction The theoretical framework based on Eq. (33) can be consolidated with the principle of reciprocity between charge injection and extraction [51], provided that the voltage dependence of the photovoltaic external quantum efficiency (EQE) is accounted for [52]. In general, the EQE describes the efficiency for incident photons to be converted into collected electron-hole pairs at the contacts. In excitonic semiconductors, the EQE can be written as \(\text{EQE}=\eta_{\text{abs}}\eta_{\text{CGV}}\eta_{\text{col}}\), where \(\eta_{\text{abs}}\) is the efficiency for an incident photon to be absorbed and generate an exciton and \(\eta_{\text{CGY}}\) is the efficiency for a photogenerated exciton to generate a free electron-hole pair. Finally, \(\eta_{\text{col}}\) is the charge collection efficiency of photogenerated carriers defined as \(\eta_{\text{col}}=|J-J_{\text{dark}}|/J_{\text{gen}}\), where \(J_{\text{dark}}\) is the current density under dark conditions. Note that, from this perspective, \(J_{\text{gen}}\) can be equivalently expressed as \(J_{\text{gen}}=q\int_{0}^{\infty}\eta_{\text{abs}}\eta_{\text{CGY}}\Phi_{ \text{ext}}\,dE\), where \(\Phi_{\text{ext}}(E)\) is the photon flux of the external light source used to photogenerated charge carriers (e.g., 1 sun). In accordance with Eq. (33), under conditions when recombination between photogenerated and injected charge carriers dominates, the charge collection efficiency is given by \[\eta_{\text{col}}(V)=1-\tfrac{\theta kT}{qF_{0}(V)d}\,, \tag{47}\] for \(V\ll V_{bi}\). Based on Eq. (47), \(\eta_{\text{col}}\) depends on the applied voltage but is independent of light intensity, consistent with the first-order nature of the current loss. In fact, Eq. (47) applies to dark conditions as well, noting that the current density Eq. 
(33) in the dark is given by \[J_{\text{dark}}(V)=\left(1-\tfrac{\theta kT}{qF_{0}(V)d}\right)\times J_{0} \left[\exp\left(\tfrac{qV}{kT}\right)-1\right], \tag{48}\] for \(V\ll V_{bi}\). Hence, the effect of finite mobilities, is to reduce the injected dark current density, relative to the ideal-diode case \(J_{\text{ideal}}=J_{0}\left[\exp\left(\tfrac{qV}{kT}\right)-1\right]\) (i.e., Eq. (17) in the dark), by the prefactor \(\eta_{\text{col}}(V)\) given by Eq. (47). Concomitantly, \(J_{\text{dark}}(V)\) can be described in terms of an effective dark saturation current density \(J_{0,\text{eff}}(V)=\eta_{\text{col}}(V)\times J_{0}\), which depends on the applied voltage. This is demonstrated in Fig. 10, where \(J\)-\(V\) curves under dark conditions are simulated and compared to Eq. (48), showing excellent agreement for \(V\ll V_{bi}\). Under illumination, the rates of charge carrier injection and extraction are exactly balanced at open circuit (where \(J=0\)). From Eq. (33), the open-circuit voltage is given by \[V_{\text{OC}}=\frac{k\tau}{q}\ln\Big{(}1+\frac{J_{\text{gen}}}{J_{0}}\Big{)}. \tag{49}\] However, since \(J_{0}\) is defined as the (saturated) thermal generation current density in the bulk, and noting that charge carriers can be thermally generated either radiatively or non-radiatively, we can write \(J_{0}=(q/\eta_{\text{rad}})\int_{0}^{\infty}\eta_{\text{abs}}\eta_{\text{CGY }}\Phi_{\text{BB}}\,dE\), where \(\Phi_{\text{BB}}(E)\) is the Black-body photon flux of the environment and \(\eta_{\text{rad}}\) is the radiation efficiency. On the other hand, provided that \(\eta_{\text{col}}\) remain independent of intensity, it follows that \(J_{\text{gen}}/J_{0}=\eta_{\text{col}}(V)J_{\text{gen}}/J_{0,\text{eff}}(V)\). If the EQE is evaluated at low intensities under short-circuit conditions (\(V=0\)), Eq. (49) can then be reformulated as \[V_{\text{OC}}=\frac{k\tau}{q}\ln\Big{(}1+\frac{J_{\text{SC}}}{J_{\text{a,SC}}} \Big{)}\,, \tag{50}\] consistent with detailed balance, where \(J_{\text{SC}}^{*}=q\int_{0}^{\infty}\text{EQE}(E)\phi_{\text{ext}}(E)\,dE\) is the expected \(J_{\text{SC}}\) in the absence of second-order losses, while \(J_{0,\text{SC}}=(q/\eta_{\text{rad}})\int_{0}^{\infty}\text{EQE}(E)\phi_{ \text{BB}}(E)\,dE\) corresponds to \(J_{0,\text{eff}}(V=0)\). Note that \(J_{0,\text{SC}}\) generally differs from \(J_{0}\). It should be emphasized that Eq. (50) assumes that \(J_{\text{SC}}^{*}=J_{\text{SC}}\) which only apply for conditions when Eq. (42) is negligibly small, i.e., \(K\ll 1\). For very low mobilities, this condition is no longer valid, and a deviation from reciprocity is expected at open circuit, in agreement with previous findings [52]. Figure 10: (a) Simulated dark current densities at different mobilities with \(\beta\) of \(10^{-10}\) cm\({}^{3}\)/s. (b) Normalized dark current densities relative to the idealized expected dark current density [Eq. (17)] in the limit of high mobilities. The approximations based on Eq. (48) are indicated by dashed lines. ### Other effects The presented diode equations neglect the effect of external resistive losses. In real devices, the voltage \(V\) over the active layer (diode) is generally different to the externally applied voltage \(V_{\rm ext}\), due to the finite resistance of the wires (connecting the load to the diode) and the electrodes. In addition, diode devices suffer from parasitic shunts, commonly attributed to extrinsic non-idealities from the device fabrication. 
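The voltage-dependent prefactor of Eqs. (47) and (48) can be made concrete with a minimal numerical sketch of the device-intrinsic current before the extrinsic resistive and shunt contributions are added in Eq. (51) below. This is only an illustration under stated assumptions: the internal field is approximated as \(F_{0}(V)=(V_{bi}-V)/d\), which is an assumption of this sketch rather than the expression used in the paper, and the values of \(\theta\), \(d\), \(V_{bi}\), and \(J_{0}\) are placeholders, not fitted device parameters.

```python
import numpy as np

# Sketch of Eqs. (47)-(48): collection efficiency and dark current with a
# finite-mobility prefactor. F0(V) and all parameter values below are
# illustrative assumptions only.

kT_q = 0.025851          # thermal voltage kT/q at ~300 K [V]
d = 100e-7               # active-layer thickness [cm] (assumed 100 nm)
theta = 0.5              # charge-collection figure-of-merit (assumed)
V_bi = 0.9               # built-in voltage [V] (assumed)
J0 = 1e-12               # dark saturation current density [mA/cm^2] (assumed)

def F0(V):
    """Assumed mean internal field at voltage V: (V_bi - V)/d."""
    return (V_bi - V) / d

def eta_col(V):
    """Collection efficiency, Eq. (47), valid for V << V_bi."""
    return 1.0 - theta * kT_q / (F0(V) * d)

def J_dark(V):
    """Dark current density, Eq. (48): ideal diode scaled by eta_col(V)."""
    return eta_col(V) * J0 * (np.exp(V / kT_q) - 1.0)

for v in np.linspace(0.0, 0.7, 8):
    print(f"V = {v:4.2f} V   eta_col = {eta_col(v):5.3f}   J_dark = {J_dark(v):9.3e}")
```

Within this toy parametrization, \(\eta_{\text{col}}\) stays close to unity at low bias and drops as \(V\) approaches \(V_{bi}\), consistent with the first-order nature of the loss.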
The device-intrinsic current density \(J(V)\) (considered above) is related to the external current density \(J_{\rm meas}\), measured as a function of \(V_{\rm ext}\), via \[J_{\rm meas}(V_{\rm ext})=J(V)+\frac{\nu}{\mathcal{R}_{\rm shunt}}\,, \tag{51}\] where \(V=V_{\rm ext}-J_{\rm meas}\mathcal{R}_{\rm series}\). Here, \(\mathcal{R}_{\rm series}\) is the combined series resistance (in units of \(\Omega\)cm\({}^{2}\)) of the external circuit and the electrodes and \(\mathcal{R}_{\rm shunt}\) is the shunt resistance (in \(\Omega\)cm\({}^{2}\)). While voltage losses induced by a finite series resistance become prevalent when the magnitude of the current density is large (high intensity and/or high forward bias), the effect of the shunt is generally important at low current densities and light intensities (dominating the measured dark current density in the reverse bias). The presented theoretical treatment for \(J(V)\) applies to undoped thin-film diode devices with ohmic contacts, assuming bimolecular recombination to be the dominant recombination mechanism. This corresponds to situations where additional loss channels induced by trap-assisted recombination and surface recombination of minority carriers at the contacts are negligible. If there is a sufficiently large number of traps in the active layer, however, the resultant trap-assisted recombination generally gives rise to an increased charge collection loss [45, 53, 54, 55], which depends on the trap energy and distribution. Similar to doping in the active layer [56], trapped charge carriers may also induce considerable electric field screening inside the active layer [53, 57], further reducing the photocurrent. Additionally, if the contacts are non-selective for the extraction of majority carriers, additional charge collection losses and reductions in the open-circuit voltages caused by surface recombination of minority carriers may occur. This loss is expected to be particularly prominent in case of energetically unoptimized contacts and/or large carrier mobilities [58, 59, 60, 61]. Finally, it should be stressed that the theoretical analysis is limited to thin active layers with relatively uniform charge carrier generation rates. While the absorption rate of photons inside the active layer is decays exponentially with the distance from the transparent electrode, the combined effect of optical interference and back reflection from the reflective counter electrode results in smooth generation rates in (optically) thin active layers. For thick active layers, however, the photon absorption rate in the active layer is concentrated near the transparent contact resulting in a highly non-uniform charge carrier generation and asymmetric charge extraction [62, 63, 64, 65]. Furthermore, thick active layers are generally more sensitive to space charge effects induced by photogenerated carriers, trapped charge carriers, or (unintentional) dopants. ## 4 Conclusions In conclusion, we have presented an analytic framework for describing the device physics of charge collection in thin-film OPVs with ohmic contacts. Based on this framework, a diode equation describing the \(J\)-\(V\) characteristics is derived, showing excellent agreement with numerical simulations. The presented framework is further employed to analytically understand what limits the charge collection efficiency, FF, and ultimately the PCE in organic solar cells. 
The obtained findings provide vital physical insights into the interplay between charge carrier extraction, injection, and bimolecular recombination and how these processes influence the device performance of OPV devices. The developed diode equations are not limited to organics but apply generally to sandwich-type thin-film devices based on low-mobility semiconductors with ohmic contacts.
2304.10699
Confined states and topological phases in two-dimensional quasicrystalline $π$-flux model
Motivated by topological equivalence between an extended Haldane model and a chiral-$\pi$-flux model on a square lattice, we apply $\pi$-flux models to two-dimensional bipartite quasicrystals with rhombus tiles in order to investigate topological properties in aperiodic systems. Topologically trivial $\pi$-flux models in the Ammann-Beenker tiling lead to massively degenerate confined states whose energies and fractions differ from the zero-flux model. This is different from the $\pi$-flux models in the Penrose tiling, where confined states only appear at the center of the bands as is the case of a zero-flux model. Additionally, Dirac cones appear in a certain $\pi$-flux model of the Ammann-Beenker approximant, which remains even if the size of the approximant increases. Nontrivial topological states with nonzero Bott index are found when staggered tile-dependent hoppings are introduced in the $\pi$-flux models. This finding suggests a new direction in realizing nontrivial topological states without a uniform magnetic field in aperiodic systems.
Rasoul Ghadimi, Masahiro Hori, Takanori Sugimoto, Takami Tohyama
2023-04-21T02:01:02Z
http://arxiv.org/abs/2304.10699v2
# Confined states and topological phases in two-dimensional quasicrystalline \(\pi\)-flux model ###### Abstract Motivated by topological equivalence between an extended Haldane model and a chiral-\(\pi\)-flux model on a square lattice, we apply \(\pi\)-flux models to two-dimensional bipartite quasicrystals with rhombus tiles in order to investigate topological properties in aperiodic systems. Topologically trivial \(\pi\)-flux models in the Ammann-Beenker tiling lead to massively degenerate confined states whose energies and fractions differ from the zero-flux model. This is different from the \(\pi\)-flux models in the Penrose tiling, where confined states only appear at the center of the bands as is the case of a zero-flux model. Additionally, Dirac cones appear in a certain \(\pi\)-flux model of the Ammann-Beenker approximant, which remains even if the size of the approximant increases. Nontrivial topological states with nonzero Bott index are found when staggered tile-dependent hoppings are introduced in the \(\pi\)-flux models. This finding suggests a new direction in realizing nontrivial topological states without a uniform magnetic field in aperiodic systems. _Introduction-_ The discovery of exotic phases in quasicrystals, such as quantum criticality [1], superconductivity [2], and magnetism [3], has stimulated further research about the role of aperiodicity in establishing these phenomena. Although quasicrystals lack translational symmetry [4; 5; 6], they possess unique structural properties such as self-similarity, originating from their higher-dimensional structures. Recent advances in realizing quasicrystals enable researchers to analyze aperiodicity in combination with other physical phenomena [7; 8; 9; 10]. Further efforts are needed to understand and explore classical and quantum phases in such systems [11; 12; 13; 14; 15; 16; 17] including topological phases of matter, which have become a central issue in various fields of condensed matter physics [18; 19; 20]. Interestingly, aperiodicity in quasicrystals does not prevent them from hosting topological phases [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] including Chern insulators [36; 37; 38; 39; 40; 41; 42; 43; 44], topological superconductors [45; 46; 47; 48; 49], non-Hermitian topological phases [50; 51], and higher-order topological phases [52; 53; 54; 55]. In this Letter, we present a new proposal for generalizing the Haldane model [56] on a honeycomb lattice to quasicrystalline systems. The Haldane model describes nontrivial topological states without an external magnetic field. While previous studies have generalized this model to quasicrystalline systems [41; 42], a procedure similar to a conventional generalization using \(\pi\)-flux models [57; 58; 59; 60; 61] applied to other translational systems such as a square lattice has not been employed. We extend the \(\pi\)-flux model to two-dimensional bipartite quasicrystals with rhombus tiles, where \(\pi\) flux penetrates tiles with specific configurations. As examples of such quasicrystalline systems, we analyze two quasicrystals, Penrose tiling, and Amman-Beenker tiling [62]. We find that in contrast to zero flux limit, \(\pi\)-flux configurations in the Amman-Beenker tiling introduce massively degenerate confined states whose energies are not at the center of energy bands. 
These states are strictly localized, similar to well-known confined states [63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75] and dispersionless flat bands of periodic kagome and Lieb lattices, etc. [76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86]. The massive degeneracy of the confined states is related to the self-similarity of quasicrystals, where local patterns repeat through quasicrystals due to Conway's theorem [68]. Accordingly, if a localized wave function appears in a small part of a given quasicrystal, it also appears in the repeated regions, leading to massively degenerate confined states. On the other hand, in the Penrose tiling, confined states always appear at the center of energy bands regardless of the presence of \(\pi\)-flux, and their number remains constant despite modifications to their wave function induced by the \(\pi\)-flux model. This is related to the local violation of bipartite neutrality in the Penrose tiling [87]. We investigate the confined states by developing a method to obtain their localized wave function and analyzing their fraction in different \(\pi\)-flux configurations. Furthermore, we discover Dirac cones in the energy dispersion of a particular \(\pi\)-flux configuration in Ammann-Beenker approximants. We find nontrivial topological states in all of the \(\pi\)-flux configurations in both Penrose tiling and Ammann-Beenker tiling when we introduce staggered tile-diagonal hopping (STDH). We confirm the emergence of topological states in the \(\pi\)-flux models with STDH by calculating the Bott index and edge modes. Therefore, this model can be a new proposal for the generalization of the Haldane model to aperiodic systems in realizing nontrivial topological states without a uniform magnetic field. In the following, we first focus on the Ammann-Beenker tiling and then return to the Penrose tiling at the end. _Structure_- Ammann-Beenker tiling consists of two different tiles (see Fig. 1(a)) [88]. We regard vertices of the tiles as positions where electrons can hop through the edges of tiles. This is known as the vertex model [89, 15]. This model can be studied by using Ammann-Beenker approximants, or equivalently supercell (see Fig. 1(b)) [90, 91, 92], where periodic boundary conditions are employed at the edges of the supercell. The size of quasicrystal approximants can be increased by increasing the approximant generation \(g\). By definition, an approximant supercell contains full information of a given quasicrystal when \(g\rightarrow\infty\). In order to reduce boundary effects and to perform Bott index calculations, we use approximants with \(g=4\), which gives \(N_{t}=1393\) total vertices. The vertex model for the Ammann-Beenker tiling has bipartite properties. This means that we can decompose vertices of the Ammann-Beenker tiling into two sets, A and B, without any edges connecting the same set [93]. As a result, starting from a given vertex, we cannot make a loop with an odd number of edges. However, our method (see Ref. [92]) to generate the approximant of the Ammann-Beenker tiling violates bipartite properties at the boundaries (e.g., a ① → ② → ① and ① path in Fig. 1(b)).
Nonetheless, we can restore bipartite properties by constructing a new approximant made from \(2\times 2\) non-bipartite approximants. In the following, we will use this new approximant to study the Ammann-Beenker tiling. _General Hamiltonian-_ The Hamiltonian for electrons in the Ammann-Beenker tiling with an arbitrary flux distribution reads \[H=\sum_{\langle i,j\rangle}te^{iA_{ij}}\left|i\right\rangle\left\langle j\right|, \tag{1}\] where \(t\) and \(A_{ij}=-A_{ji}\) are the hopping amplitude and the Peierls phase, respectively. \(\langle i,j\rangle\) indicates all edges of the Ammann-Beenker tiling [41] and \(A_{ij}\) gives a complex phase for the hopping. The total flux penetrating each face or tile is given by the total phases that an electron can accumulate along edges, i.e., \[\phi_{i,j,k,l}=A_{ij}+A_{jk}+A_{kl}+A_{li}, \tag{2}\] where \(i\), \(j\), \(k\), and \(l\) represent vertices on a given tile and are arranged in an anticlockwise way such as \(i\to j\to k\to l\to i\). _Zero flux case-_ Let us first review the zero flux limit, where we set \(A_{ij}=0\) in Eq. (1). In Fig. 1(c), we show the energy levels and density of states (DOS) of the zero-flux Ammann-Beenker approximant with \(N_{t}=2\times 2\times 1393\). Both energy levels and DOS are symmetric with respect to the energy \(E=0\), reflecting the bipartite nature. A prominent peak exists in DOS at \(E=0\), which corresponds to the presence of confined states [70]. In the Ammann-Beenker tiling, the fraction of confined states \(p\) (the number of them divided by \(N_{t}\)) is given by \(p=p_{E=0}=1/2\tau_{s}^{2}\approx 0.086\)[70], where \(\tau_{s}=1+\sqrt{2}\) is the silver ratio. Since the confined states in the Ammann-Beenker tiling are fragile, they generally disperse upon introducing other terms into the Hamiltonian [70]. Furthermore, the confined states are mixed with other states. This may be related to the fact that increasing the system size generates a new confined state with larger extension [70].

Figure 1: (a) Ammann-Beenker tiling. The indicated area with green dashed edges refers to a symmetric domain of tiles called \(D_{1}\) that is used in TABLE. I. (b) An example of Ammann-Beenker tiling approximant (\(g=1\)). The numbers inside circles show approximant sublattice indices. Blue dots show the origin of a unit-cell and blue arrows represent the basis vectors of the unit-cell, \(\vec{a}_{1}\), \(\vec{a}_{2}\). (c) Energy levels (black points) sorted from lowest to highest energies versus index of levels, \(1\) to \(N_{t}\) in the bottom horizontal axis and DOS (blue lines) for a zero-flux model in an Ammann-Beenker approximant [\(N_{t}=2\times 2\times 1393\)]. (d) Edges of the Ammann-Beenker tiling drawn by arrows starting from a vertex of the set A. The colored arrows correspond to those in (e). (e) Phase \(A_{n}\) (\(=\pi/2\), \(n=1,\ldots,4\)) to construct quasicrystalline \(\pi\)-flux models. The direction of arrows corresponds to \(\hat{R}_{n}\) (see the text).

_Quasicrystalline \(\pi\)-flux model_- Following the \(\pi\)-flux model on a square lattice, which introduces \(\pi\)-flux systematically [57] (see Sec. S1 in the Supplemental Material (SM) [94]), we assume the same phase \(A_{ij}\) for edges with the same orientation, where the edges start from a vertex of the set A. However, for the edges starting from a vertex of set B, we use the same phase but with the opposite sign to keep the hermiticity of the Hamiltonian.
Therefore, it is useful to define phase \(A_{n}\) for the edges starting from a vertex of the set A with a directional vector \(\hat{R}_{n}=\pm(\cos(2\pi n/8),\sin(2\pi n/8))\), where \(n=1,\ldots,4\). The quasicrystalline \(\pi\)-flux model is defined by setting some (but not all) of \(A_{n}\) to \(\pm\pi/2\) and others to zeros. Note that as fluxes \(\{\phi_{i,j,k,l}\}\) are defined modules to \(2\pi\), setting all \(A_{n}\) to \(\pm\pi/2\) results in a zero flux model. In Fig. 1(d), we present the Ammann-Beenker tiling with attached arrows that are defined based on \(A_{n}\) (see Fig. 1(e)). We note that one can construct this arrow-attached tiling using inflation rules (see Sec. S2 in the SM [94]). Furthermore, it is worth mentioning that the arrows with the same colors construct structures similar to Ammann lines in the Ammann-Beenker tiling (for example, see green arrows in Fig. 2 (a1)) [62]. _Confined states in Ammann-Beenker \(\pi\)-flux model_-The Ammann-Beenker tiling allows various \(\pi\)-flux configurations due to possible choices of nonzero \(A_{n}\). In the first column of Fig. 2, we show three possible configurations of \(\pi\)-flux in the Ammann-Beenker tiles. In the following, we name the three cases AB.conf1, AB.conf2, and AB.conf3, as shown in Figs. 2(a1), 2(a2), and 2(a3), respectively. Other configurations can be obtained by rotation and/or translation of the three configurations. The AB.conf1 is obtained by choosing \(A_{4}=\pi/2\) and \(A_{n\neq 4}=0\); AB.conf2 is by \(A_{1}=A_{2}=\pi/2\) and \(A_{n\neq 1,2}=0\); and AB.conf3 is by \(A_{1}=A_{3}=\pi/2\) and \(A_{n\neq 1,3}=0\). These \(\pi\)-flux configurations lead to confined states with different energies and fraction. We plot energy levels and DOS for AB.conf1, AB.conf2, and AB.conf3, in Figs. 2(a2), 2(b2), and 2(c2), respectively. Confined states appear at \(E=\pm\sqrt{2}t\) for AB.conf1, at \(E=0\) and \(\pm 2t\) for AB.conf2, and \(E=0\) and \(\pm 2\sqrt{2}t\) for AB.conf3. We can confirm that confined states actually consist of localized wave functions after some manipulations of degenerate eigenfunctions numerically obtained by exact diagonalization. Usually such eigenfunctions, i.e., wave functions, give finite overlap at the same vertex. This means that they are not localized wave functions. However, one can construct localized wave functions by recombining the degenerate wave functions. For this purpose, we define inverse participant ratio (IPR) given by \(\sum_{i}|\Psi_{i}|^{4}/\sum_{i}|\Psi_{i}|^{2}\), where \(\Psi_{i}\) is the \(i\)-th vertex component of a variational wave function composed of linearly combined degenerate wave functions. IPR gives a measure of the extension of a given wave function. For instance, IPR for extended Bloch states in translationally invariant systems is zero, while it gives 1 for atomic wavefunctions [95; 49]. Maximizing IPR, we can obtain localized wave functions corresponding to confined states (see Sec. S4 in the SM [94]). After confirming a localized wave function for each confined state, we perform an analysis similar to Ref. [70] to obtain the fraction \(p\) of the confined states. We conjecture \(p\) for each configuration as \(p=p_{E=\pm\sqrt{2}t}^{\rm AB.conf1}=\tau_{s}^{-4}\approx 0.029\), \(p=p_{E=0}^{\rm AB.conf2}\approx 0.061\), \(p=p_{E=\pm 1}^{\rm AB.conf2}=198-82\tau_{s}\approx 0.034\), \(p_{E=0}^{\rm AB.conf3}=2675-2\tau_{s}\approx 0.035\), \(p_{E=0}^{\rm AB.conf3}=198-82\tau_{s}\approx 0. 
\(1108\tau_{s}\approx 0.051\), and \(p=p_{E=\pm 2\sqrt{2}t}^{\rm AB.conf3}=\tau_{s}^{-4}\approx 0.029\) (see TABLE. I, and Sec. S6 in the SM [94]). _Topological phase-_ It has been known that topological states emerge in a periodic \(\pi\)-flux model on a square lattice if STDH is set to be \(\pm t_{d}\) for tiles with \(\pm\pi\) fluxes [57]. We introduce the same STDH in our Ammann-Beenker \(\pi\)-flux configurations. However, in our Ammann-Beenker \(\pi\)-flux model, we obtain some tiles without any flux and we assume no diagonal hopping for them. We plot energy levels and DOS for \(t_{d}=0.3t\) in the third column of Fig. 2 for the three \(\pi\)-flux configurations. In all cases except AB.conf2, introducing STDH disperses the confined states for \(t_{d}=0\). We found that in AB.conf2 actually STDH disperse approximately half of the confine states for \(t_{d}=0\) and their fraction is approximately given by \(p=p_{E=0,t_{d}\neq 0}^{\rm AB.conf2}=1/2\tau_{s}^{3}\approx 0.035\) (see TABLE. I). Additionally, nonzero \(t_{d}\) develops new gaps in their energy-level spectrum. We calculate the Bott index \(\nu_{\rm Bott}\)[96; 97; 98] (see Sec. S3 in the SM [94]), which is equivalent to the Chern number used in periodical structures, for all energy gaps with gap value of \(\Delta E>0.04t\). We find that some energy gaps have a nonzero topological invariant \(|\nu_{\rm Bott}|=1\) as indicated by red dots in the third column of Fig. 2. We also confirm the existence of edge modes for these nontrivial gaps under the open boundary condition, which are presented in the last column of Fig. 2. In the fourth column of Fig. 2, we show energy levels as a function of \(t_{d}\). The green and red dots in these figures show trivial \(\nu_{\rm Bott}=0\) and topological \(|\nu_{\rm Bott}|=1\) gaps, respectively, evaluated by the Bott index. In all configurations, a sufficient amount of \(t_{d}\) induces topological states by opening new topological gaps. We note that STDH is essential in the establishment of topological states. For instance, in the fifth column of Fig. 2, we calculate the Bott index in the presence of non-staggered tile-diagonal hopping (NSTDH), i.e., \(+t_{d}\) for all tiles with \(\pm\pi\) flux. Since the value of the Bott index is zero, none of the tiny gaps have a nontrivial topological state. _Dirac cone-_ As shown in Fig. 2(b4), nonzero STDH in AB.conf2 can induce energy gaps with nontrivial topological Bott index. Note that in this case, increasing STDH adiabatically (without gap closing) connects the trivial energy gap (with \(v_{\rm Bott}=0\)) located around \(E\approx\pm 1.1t\) with a topological gap (with \(v_{\rm Bott}\neq 0\)) [see Fig. 2(b4)]. In Fig. 3(a), we compare the resulting energy spectrum using periodic boundary conditions (PBC) and open boundary conditions (OBC) and find states inside the gap under PBC irrespective of STDH. We plot wave function distribution for different STDH in Fig. 3(b) and observe that edge modes become visible with increasing \(t_{d}\), leading to clear edge mode in Fig. 2(b6). 
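Returning to the Bott-index diagnostics used above, a minimal real-space sketch is given below. It follows the standard projector-based construction for a finite sample with periodic boundary conditions; the recipe actually used in the paper (Sec. S3 of its SM) may differ in details, and the Hamiltonian matrix `H`, the vertex coordinates `x`, `y` folded into the supercell and rescaled to \([0,1)\), and the Fermi energy are assumed inputs.

```python
import numpy as np

# Sketch of a Bott-index evaluation for a finite periodic sample.
# Inputs (assumed): H is the N x N Hamiltonian, x and y are vertex
# coordinates rescaled to [0, 1), e_fermi sits inside the gap of interest.

def bott_index(H, x, y, e_fermi):
    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, evals < e_fermi]          # occupied eigenvectors
    P = occ @ occ.conj().T                   # projector onto occupied states
    N = H.shape[0]
    Ux = np.diag(np.exp(2j * np.pi * x))     # exponentiated position operators
    Uy = np.diag(np.exp(2j * np.pi * y))
    # project the position exponentials into the occupied subspace
    U = P @ Ux @ P + (np.eye(N) - P)
    V = P @ Uy @ P + (np.eye(N) - P)
    W = V @ U @ V.conj().T @ U.conj().T
    # Bott index: (1/2*pi) Im tr log(V U V^dag U^dag)
    return np.imag(np.sum(np.log(np.linalg.eigvals(W)))) / (2 * np.pi)
```

As noted below, reliable values require both a sufficiently large gap and a sufficiently large sample.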
Note \begin{table} \begin{tabular}{c||c c c c c c c} & \(N_{1}^{\rm net}\) & \(N_{2}^{\rm net}\) & \(N_{3}^{\rm net}\) & \(N_{4}^{\rm net}\) & \(N_{5}^{\rm net}\) & \(N_{i}^{\rm net}\) & \(p\) \\ \hline \hline zero flux & \multirow{2}{*}{2} & \multirow{2}{*}{6} & \multirow{2}{*}{12} & \multirow{2}{*}{20} & \multirow{2}{*}{30} & \multirow{2}{*}{\(i(i+1)\)} & \multirow{2}{*}{\(1/2\tau_{s}^{2}\)} \\ \(E=0\) & & & & & & & & \\ \hline AB.conf1 & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{\(1/\tau_{s}^{4}\)} \\ \(E=\pm\sqrt{2}t\) & & & & & & & & \\ \hline AB.conf2 & \multirow{2}{*}{1} & \multirow{2}{*}{4} & \multirow{2}{*}{15} & \multirow{2}{*}{68} & \multirow{2}{*}{347} & \multirow{2}{*}{?} & \multirow{2}{*}{\(\approx 0.061\)} \\ \(E=0\) & & & & & & & & \\ \hline AB.conf2 & \multirow{2}{*}{1} & \multirow{2}{*}{2} & \multirow{2}{*}{3} & \multirow{2}{*}{4} & \multirow{2}{*}{5} & \multirow{2}{*}{1} & \multirow{2}{*}{\(1/2\tau_{s}^{3}\)} \\ \(E=0,t_{d}\neq 0\) & & & & & & & \\ \hline AB.conf2 & \multirow{2}{*}{1} & \multirow{2}{*}{2} & \multirow{2}{*}{2} & \multirow{2}{*}{2} & \multirow{2}{*}{2} & \multirow{2}{*}{\(198-82\tau_{s}\)} \\ \(E=\pm 2t\) & & & & & & & \\ \hline AB.conf3 & \multirow{2}{*}{1} & \multirow{2}{*}{5} & \multirow{2}{*}{7} & \multirow{2}{*}{7} & \multirow{2}{*}{7} & \multirow{2}{*}{\(2675-1108\tau_{s}\)} \\ \(E=0\) & & & & & & & \\ \hline AB.conf3 & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{\(1/\tau_{s}^{4}\)} \\ \(E=\pm 2\sqrt{2}t\) & & & & & & & \\ \end{tabular} \end{table} Table 1: The number of newly generated confined states \(N_{j}^{\rm net}\) in the \(D_{j}\) region for the Ammann-Beenker tiling, where \(D_{j}\) defines a cluster of tiles that appears after \((j-1)\)-th inflation of \(D_{1}\) indicated by green in Fig. 1(a). \(N_{j}^{\rm net}\) is calculated for \(j=1,\ldots,5\) and presented in each column for confined states with energy \(E\) in a given flux configuration. For \(j>5\), we use speculated \(N_{j}^{\rm net}\) expected from the calculated \(N_{j}^{\rm net}\) for \(j=1,\ldots,5\) (see Ref. [70] and Sec. S6 in the SM [94]). The? mark means that it is difficult to speculate the number. The last column represents the fraction of confined states defined by \(p=\sum_{j}p_{j}N_{j}^{\rm net}\), where \(p_{j}=2\tau_{s}^{-(2j+3)}\) is the fraction for the \(D_{j}\) region. The fraction of confined states \(p\) for AB.conf2 and \(E=0\), is calculated numerically for a Ammann-Beenker tiling with \(N_{t}\approx 2\times 10^{4}\) vertices. Figure 3: (a) Energy spectrum with OBC (red lines) and PBC (blue lines) for AB.conf2 as a function of STDH \(t_{d}\). (b) The distribution of a wave function with energy \(E\approx 1.1t\) with OBC for different \(t_{d}\). In (a) and (b) we use the same system as we considered in Fig. 2(b4). (d), (e), (f) Band dispersion of a \(\pi\)-flux AB.conf2 model along high symmetry line for \(t_{d}=0\) (blue) and \(t_{d}=0.1t\) (red) with different approximant sizes. (d) \(g=1\), (e) \(g=2\), and (f) \(g=3\). In the inset of (f) we plot Brillouin zone (BZ) and indicate \(\vec{k}_{\Gamma}=0\), \(\vec{k}_{M}=\frac{1}{2}\vec{b}_{1}+\frac{1}{2}\vec{b}_{2}\), and \(\vec{k}_{K}=\frac{1}{2}\vec{b}_{1}\). 
that the bulk-boundary correspondence of the topological phases is obtained by considering different twisted boundary conditions. To see this, we examine how energy dispersion (corresponding to different twisted boundary conditions) around \(E\approx\pm 1.1t\) behaves when we take an approximant of AB.conf2 as a unit-cell of translationally invariant system with the basis vectors, \(\vec{a}_{1}\) and \(\vec{a}_{2}\), given in Fig. 1(b2). We apply the Fourier transformation by \(\ket{i}=1/N_{t}\sum_{\vec{k}}\exp(i\vec{r_{i}}\cdot\vec{k})\ket{\vec{k}}\) on Eq. (1), which gives momentum-dependent Hamiltonian \(H(\vec{k})\) written by an \(N_{t}\times N_{t}\) matrix. The energy dispersion is obtained by diagonalizing \(H(\vec{k})\), where we denote it by \(\epsilon_{\lambda=\{1,\ldots,N_{t}\}}(\vec{k})\). In Fig. 3, we plot \(\epsilon_{\lambda}(k)\) with blue (red) color at \(t_{d}=0\) (\(t_{d}=0.1t\)) along high symmetry lines for different approximant size. In Fig. 3(a), we find several two-fold degenerate Dirac cones with a liner dispersion at \(\vec{k}_{M}=\frac{1}{2}\vec{b}_{1}+\frac{1}{2}\vec{b}_{2}\), where \(\vec{b}_{1}\) and \(\vec{b}_{2}\) are reciprocal vectors. The Dirac cones that are located at \(E\approx\pm 1.1t\) become gapped after introducing STDH. This is in accordance with our previous result of the emergence of topological gaps in Fig. 2(b4). Increasing system size induces more bands, and therefore, the velocity of these Dirac cones decreases due to the band repulsion from other states. Interestingly, the Dirac cones do not repeat with increasing the approximant size, indicating no band folding of the cones. Note that both the energy gap and system size should be large enough for the calculation of the Bott index [96; 97]. However, as we found in the previous paragraph, increasing system size decreases the energy gap at \(t_{d}=0\). This means that the calculation of the Bott index at \(t_{d}=0\) is inapplicable to any size of the Ammann-Beenker structure. _Penrose tiling-_ The Penrose tiling without \(\pi\)-flux shows zero energy confined states [68] (see Fig. 4(a)), originating from a local imbalance of bipartite neutrality within a given cluster [87]. The confined states are separated by a gap and its fraction is given by \(p=p_{E=0}=-50\tau_{g}+81\approx 0.098\)[15; 68], where \(\tau_{g}=(1+\sqrt{5})/2\) is the golden ratio. The imbalance of bipartite neutrality in neighboring clusters alternates between two bipartite sets and is separated by forbidden ladders. As confined states originate from local bipartite imbalance fluctuations, random flux cannot alter the energy and fraction of the original confined states despite significant modification of their wave functions [87]. We extend our \(\pi\)-flux approach to the Penrose tiling by defining \(A_{n}\) (see \(A_{n=1,\ldots,5}\) in Fig. 4(c) and their attachment on the edge of the Penrose tiling in Fig. 4(b)). We confirm that the number of the confined states are unchanged (see Sec. S5 in the SM [94]) for any of three possible \(\pi\)-flux configurations (see Figs. 4(d1), 4(e1), and 4(f1)). Additionally, we find that introducing sufficient STDH in the Penrose \(\pi\)-flux model gives rise to nontrivial topological states (see Figs. 4(d2), 4(e2), and 4(f2)). _Summary-_ In summary, we have extended the two-dimensional \(\pi\)-flux model on a square lattice to quasicrystalline bipartite tilings and made a new proposal for an aperiodic Haldane model by introducing staggered tile-diagonal hopping. 
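A minimal sketch of the approximant band-structure calculation described above is given below, assuming the tight-binding data are available as an edge list; the variable names and the edge-list format are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Sketch: Bloch Hamiltonian H(k) for an approximant treated as a unit cell.
# Each (assumed) edge entry (i, j, phase, R) is a directed hop i -> j with
# Peierls phase A_ij and lattice translation R = (R1, R2) in units of the
# basis vectors a1, a2; R = (0, 0) for edges inside the supercell.

def bloch_hamiltonian(edges, n_sites, k, a1, a2, t=1.0):
    H = np.zeros((n_sites, n_sites), dtype=complex)
    for i, j, phase, R in edges:
        shift = R[0] * np.asarray(a1) + R[1] * np.asarray(a2)
        hop = t * np.exp(1j * phase) * np.exp(1j * np.dot(k, shift))
        H[i, j] += hop
        H[j, i] += np.conj(hop)              # Hermitian partner
    return H

def dispersion(edges, n_sites, kpath, a1, a2):
    """Eigenvalues eps_lambda(k) along a list of k-points."""
    return np.array([np.linalg.eigvalsh(bloch_hamiltonian(edges, n_sites, k, a1, a2))
                     for k in kpath])
```

Scanning such a k-path between the high-symmetry points quoted above reproduces the kind of dispersion plotted in Fig. 3(d)-(f), with the STDH term entering as additional (diagonal) edges in the edge list.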
We have found that the \(\pi\)-flux model leads to a new massively degenerate confined state in the Ammann-Beenker tiling, while confined states in the Penrose tiling remain the same as the vertex model without flux. Additionally, we have discovered massless Dirac cones in a specific \(\pi\)-flux configuration in the Ammann-Beenker approximant. Our results on the quasicrystalline \(\pi\)-flux model warrant further exploration through innovative techniques such as topological circuits [55; 99], and photonics [100]. R.G. was supported by the Institute for Basic Science in Korea (Grant No. IBS-R009-D1), Samsung Science and Technology Foundation under Project Number SSTF-BA2002-06, the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.2021R1A2C4002773, and No. NRF-2021R1A5A1032996). M.H. was supported by JST SPRING (Grant No. JPMJSP2151). T.S. was supported by the Japan Society for the Promotion of Science, KAKENHI (Grant No. JP19H05821). Figure 4: Penrose \(\pi\)-flux model. (a) Energy spectrum (black points) and DOS (blue lines) for approximant of Penrose tiling without flux with \(g=8\) [\(N_{t}=9349\)]. (b) Penrose tiling where arrows are attached. (c) \(A_{n}\) for the Penrose tiling. (d1), (e1), (f1): Possible Penrose \(\pi\)-flux models. (d2), (e2), (f2): corresponding energy spectra as a function of STDH \(t_{d}\), where red (green) dots indicate topologically nontrivial (trivial) gaps
2308.01441
The Fission Fragment Rocket Engine for Mars Fast Transit
In this paper we discuss the advantages and challenges of utilizing Fission Fragment Rocket Engines (FFREs) to dramatically reduce transit time in space travel, for example, traveling to Mars. We discuss methods to decrease the size and weight of FFREs. These include utilizing metallic deuterides as moderators, driving the engines with electron beam bremsstrahlung, and operating the FFREs as subcritical assemblies, not as nuclear reactors. We discuss these and other new innovations based upon improved materials and technology that may be integrated into a revolutionary nuclear rocket technology.
John Gahl, Andrew K. Gillespie, Cuikun Lin, R. V. Duncan
2023-08-02T21:30:44Z
http://arxiv.org/abs/2308.01441v1
# The Fission Fragment Rocket Engine for Mars Fast Transit ###### Abstract In this paper we discuss the advantages and challenges of utilizing Fission Fragment Rocket Engines (FFREs) to dramatically reduce transit time in space travel, for example, traveling to Mars. We discuss methods to decrease the size and weight of FFREs. These include utilizing metallic deuterides as moderators, driving the engines with electron beam bremsstrahlung, and operating the FFREs as subcritical assemblies, not as nuclear reactors. We discuss these and other new innovations based upon improved materials and technology that may be integrated into a revolutionary nuclear rocket technology. fission, fragment, nuclear, rocket, fast, transit, engine, Mars ## I Introduction Since the 1950's, nuclear rockets were developed primarily at Los Alamos National Laboratory to produce faster methods for space travel. These technologies utilize nuclear designs that transfer heat from a sealed core to liquid hydrogen expanders or thermionic converters in a conventional manner. Beginning in the 1980s, a more efficient nuclear energy transfer design emerged for rocketry, one which exposes the core for direct nuclear fragment thrust once the rocket is well outside the Earth's atmosphere. From FY11 through FY14, the NASA Institute for Advanced Concepts studied the Fission Fragment Rocket Engine (FFRE). The FFRE would directly convert the momentum of fission fragments into spacecraft momentum at extremely high specific impulse (\(I_{\rm SP}\)). The FFRE combined the existing technologies of low-density dust trapped electrostatically (dusty plasmas) with high field magnets for fission fragment control. By designing the nuclear core to permit sufficient mean free path for the escape of fission fragments, the study showed the FFRE could convert nuclear power to thrust directly and efficiently at a delivered \(I_{\rm SP}\) of 527,000 seconds. The study showed further that, without increasing the reactor power, adding a neutral gas to the fission fragment beam significantly increased the FFRE thrust. This frictional interaction of gas and beam resulted in an engine that would continuously produce nearly 4.5 kN of thrust at a delivered impulse of 32,000 seconds, thereby reducing the (then expected) round trip mission to Mars from 3 years to 260 days. These studies concluded that the engine and spacecraft were within current technological capabilities, could be built and launched, though the FFRE engines would be very large. New materials, and recent advances in technology, promise to make FFREs much more compact and reliable. For manned transit to Mars, speed is of the essence to minimize radiation exposure and maximize human health, as discussed in other articles in this volume. Nuclear fuel, with several million times the energy density of any chemical fuel, holds the promise of fast transit to Mars. Typically, two types of nuclear propulsion are considered for space flight: thermal propulsion and electric propulsion. In thermal propulsion, a coolant that prevents reactor melt-down doubles as a propellant. This gas is heated by direct interaction with the fuel, which limits the gas heating to a temperature below 3000 K, where typical fission fuel melts. A metric of the efficiency of a rocket engine is the specific impulse or \(I_{\rm SP}\). The \(I_{\rm SP}\) is thrust (force) of the engine divided by the weight flow of the propellent. The higher the \(I_{\rm SP}\), the more efficient the engine. 
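As a rough illustration of why \(I_{\rm SP}\) matters so much for fast transit, the Tsiolkovsky rocket equation can be evaluated for the specific-impulse values quoted in this paper; the propellant mass ratio assumed below is purely illustrative.

```python
import math

# Back-of-the-envelope sketch: delta-v from the Tsiolkovsky rocket equation
# for the Isp values discussed in this paper (chemical ~450 s, nuclear
# thermal ~900 s, gas-augmented FFRE 32,000 s, pure FFRE 527,000 s).
# The initial/final mass ratio is an illustrative assumption.

G0 = 9.80665                      # standard gravity [m/s^2]
MASS_RATIO = 4.0                  # assumed initial/final mass ratio

for label, isp in [("chemical", 450), ("nuclear thermal", 900),
                   ("FFRE + gas", 32_000), ("pure FFRE", 527_000)]:
    v_exhaust = isp * G0                          # effective exhaust velocity
    delta_v = v_exhaust * math.log(MASS_RATIO)    # Tsiolkovsky rocket equation
    print(f"{label:16s} Isp = {isp:7d} s   delta-v = {delta_v/1000.0:10.1f} km/s")
```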
In nuclear thermal propulsion \(I_{\rm SP}\) of approximately 900 s can be achieved, compared to perhaps half that for the best chemical engines. While certainly an improvement, with nuclear energy's enormous energy density, conventional nuclear thermal propulsion could benefit from technological innovation. In electric propulsion, the nuclear reactor is configured to produce electricity, and that electricity is used to drive a conventional ion or plasma source that provides propulsion. These sources tend to have high \(I_{\rm SP}\), but low thrust. Thrust, approximately the velocity of ejected propellent times mass flow, is required to lift a rocket out of a gravity well. For a mission to Mars, a rocket engine would ideally have the capacity to produce high thrust (for leaving the earth, moon, or mars) and then high \(I_{\rm SP}\) (high efficiency) for the interplanetary transit. As an analogy, this is similar to an automobile starting with a low gear and high torque, finally traveling at high speed in a high gear. In a reactor producing electricity, Carnot efficiency requires a large difference in output temperature compared to input temperature. In a vacuum, space craft have difficulty rejecting heat, requiring large radiators. Increasing input temperature is again limited by the melting temperature of the nuclear fuel. For both of these technologies, nuclear electric propulsion or nuclear thermal propulsion, performance enhancement can be achieved through dealing with the fuel heating problem. If we could heat the propellent to a higher temperature than the melting point of the fuel, then we could achieve higher thrust. If we could operate the nuclear reactor at a temperature well above the melting point of the nuclear fuel, Carnot efficiency could also be improved. ## II. Fission Fragment rocket engine Fission fragment nuclear propulsion was proposed by George Chapline of Lawrence Livermore National Laboratory in the late 1980's.[1, 2, 3] The propulsion method depended on the fission fragments of a nuclear reaction partially escaping the fissile fuel, mitigating the heating of the fuel. When a fissile material undergoes fission, two daughter atoms are produced as fragments of the event. These atoms carry a total energy of 160 MeV and can travel at velocities up to 5% of the speed of light. If the fission occurs in a very thin layer of nuclear fuel (a few microns thick), some of the fission fragments can escape without collision, minimizing fuel heating. With a magnetic field used to direct these fragments to the exhaust direction, extremely high ISP could be achieved, perhaps on the order of \(10^{6}\) s. This would be two orders of magnitude larger than an electric ion source and more than three orders greater than chemical or nuclear thermal propulsion. On long transit missions this increased ISP, and the decrease in mass of the rocket as its nuclear fragments are ejected, would result in an extremely high final rocket velocity. Chapline envisioned the thin fissile fuel coating being placed on carbon fibers, the fibers moving at high speed through a neutron moderator region with many other coated fibers, having sufficient fissile mass to sustain the reaction, obtaining reactor criticality. The fibers would then exit the neutron moderator to radiatively cool in the vacuum of space. Even though many fission fragments would escape the fuel and eject at high speed, the carbon fibers would still heat significantly. 
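A crude slab-geometry estimate, not the MCNP6.2 calculation described next, already captures the trend that only thin fissile layers let a large fraction of fragments out; the fragment range assumed below is illustrative only.

```python
# Toy geometric sketch: fragments assumed born uniformly in a slab of
# thickness t, emitted isotropically, travelling in straight lines with a
# fixed range R in the fuel. R is an illustrative assumption, not a value
# taken from this paper.

def escape_fraction(t_um, range_um):
    """Fraction of fragments reaching either slab surface before stopping."""
    t, R = float(t_um), float(range_um)
    return 1.0 - t / (2.0 * R) if t <= R else R / (2.0 * t)

R_FRAG = 7.0   # assumed fragment range in the fissile layer [micron]
for t in (0.5, 1, 2, 5, 10, 20):
    print(f"layer {t:5.1f} um -> escape fraction ~ {escape_fraction(t, R_FRAG):.2f}")
```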
To simulate a 14-MeV neutron source incident on a layer of uranium oxide, a nuclear transport code (MCNP6.2) was employed [4]. The simulation tracked any heavy ions generated to estimate the percentage that escaped as a function of the thickness of the uranium oxide layer. The results of the simulation, illustrated in **Figure 1**, confirm that nearly 100% of heavy ions can escape for layer thicknesses below a few microns. Although thicker uranium oxide layers generate more fission fragments, almost all heavy ions remain trapped inside the fissile material. Therefore, using fissile layers thicker than approximately 10 microns does not yield any benefits.

Figure 1: Simulation results for fission fragments in various zones around a uranium oxide layer. _Left_: The population of fission fragments either escaping or remaining inside the uranium oxide layer as a function of layer thickness. _Right_: The percentage of fission fragments escaping as a function of uranium oxide thickness.

To avoid this heating, NASA initiated the study of the Fission Fragment Rocket Engine (FFRE) concept in 2010 [5, 6, 7]. The FFRE would directly convert the momentum of fission fragments into spacecraft momentum at extremely high ISP. The FFRE combined the existing technologies of low-density dust trapped electrostatically (dusty plasmas) with high field magnets for fission fragment control. The fissile fuel fragments and dust would be micron-sized, allowing for the fission fragments to escape the fuel particle without depositing significant energy into the fuel particle or heating it. By designing the nuclear core's uranium dust density to permit a sufficient mean free path for the escape of fission fragments, while simultaneously maintaining density sufficient to maintain criticality, the study showed the FFRE could convert nuclear power to thrust directly and efficiently at a delivered ISP of 527,000 seconds. Without increasing the reactor power, adding a neutral gas to the fission fragment beam significantly increased the FFRE thrust. This frictional interaction of gas and beam resulted in an engine that would continuously produce nearly 4.5 kN of thrust at a delivered impulse of 32,000 seconds, thereby reducing the (then expected) round trip mission to Mars from 3 years to 260 days. It importantly would allow the heating of the gas propellent to a very high temperature, improving thermodynamic efficiency without melting the uranium oxide fuel. The fuel dust would still heat from the gamma radiation in the reactor, however the small size of the fuel dust would mean that the particles had a larger surface-area-to-volume ratio compared to large particles, enhancing radiative cooling of the fuel. These studies concluded that the engine and spacecraft were within current technological capabilities, could be built and launched, however the FFREs would be very large. To sustain a nuclear reaction, a neutron produced by fission in the fuel should be moderated and available to initiate another fission event in order to "go critical." There also needs to be a sufficient amount of fuel mass to continue the reaction. In the FFRE concept, the fuel density needs to be low enough for fission fragments to get out. The design constraint is to design a reactor that keeps neutrons in, and allows fission fragments out, and fundamentally this means a low-density fuel in a large volume. In an early conceptual design, superconducting magnets were used adjacent to the nuclear reaction.
With one magnet having a radius of over 5 meters, the coils would have a combined mass of over 40 metric tons. Either a moderator or neutron reflector would be needed to keep neutrons in the reactor, maintaining criticality. This would add over 50 metric tons and the total engine mass would be over 113 metric tons.[6] In this design, the extra mass would not be expended as the rocket accelerated, substantially limiting the final velocity of the rocket. Though there are a number of other very practical issues to consider for the successful deployment of a dusty plasma FFRE, perhaps most importantly, no one has ever built or tested a dusty plasma reactor on earth, let alone in space. ## III Innovations on the fission fragment rocket engine ### Aerogel Core Ryan Weed and others[8] have suggested a modification of the dusty plasma FFRE concept. By utilizing low density aerogel materials to contain micron-sized fuel particles, the significant complications of the dusty plasma confinement can be eliminated. The distribution of fissile particles in the aerogel can be engineered to facilitate criticality while allowing fission products to escape. The low density of the aerogel would minimize energy loss from the fission fragments while at the same time facilitating radiative cooling of the fissile particles. Spent, aged, or depleted aerogel could be simply replaced with the low mass of replacement aerogels, amounting to a trivial amount of total spacecraft mass. Recent advances in magnetic technology[9] along with reactor design versatility made possible by the utilization of aerogel could dramatically reduce FFRE mass and make deployment on "conventional" space craft, such as SpaceX's Falcon 9, possible. **IIIb. Subcritical Assembly** Reactors have sufficient fissile fuel and sufficient moderation and neutron reflection to sustain a nuclear chain reaction. If any of these requirements for criticality are removed, an accelerator can be used to produce neutrons to facilitate continued nuclear reactions. A reactor that cannot sustain a nuclear chain reaction is not critical and is not, by definition, a reactor. It is a subcritical nuclear assembly. Of many examples, Gahl and Flagg[10] proposed the use of an aqueous solution of uranium salts in heavy water as a subcritical assembly. Driven by a high energy electron beam, producing x-rays upon hitting a high-Z converter, the accelerator would induce photoneutron production in the deuterium of the heavy water. The photoneutrons would induce fission in the uranium[11], producing fission fragments for use in radio pharmacy. With more uranium and better reflection, this assembly would have been a reactor. But if kept subcritical, the device would be much easier and safer to develop, operate, and license. In a subcritical FFRE assembly, an electron accelerator could produce x-rays and subsequently photoneutrons through the irradiation of metallic deuterides. The light-element metallic deuterides would also be used as neutron moderators in the subcritical FFRE assembly, since they very effectively reduce the fission neutrons' energy through elastic collisions. A simplified example of the FFRE assembly is shown in **Figure 2**. There are many types of neutron sources available to drive fission reactions, from simple, commercially available DD fusion systems, spontaneous neutron emitting isotopes, to bremsstrahlung x-ray sources driving photoneutron reactions. 
Applied to FFRE technology, accelerator driven systems would be simpler to develop, much easier to control, and could facilitate the development of very small FFREs, whose geometry would never allow for a system to achieve criticality.

Figure 2: A simplified depiction of an electron beam incident on deuterated metal hydride converter. The generated bremsstrahlung x-rays create photoneutrons that drive fission in thin layers of a fissionable material.

Some of the fission fragments, such as the ones propagating on the thrust axis along the direction of travel, may be used to produce electricity, preferably by direct conversion, to power the accelerator and to meet other spacecraft needs. In addition, the small amount of waste heat generated by the subcritical assembly may be cooled through a thermoelectric converter to produce spacecraft power. **Table 1** lists neutron sources that can be used in the fission reaction. Cf-252, with a half-life of 2.65 years and 2.31 x 10\({}^{6}\) n/s per \(\mu\)g, is the only californium isotope that emits neutrons spontaneously. D-D and D-T reactions generate monoenergetic neutrons of 2.45 MeV and 14 MeV respectively when D+ and/or T+ ions have low kinetic energy. In recent decades, quite a few compact neutron generators using radio-frequency (RF) driven plasma discharges have been developed [12, 13, 14]. These compact neutron generators are composed of three main parts: the plasma source, the acceleration column, and the target. The source is a deuterium plasma formed by ionizing deuterium gas through RF coupling. The deuterium ions present in the plasma are then extracted from an aperture in the plasma facing electrode and accelerated up to over 100 keV onto a metal target. The target is typically either a pre-loaded metal hydride target, or a plain metal target that can adsorb deuterium and tritium atoms very efficiently and form metal hydrides. The incoming ions can then collide with the implanted deuterium through D-D or D-T fusion reactions and produce neutrons of 2.45 MeV and 14.1 MeV respectively. These neutrons can then be moderated and collimated to the necessary energies and direction by using various shielding materials such as polyethylene, heavy water, or light water. The photoneutron can serve as another potential neutron source. Chadwick and Goldhaber discovered the nuclear photoeffect on the deuteron [15]. **Table 2** outlines the materials and their corresponding low threshold reactions, with D and Be-9 being the only nuclides possessing low binding energies, namely 2.23 MeV and 1.66 MeV, respectively. Electron accelerators can also act as neutron generators with certain advantages, including higher intensity, lower cost, and smaller size than nuclear reactors, albeit with lower flux in thermal spectra. Prior research has successfully generated neutrons using high-energy electron beams [16, 17, 18, 19]. Tabbakh _et al._ reported achieving 10\({}^{12}\) n/cm\({}^{2}\)/s, with average energies of 0.9 MeV, 0.4 MeV, and 0.9 MeV for the Pb, Ta and W targets, respectively using RHODOTRON TT200 (IBA) [20]. Thus such equipment displays great potential as a neutron source. We have determined from our MCNP6.2 simulations in a typical geometry that \(\sim\)10\({}^{4}\) electrons are required to generate adequate gamma radiation to produce one neutron through the photodisintegration of a deuteron. In this configuration, each photoneutron will need to create about 10\({}^{4}\) \({}^{235}\)U fissions, so this system must be operated very close to criticality.
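The requirement of roughly \(10^{4}\) fissions per photoneutron can be connected to the needed multiplication factor with a textbook point estimate; this sketch is not taken from the paper, and the value of \(\nu\) is an assumed average neutron yield per fission.

```python
# Sketch of source-driven multiplication in a subcritical assembly: with
# multiplication factor k_eff, each source neutron drives on average about
# k/(nu*(1-k)) fissions (nu ~ 2.4 neutrons per 235U fission). This only
# illustrates why ~1e4 fissions per photoneutron implies operation very
# close to criticality.

NU = 2.44                         # assumed neutrons per fission for 235U

def fissions_per_source_neutron(k_eff, nu=NU):
    return k_eff / (nu * (1.0 - k_eff))

def k_required(fissions, nu=NU):
    """Invert the estimate: k_eff needed for a target number of fissions."""
    return nu * fissions / (1.0 + nu * fissions)

target = 1.0e4                    # fissions per photoneutron quoted above
print(f"k_eff needed for {target:.0e} fissions per source neutron: {k_required(target):.6f}")
for k in (0.95, 0.99, 0.999, 0.9999):
    print(f"k_eff = {k:.4f} -> ~{fissions_per_source_neutron(k):.0f} fissions per source neutron")
```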
In addition, a neutron amplifier can also be considered, comprising a cascaded series of stages consisting of fissionable fuel cells separated from each other by neutron moderator walls [21, 22]. By integrating these innovations into existing designs, it may become feasible to construct FFREs with much lower masses while improving upon the ISP for reduced transit times. \begin{table} \begin{tabular}{|l|l|r|r|} \hline Manufacture & & Neutrons per & Energy (MeV) \\ & & second & \\ \hline Frontier & Cf-252 & 4.4 x 10\({}^{9}\) per & 1.09 \\ & & Ci & \\ \hline Adelphi & DD-108 & 1 x 10\({}^{8}\) & 2.45 \\ \cline{2-3} & DD-109 & 1 x 10\({}^{9}\) & 2.45 \\ \cline{2-3} & DD-110 & 1 x 10\({}^{10}\) & 2.45 \\ \cline{2-3} & DT-110 & 1 x 10\({}^{10}\) & 14 \\ \hline Starfire & nGen-100 D-D & 1 x 10\({}^{7}\) & 2.45 \\ \cline{2-3} & nGen-100 D-T & 5 x 10\({}^{8}\) & 14 \\ \cline{2-3} & nGen-300 D-D & 1 x 10\({}^{7}\) & 2.45 \\ \cline{2-3} & nGen-300 D-T & 5 x 10\({}^{8}\) & 14 \\ \cline{2-3} & nGen-400 D-D & \(>\)1 x 10\({}^{8}\) & 2.45 \\ \cline{2-3} & nGen-400 D-T & \(>\)5 x 10\({}^{9}\) & 14 \\ \hline Thermo Fisher & P385 D-T & 3 x 10\({}^{8}\) & 14 \\ \cline{2-3} & minGen D-T & 3 x 10\({}^{8}\) & 14 \\ \cline{2-3} & minGen D-D & 6 x 10\({}^{6}\) & 2.45 \\ \hline LLNL & Patent \#6,907,097. & 10\({}^{12}\)–10\({}^{14}\) & 14 \\ \hline \multicolumn{3}{|c|}{Electron beam accelerator [23]} \\ \hline Manufacture & model & Type & Energy (MeV) \\ \hline IBA Industrial & TT-50 & RF-SCR & 10 \\ \cline{2-3} Solutions & TT-100 & RF-SCR & 10 \\ \cline{2-3} & TT-200 & RF-SCR & 10 \\ \cline{2-3} & TT-300 & RF-SCR & 10 \\ \cline{2-3} & TT-1000 & RF-SCR & 7.5 \\ \hline NIIEFA & UEL-10-D & RF-Linac & 10 \\ \hline NIIEFA & Elektron 23 & DC & 1 \\ \hline BINP & ILU-10 & RF-SCR & 5 \\ \hline BINP & ILU-14 & RF-Linac & 10 \\ \hline BINP & ELV-12 & DC & 1 \\ \hline Varian & Linatron & RF-Linac & 9 \\ \hline Mevex & Linac & RF-Linac & 3 \\ \hline Wasik Assoc. & ICT & DC & 3 \\ \hline Getienge Group & Linac & RF-Linac & 10 \\ \hline Vivírad S.A. & ICT & DC & 5 \\ \hline \end{tabular} \end{table} Table 1: List of commercially available neutron and electron sources for use in driving fission reactions. ## IV. Conclusions Fast transit to Mars poses several challenges to rocket design due to overall mass, specific impulse, thrust, and thermal management. It may be possible to circumvent several of these obstacles by using subcritical assemblies, energy efficient neutron generation, and compact, high-field magnets to generate and steer fission fragments. Integrating these innovations into FFRE designs may provide increased specific impulse and controlled thrust while operating as a subcritical assembly. These new technologies and improvements may be integrated into a revolutionary nuclear rocket technology and dramatically reduce transit time to Mars. ## V. Acknowledgements We are grateful to Matthew Looney (Texas Tech University) his suggestions and efforts as the Radiation Safety Officer involved in this effort. We thank Prof. John Brockman from MURR for his advice and recommendations associated with the radiations reported herein. ## VI. Data Availability The data and methods recorded and utilized in this study can be found at [https://www.depts.ttu.edu/phas/cees/](https://www.depts.ttu.edu/phas/cees/), and through the Information Technology Division of Texas Tech University. Additional data that support the findings of this study are available from the corresponding author upon reasonable request. 
| Nuclide | Threshold (MeV) | Maximum cross section below 10 MeV [24, 25] (mbarn) | Reaction |
| --- | --- | --- | --- |
| \({}^{2}\)D | 2.23 | 2.17 @ 4.8 MeV | \({}^{2}\)H(\(\gamma\),n)\({}^{1}\)H |
| \({}^{6}\)Li | 3.70 | 3.16 @ 10 MeV | \({}^{6}\)Li(\(\gamma\),n+p)\({}^{4}\)He |
| \({}^{6}\)Li | 5.67 | 3.16 @ 10 MeV | \({}^{6}\)Li(\(\gamma\),n)\({}^{5}\)Li |
| \({}^{7}\)Li | 7.25 | 1.09 @ 10 MeV | \({}^{7}\)Li(\(\gamma\),n)\({}^{6}\)Li |
| \({}^{9}\)Be | 1.67 | 1.41 @ 1.7 MeV | \({}^{9}\)Be(\(\gamma\),n)\({}^{8}\)Be |
| \({}^{13}\)C | 4.90 | 0.69 @ 10 MeV | \({}^{13}\)C(\(\gamma\),n)\({}^{12}\)C |

Table 2: Materials and low threshold reactions.
2303.03124
IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models
Interpretability and human oversight are fundamental pillars of deploying complex NLP models into real-world applications. However, applying explainability and human-in-the-loop methods requires technical proficiency. Despite existing toolkits for model understanding and analysis, options to integrate human feedback are still limited. We propose IFAN, a framework for real-time explanation-based interaction with NLP models. Through IFAN's interface, users can provide feedback to selected model explanations, which is then integrated through adapter layers to align the model with human rationale. We show the system to be effective in debiasing a hate speech classifier with minimal impact on performance. IFAN also offers a visual admin system and API to manage models (and datasets) as well as control access rights. A demo is live at https://ifan.ml.
Edoardo Mosca, Daryna Dementieva, Tohid Ebrahim Ajdari, Maximilian Kummeth, Kirill Gringauz, Yutong Zhou, Georg Groh
2023-03-06T13:37:59Z
http://arxiv.org/abs/2303.03124v2
# IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models ###### Abstract Interpretability and human oversight are fundamental pillars of deploying complex NLP models into real-world applications. However, applying explainability and human-in-the-loop methods requires technical proficiency. Despite existing toolkits for model understanding and analysis, options to integrate human feedback are still limited. We propose IFAN, a framework for real-time explanation-based interaction with NLP models. Through IFAN's interface, users can provide feedback to selected model explanations, which is then integrated through adapter layers to align the model with human rationale. We show the system to be effective in debiasing a hate speech classifier with minimal performance loss. IFAN also offers a visual admin system and API to manage models (and datasets) as well as control access rights. A demo is live at ifan.ml. ## 1 Introduction As _Natural Language Processing_ (NLP) systems continue to improve in performance, they are increasingly adopted in real-world applications Khurana et al. (2022). _Large Language Models_ (LLMs)--such as GPT-3 Brown et al. (2020), BLOOM Scao et al. (2022), and T5 Raffel et al. (2020)--are without a shred of doubt the main protagonists of recent advances in the field. They are able to substantially outperform previous solutions while being directly applicable to any NLP task. There are however strong concerns given the black-box nature of such architectures Madsen et al. (2022); Mosca et al. (2022). In fact, their large scale and high complexity are substantial drawbacks in terms of _transparency_, _accountability_, and _human oversight_. Beyond ethical considerations, even legal guidelines from the European Union are now explicitly defining these interpretability factors as essential for any deployed AI system European Commission (2020). Research efforts in _eXplainable Artificial Intelligence_ (XAI) Arrieta et al. (2020); Mosca et al. (2022) and _Human-in-the-Loop_ (HitL) machine learning Monarch (2021) have thus been on the rise--producing solutions that aim at mitigating the current lack of interpretability. Most notably, the recent literature contains a number of toolkits and frameworks to analyze, understand, and improve complex NLP models Wallace et al. (2019); Liu et al. (2021). Some of them even offer low-code interfaces for stakeholders who do not possess the otherwise required technical proficiency. Nonetheless, current options to collect human rationale and provide it as feedback to the model are still limited. We propose IFAN, a novel low-to-no-code framework to interact in real time with NLP models via explanations. Our contribution can be summarized as follows: 1. IFAN offers an interface for users to provide feedback to selected model explanations, which is then integrated via parameter-efficient adapter layers. 2. Our live platform also offers a visual administration system and API to manage models, datasets, and users as well as their corresponding access rights. 3. We show the efficiency of our framework in debiasing a hate speech classifier and propose a feedback-rebalancing step to mitigate the model's forgetfulness across updates. Figure 1: IFAN in brief. The interface allows NLP models and users to interact through predictions, explanations, and feedback. IFAN also provides developers with (1) a manager for models and datasets, (2) model API access, and (3) reports about the model. 
IFAN's demo is accessible at ifan1.ml together with its documentation.2 Full access is available with login credentials, which we can provide upon request. A supplementary video showcase can be found online3. Footnote 1: [https://ifan.ml](https://ifan.ml) Footnote 2: [https://ifan.ml/documentation](https://ifan.ml/documentation) Footnote 3: [https://www.youtube.com/watch?v=BzzoQzTsrLo](https://www.youtube.com/watch?v=BzzoQzTsrLo) ## 2 Related Work ### HitL with Model Explanations _Human-in-the-Loop_ (HitL) machine learning studies how models can be continuously improved with human feedback Monarch (2021). While a large part of the HitL literature deals with label-focused feedback such as _active learning_, more recent works explore how explanations can be leveraged to provide more detailed human rationale Lertvitatyakumjorn and Toni (2021). Combining classical HitL Wang et al. (2021) with explanations to construct human feedback for the model Han et al. (2020) has been referred to as _Explanation-Based Human Debugging_ (EBHD) Lertvittayakumjorn and Toni (2021). Good examples are Ray et al. (2019), Selvaraju et al. (2019), and Strout et al. (2019), which show improvements in performance and interpretability when iteratively providing models with human rationale. A more NLP-focused EBHD approach is Yao et al. (2021), where the authors leverage explanations to debug and refine two transformer instances--BERT Devlin et al. (2019) and RoBERTa Liu et al. (2019). Concretely, word saliency explanations at different levels of granularity are provided to humans, who in turn provide suggestions in the form of natural language. The annotator's feedback is converted into first-order logic rules, which are later utilized to condition learning with new samples. ### Interactive NLP Analysis Platforms In the recent literature, we can find strong contributions in terms of software and digital toolkits to analyze and explain NLP models Wallace et al. (2019); Hoover et al. (2020) as well as further refining them via parameter-efficient fine-tuning Beck et al. (2022). For instance, Liu et al. (2021) proposes ExplainaBoard, an interactive explainability-focused leaderboard for NLP models. More in detail, it allows researchers to run diagnostics about the strengths and weaknesses of a given model, compare different architectures, and closely analyze predictions as well as recurring model mistakes. Similarly, the Language Interpretability Tool by Tenney et al. (2020) is an open-source platform and API to visualize and understand NLP models. In particular, it provides a browser-based interface integrating local explanations as well as counterfactual examples to enable model interpretability and error analysis. Finally, Beck et al. (2022) releases AdapterHub Playground, a no-code platform to few-shot learning with language models. Specifically, the authors built an intuitive interface where users can easily perform predictions and training of complex NLP models on several natural language tasks. ## 3 Ifan The **I**nteraction **F**ramework for **A**rtificial and **N**atural Intelligence (**IFAN**) is a web-based platform for inspecting and controlling text processing models. Its main goal is to decrease the opacity of NLP systems and integrate explanation-based HitL into their development pipeline. Through our interface, stakeholders can test and explain models' behavior and--when encountering anomalies in predictions or explanations--they can fix them onsite by providing feedback. 
The main blocks of the platform are presented in Figure 2. The **Backbone** part contains all machine learning development components--datasets and models. We adopt HuggingFace formats (see 3.3 and 3.4) Wolf et al. (2020) and wrap the entire backbone as a Docker4 image for deployment. The **User Interface** is the visual component of the platform, where all the human-machine interaction takes place. Here, developers have also access to additional visual resources to configure details about models, datasets, and users. Footnote 4: [https://www.docker.com](https://www.docker.com) The connection between the backbone and the user interface is managed by the **Admin** component. All the user data and rights as well as samples receiving feedback are stored in a PostgreSQL5 database instance. The communication is handled via Python Django6, which integrates everything w.r.t. user authentication, API calls/responses, state logs, and location of backbone resources. In the next sections, we provide a more detailed description of the main platform components. Footnote 5: [https://www.postgresql.org](https://www.postgresql.org) Footnote 6: [https://www.djangoproject.com](https://www.djangoproject.com) ### User Interface Our frontend is built with Boostrap7 and JavaScript8. Currently, the pages available in our UI are the following: Footnote 7: [https://getbootstrap.com](https://getbootstrap.com) Footnote 8: [https://www.javascript.com](https://www.javascript.com) Landing PageHere users can get a short introduction to IFAN. We briefly explain our platform's goals, the concept of HitL, and how our framework can be integrated into the development of NLP models. DocumentationIt provides a detailed description of all the UI components together with screenshots and guidelines. Here, users can find specific instructions on how to configure and interact with our platform. FeedbackThis is the main interaction page. Here, users can run a model on an input sample either taken from the dataset or that they wrote themselves. Then, they can load the model's prediction and explanations and provide feedback both in terms of re-labeling and adjusting each feature's relevance. ConfigurationThis page has limited access (see 3.2). Here, developers can configure and manage the platform, More specifically, users can be created, modified, and deleted as well as upgraded or downgraded in their roles and access rights. Also, they can manage models and datasets as well as specify the currently active ones. Account SettingsEach authorized user can view, edit, export, and delete their account data as well as reset their login password. ### Users The platform separates users in three tiers: _developers_, _annotators_, and _unauthorized_ users (Table 1). Unauthorized users do not possess login credentials and have limited access to the platform. They can visualize model predictions and explanations but their feedback is not considered. Figure 2: Overall schema of IFAN idea: (i) The user selects a dataset or writes a customized input. (ii) Then the user can select a model which should be inspected. (iii) With the UI, annotators can check the model’s prediction on a sample and two types of explanations – local and global. (iv) If there is some misbehavior, the annotators can provide feedback. (iv) The feedback is stored and then used to fine-tune the model. Normal users (or annotators) are known through their credentials and can thus actively engage with the model. 
During a HitL iteration, they can use the feedback page with pre-configured datasets and models, test the model on a text sample, view explanations, and provide feedback if needed. Developers have full access and can configure all aspects of the platform. More specifically, they have access to the _configuration_ page (see 3.1) and can thus manage anything regarding users, roles, API access, models, and datasets. ### Datasets Before the model's behavior exploration, the _active dataset_ should be specified via the configuration page (see 3.1). This is the dataset from which the text examples for the model testing are sampled. We conform to a standard format by using the HuggingFace Datasets library9. Developers interacting with our platform are strongly encouraged to adhere to this standard when uploading new datasets and making them available to the interface. Table 2 shows two examples of datasets already available on our platform. Footnote 9: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index) ### Models Analogous to datasets, our platform specifies an _active model_ at any time and the employed models adhere to the standard used by HuggingFace Models10. Footnote 10: [https://huggingface.co/docs/transformers/main_classes/model](https://huggingface.co/docs/transformers/main_classes/model) To incorporate feedback into our models, we utilize adapter layers (Houlsby et al., 2019), a parameter-efficient fine-tuning technique. Figure 3 sketches an overview of the architecture used. Adapters are integrated on top of each language model unit (e.g. transformer block) and are trained with the human feedback while we freeze the rest of the model's weights. Adapters can also be disabled at any time to recover to the original state of the model. ### Explanations & Feedback Mechanism Users can evaluate the active model on the active dataset through the Feedback page. They may input text in three ways: i) create a text sample themselves; if authorized: ii) sample a random text from the active dataset; iii) sample a random _misclassified_ text from the test part of the active dataset. Users receive the classification results and the model's confidence. They can assess the result and correct any misclassifications. To further inspect the model's behavior, we provide two types of explanations--local and global. For local explanations on a text sample, we display relevant features to each output class (Figure 4). We attribute scores using the LIME framework (Ribeiro et al., 2016) and--to filter weak correlations--we highlight as relevant only tokens with a score above the threshold \(\theta=0.1\). On the global side, we list the most influential unigrams \begin{table} \begin{tabular}{c|c|c|c} \hline \hline & Dev & Annotator & Unauthorized \\ \hline Classification & & & \\ \& Explanations & & \\ \hline Smart Samples & & & \\ Selection & & & \\ \hline Feedback & & & \\ \hline Active Configuration & & & \\ \hline New Models \& & & \\ Datasets Upload & & & \\ \hline New Users & & & \\ Creation & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Different levels of access to IFAN functionalities. \begin{table} \begin{tabular}{l|l} \hline \hline **Dataset** & **Short Description** \\ \hline HateXplain (Mathew et al., 2021) & A dataset for hate speech classification which has 3 classes for hate type detection, the target community classification, and rationales. 
\\ \hline GYAFC (Rao and Tetreault, 2018) & Formality detection dataset which corresponds to 2-class classification: formal and informal. \\ \hline \hline \end{tabular} \end{table} Table 2: Example of datasets available at IFAN for testing. Figure 3: The proposed architecture for the models integrated into IFAN: addition of Adapter layer which is trainable on provided human feedback. for each output class. These can be inspected to extract insights about what keywords and patterns the model focuses on at the dataset level. For all 1-grams present in a dataset, their corresponding classification scores are calculated and the tokens with top scores are displayed on the page. Annotators can easily edit the highlighted tokens and send the updated explanation as feedback. We store the result--i.e. the highlighted relevant parts--and use them to fine-tune the adapter layers (see 3.4). Regarding the fine-tuning procedure, directly using the highlighted feedback text for adapter fine-tuning causes significant losses in the original model performance. We propose to mix feedback with original samples to mitigate this effect, which allows effective feedback incorporation while reducing model forgetfulness. See 4 for more details. ### Backbone API We expose our backbone's API to make available all essential dataset/model management functions. These provide a high-level interface for additional experiments dealing with model evaluation, explanation, and feedback. The API was built with the Python framework FastAPI11, detailed screenshots can be found in the Appendix A. Footnote 11: [https://fastapi.tiangolo.com](https://fastapi.tiangolo.com) ## 4 Case Study We carried out a case study to test the applicability of IFAN. We chose a hate speech detection task based on the HateXplain dataset (Mathew et al., 2021). The goal of the experiment was to use our framework to debias a given hate speech detector. Firstly, we modified the original dataset for binary classification task-- _"toxic"_ and _"non-toxic"_--and fine-tuned a BERT model (Devlin et al., 2019) (BERT-Tiny uncased snapshot)12. We choose the Jewish subgroup as a target for our debiasing process. Footnote 12: [https://huggingface.co/google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) We annotate \(24\) random misclassified samples, \(12\) with the most confidence and \(12\) with the least confidence scores (see Appendix C.1). We invited \(3\) annotators to participate in the annotation process. The n-grams that were modified by annotators were saved and used to create a new training dataset for the adapters. As a result, we collected \(40\) annotated n-grams and repeated them to get \(120\) training samples. To complete the new training creation, we balanced these samples with \(500\) original samples (\(250\) toxic, \(250\) non-toxic) randomly selected from the HateXplain dataset. The results are presented in Table 3. We observe that the non-balanced training dataset, which only contains feedback on the most confidently misclassified samples, resulted in a significant decrease in performance. While the inclusion of feedback on least confident samples caused a slight decline in the overall F1 score, Adapter training on the balanced feedback led to an improvement in the precision score for the Jewish target group. Figure 5 shows the changes in the detector while fine-tuning with the collected feedback. 
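To make the adapter-based feedback integration (Section 3.4) and the rebalancing step of this case study more concrete, the sketch below outlines one possible implementation in plain PyTorch. It is illustrative only: the module and helper names are not taken from IFAN's code base, and the bottleneck size is an assumption.

```python
import random
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter in the style of Houlsby et al. (2019):
    down-projection, non-linearity, up-projection, residual connection."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

def train_adapters_only(model: nn.Module):
    """Freeze the backbone and keep only adapter parameters trainable."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, Adapter):
            for p in m.parameters():
                p.requires_grad = True

def build_rebalanced_set(feedback, orig_toxic, orig_nontoxic,
                         n_feedback=120, n_per_class=250, seed=0):
    """Repeat the annotated feedback n-grams and mix them with randomly drawn
    original samples (mirroring the 120 + 250 + 250 split of the case study)."""
    rng = random.Random(seed)
    repeated = (feedback * (n_feedback // max(len(feedback), 1) + 1))[:n_feedback]
    mixed = repeated + rng.sample(orig_toxic, n_per_class) + rng.sample(orig_nontoxic, n_per_class)
    rng.shuffle(mixed)
    return mixed
```

The mixed set is then used to fine-tune only the adapter weights, which keeps the update parameter-efficient and reversible, since adapters can simply be disabled to recover the original model.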
When rebalancing the feedback, only modified samples are drastically changed while the performance on the original texts is only slightly affected. A more detailed comparison between fine-tuning on non-balanced and balanced feedback can be found in Appendix C.2. Figure 4: The example of the results and local explanations that annotators can obtain on the Feedback page. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Model** & **Pr** & **Re** & **F1** & **Pr\({}_{J}\)** \\ \hline BERT (baseline) & \(0.80\) & **0.78** & **0.79** & 0.95 \\ \hline \multicolumn{5}{c}{_Most Confident Misscalarified_} \\ \hline BERT+Feedback (non-bal.) & 0.34 & 0.28 & 0.31 & 0.82 \\ BERT+Feedback (bal.) & 0.78 & 0.80 & **0.79** & **0.97** \\ \hline \multicolumn{5}{c}{_Least Confident Misscalarified_} \\ \hline BERT+Feedback (non-bal.) & **0.83** & 0.73 & 0.78 & 0.96 \\ BERT+Feedback (bal.) & 0.79 & **0.78** & 0.78 & 0.96 \\ \hline \hline \end{tabular} \end{table} Table 3: The results of the case study: hate speech classification model debiasing. We compare different strategies for feedback incorporation. Pr\({}_{J}\) states for the Precision score on the Jewish target group. ## 5 Limitations & Future Work As of now, our feedback system is limited to applications in the sequence-to-class format. However, current and future work is already focusing on extending the pipeline to token-to-class and sequence-to-sequence use cases. At the same time, we currently offer a limited set of explanation, feedback, and management options, which we plan to increase in the immediate future. A small user study has been conducted (Appendix B) to collect feedback about the platform and improve its user-friendliness. Our intent is to continue iterating the development of new features with trials with developers and laymen. Finally, our experiments do not yet show clear trends w.r.t. the correlation between performance and feedback hyperparameters. Indeed, further research and trials have to be carried out to establish optimal choices for the number of feedback samples, fine-tuning epochs, and the rebalancing ratio. ## 6 Conclusion This work proposes IFAN, a framework focusing on real-time explanation-based interaction between NLP models and human annotators. Our contribution is motivated by the limited options in terms of existing tools to interpret and control NLP models. IFAN is composed of three main units. The **Backbone** unifies all the machine learning pipelines and exposes an API for accessibility. The **User Interface**--organized in _landing page_, _documentation_, _feedback_, and _configuration_--provides an intuitive visual component to interact with models. Finally, the **Admin** controls the connection between the two previous components. Additionally, we introduce the feedback mechanism that takes advantage of adapter layers to efficiently and iteratively fine-tune models on the downstream task. Our experiments show the frameworks' credibility at debiasing a hate speech classifier with minimal performance loss. We believe IFAN to be a valuable step towards enabling the interpretable and controllable deployment of NLP models--allowing users with no technical proficiency to interact and provide feedback to deployed NLP systems. Regarding future work, we set as a priority to extend the framework to more NLP tasks as well as to integrate additional model analysis features and feedback mechanisms. ## Acknowledgments We thank Yutong Zhou for her valuable contribution to our user interface. 
This paper has been supported by the _German Federal Ministry of Education and Research_ (BMBF, grant 01IS17049). Figure 5: The results of the domain case using IFAN platform. We can observe that for both experiments with balanced training data, the overall model’s performance is only slightly changed while the model’s behavior on the Jewish target group is improved. ### Ethical Considerations Interpretability and controllability of modern NLP models and systems are fundamental pillars for their ethical and safe deployment (European Commission, 2020). This works aims at having a positive impact on both aspects as it provides a tool to explain models and provide them with feedback. By reducing the technical proficiency required to interact with NLP systems, we hope to facilitate the process of providing valuable human rationales to influence complex models. We strongly encourage future work to keep exploring this research direction as it enables to involve a larger and more diverse crowd, thus positively affecting also other desiderata such as _fairness_, _transparency_, and _accountability_. Nevertheless, there are potential pitfalls worth considering. Ensuring high quality for the human feedback is challenging (Al Kuwatly et al., 2020), and exposing models to external influence can be used as an exploit by adversarial agents (Mosca et al., 2022). Especially with a very small crowd of annotators, there's potential for a few people to have a strong influence on the model. A restrictive access rights management system like IFAN's already mitigates these issues. We believe that additional security features as well as tracking annotators' impact are key for future work to foster their trustworthiness. Previous works mention that users can feel discouraged and frustrated when interacting with poor models and badly-designed interfaces, which can also affect feedback quality (Lertvittayakumjorn and Toni, 2021). This can be addressed by integrating user studies in the development process in order to design more intuitive interfaces and improve the overall user experience. On the opposite end of the spectrum, plausible explanations can make humans overestimate the model's capabilities and make them trust systems that are still not ready for deployment. In this case, a more diverse and complementary set of explanations for users (Madsen et al., 2022) as well as comprehensive model reports for developers are core goals to provide a more complete picture of the models to be deployed.
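As a complement to the explanation mechanism of Section 3.5, the following minimal sketch shows how token-level saliencies can be computed with the LIME package and filtered with the θ = 0.1 threshold mentioned above; the classifier stub and function names are placeholders, not IFAN's actual implementation.

```python
# pip install lime
import numpy as np
from lime.lime_text import LimeTextExplainer

THETA = 0.1  # saliency threshold used to filter weak correlations

def salient_tokens(text, predict_proba, class_names=("non-toxic", "toxic")):
    """Return (token, weight) pairs whose absolute LIME weight exceeds THETA.
    `predict_proba` maps a list of strings to an (n, n_classes) probability array."""
    explainer = LimeTextExplainer(class_names=list(class_names))
    exp = explainer.explain_instance(text, predict_proba, num_features=20)
    return [(tok, w) for tok, w in exp.as_list() if abs(w) >= THETA]

def toy_predict_proba(texts):
    # Stand-in for the hate speech classifier, for illustration only.
    return np.array([[0.2, 0.8] if "hate" in t.lower() else [0.9, 0.1] for t in texts])

print(salient_tokens("I hate this so much", toy_predict_proba))
```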
2301.12888
Dynamic Full-Field Optical Coherence Tomography module adapted to commercial microscopes for longitudinal in vitro cell culture study
Dynamic full-field optical coherence tomography (D-FFOCT) has recently emerged as a label-free imaging tool, capable of resolving cell types and organelles within 3D live samples, whilst monitoring their activity at tens of milliseconds resolution. Here, a D-FFOCT module design is presented which can be coupled to a commercial microscope with a stage top incubator, allowing non-invasive label-free longitudinal imaging over periods of minutes to weeks on the same sample. Long term volumetric imaging on human induced pluripotent stem cell-derived retinal organoids is demonstrated, highlighting tissue and cell organisation as well as cell shape, motility and division. Imaging on retinal explants highlights single 3D cone and rod structures. An optimal workflow for data acquisition, postprocessing and saving is demonstrated, resulting in a time gain factor of 10 compared to prior state of the art. Finally, a method to increase D-FFOCT signal-to-noise ratio is demonstrated, allowing rapid organoid screening.
Tual Monfort, Salvatore Azzollini, Jeremy Brogard, Marilou Clémençon, Amélie Slembrouck-Brec, Valerie Forster, Serge Picaud, Olivier Goureau, Sacha Reichman, Olivier Thouvenin, Kate Grieve
2023-01-30T13:47:40Z
http://arxiv.org/abs/2301.12888v1
Dynamic Full-Field Optical Coherence Tomography module adapted to commercial microscopes for longitudinal _in vitro_ cell culture study ###### Abstract Dynamic full-field optical coherence tomography (D-FFOCT) has recently emerged as a label-free imaging tool, capable of resolving cell types and organelles within 3D live samples, whilst monitoring their activity at tens of milliseconds resolution. Here, a D-FFOCT module design is presented which can be coupled to a commercial microscope with a stage top incubator, allowing non-invasive label-free longitudinal imaging over periods of minutes to weeks on the same sample. Long term volumetric imaging on human induced pluripotent stem cell-derived retinal organoids is demonstrated, highlighting tissue and cell organisation as well as cell shape, motility and division. Imaging on retinal explants highlights single 3D cone and rod structures. An optimal workflow for data acquisition, postprocessing and saving is demonstrated, resulting in a time gain factor of 10 compared to prior state of the art. Finally, a method to increase D-FFOCT signal-to-noise ratio is demonstrated, allowing rapid organoid screening. ## Introduction The development of microscopy methods dedicated to cell cultures and explants has transformed our understanding of human biology [1-5]. Imaging cells and tissues outside the body removes the constraints of in vivo imaging, enabling higher spatial resolution and the use of exogenous markers [6, 7], thereby permitting imaging of structures ranging from cells and subcellular organelles [8-14] to single-proteins [15, 16]. If the high-resolution imaging of the 3D structural organization of life has brought valuable information [4, 15], the more recent quantification of the temporal dynamics of cells and tissues, made accessible by live cell imaging, gives functional information and a more physiological view of biological behaviours [17-23]. Many processes including cell division rate [21], proliferation, motility, migration, differentiation, or cell death [24], can be used as biomarkers of the physiology of cells and tissues [22-24]. Deregulations of these processes are often associated with diseases such as cancer [25-27], autoimmune disorders [28, 29], neurological disease [30, 31], and chronic inflammation [32]. Live cell imaging of the functional cell response to physical and chemical stimuli can help understanding pathological mechanisms responses to treatments [33] or stimuli [34, 35] and therapeutic effect [33, 36]. However, for successful live cell imaging experiments, cells and tissues must be studied in a context close to their native environment, i.e. as close as possible to in vivo conditions, in order to obtain meaningful data [37-39]. Temperature, CO2, O2 and Na2 levels, and tissue culture medium composition are thus critically important [37-39]. Furthermore, 2D cell cultures are not a satisfactory model for this paradigm since the mechanical stiffness of the glass slide supporting the cells exceeds by 6 orders of magnitude the stiffness of most tissues [40, 41], which has strong consequences on cell organization [42], signalling [43], and fate [44]. Additionally, they do not replicate the physiologically important long-range (>100 um) chemical gradients and long-range 3D interactions, which for instance play a role in organogenesis as well as in the tumor microenvironment [25]. 
Explants are not completely satisfactory either, due to the difficulty in obtaining them, especially for human explants, as well as the difficulties in maintaining the explants alive once blood circulation has been sectioned, as the explants are then oxygen and nutrients starved leading to cell death after a few hours [45]. During the last decade, the development of differentiation protocols for growing organoids from human stem cells have been a revolution in human biology modelling [4, 5, 46-49]. Organoids are self-organized and self-sustaining 3D cell structures derived from embryonic stem cells or induced pluripotent stem cells (iPSCs), mimicking the 3D cellular organization and composition of the primary tissues, comprising all major cell lineages in proportions similar to those in living tissue. They replicate biologically relevant intercellular phenomena and restores some of the physiological mechanical parameters and long-range chemical and biological interactions [4, 5, 49]. Besides, patient-derived samples have become accessible in large quantities, offering outstanding opportunities for (rare) disease modelling, drug testing and development, and personalized medicine [4, 5, 49]. Despite recent advances in microscopy, high-resolution imaging of organoids is still complex, particularly longitudinal, live, volumetric imaging, and remains an open challenge [50]. For organoid imaging, and more generally for live cell imaging, fluorescence microscopy has largely prevailed over other imaging methods [51, 52]. Owing to the astounding number of fluorescent probes available, able to label specifically most biological entities, the spatio-temporal dynamics of virtually any structure of interest can be studied [53]. However, the use of exogenous fluorophores can often skew native cell functioning and they are intrinsically not suitable for live cell imaging [52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63]. These methods introduce significant artefacts including phototoxicity, increased DNA replication stress and mitotic defects [21, 52], molecular buffering [64], and displaced physicochemical equilibrium associated with preventing the formation of liquid organelles for instance [65]. The use of genetically targeted fluorophores for live imaging is also limited to a few model organisms in which genetic editing tools are available, and they are difficult to transpose for tailored individual studies. More specifically for live cell imaging of organoids, fluorescence-based strategies are possible [66, 67, 68] but cumbersome, expensive to implement for patient-derived organoids, and restricted to a live imaging period of a couple of days at most [53]. The integration of fluorophores is also incompatible with cell therapy and organoid grafting therapeutic options. As a result, non-invasive, label-free volumetric optical microscopy appears as a more natural solution for live cell imaging of organoids and other similar 3D tissues [69, 23, 50]. Label free microscopy consists of using the intrinsic optical properties of biological structures--such as scattering, absorption, and phase contrasts, instead of external probes like fluorophores--and monitoring how they change dynamically, in order to characterize samples [70, 71, 72]. Recently, the analysis of the temporal dynamics of such endogenous contrasts enabled an important step to be made towards increasing the specificity of label-free microscopies [23, 50, 73]. 
Among other types of label-free microscopies, full-field optical coherence tomography (FFOCT) appears particularly suited to live cell imaging of organoids [50] thanks to its high 3D spatial resolution, contrast based on backscattering and phase differences [8, 12, 13], high sensitivity, and high imaging speed [8, 12, 14, 74]. Furthermore, temporal quantification of FF-OCT signal over a short time scale, named dynamic FF-OCT (D-FFOCT), has emerged as an invaluable metric of metabolic contrast [8, 50, 69, 73] which has since been linked to subcellular organelles activity [14]. As a result, dynamic and static FF-OCT ([D]-FFOCT), have been used in in vitro and ex vivo studies to specifically resolve most cell types in 3D tissues/organisms [8, 9, 14, 74, 75], to identify different cell stage-states such as senescence and mitosis [10, 26, 28], and to detect subcellular compartments and organelles [28]. (D)-FF-OCT is therefore an appealing microscopy design to drive biology research on unaltered samples at high resolution, and perform live cell imaging in thick samples, including 3D cell cultures. However, whilst D-FFOCT has been successfully applied in retinal cell cultures such as young non-laminated retina organoids [50], retinal explants [69] and retinal pigment epithelium [14, 69], the lack of precise environmental control (temperature, CO\({}_{2}\) level, culture medium composition) on the research setups employed have up to now limited live cell imaging to under 3 hours on the same sample [21, 37, 38, 39, 45]. In this work, for the first time, we designed a D-FFOCT module coupled to one of the optical ports of a commercial inverted microscope. This design aims to standardize D-FFOCT and make it more accessible to a large community, and facilitate its coupling with a wide variety of other imaging methods available on commercial microscopes. Our microscope was directly installed inside a level 2 biosafety laboratory (L2) so that patient-derived cell cultures can be imaged over long periods of time. More importantly, we took advantage of this integration in a commercial microscope to demonstrate the use of D-FFOCT in physiological environmental conditions thanks to a stage-top incubator. The use of a motorized translation stage as well as the automation capability of a commercial microscope enabled us to perform an automatic continuous live cell imaging of an entire retinal organoid (RO) over more than 2 weeks without observing any sign of stress [14]. The RO was still alive at the end of this period of time. Finally, we demonstrate a new method for imaging retinal organoids (ROs) larger than 1 mm\({}^{2}\), older than 250 days, at a stage when photoreceptor cells are fully differentiated with the presence of outer segments. This represents an increasing of the axial range by a factor of 2 and the transverse range by a factor 4 compared to prior state-of-the-art. **Results** **Volumetric D-FFOCT imaging of retinal organoids--**A volumetric acquisition was carried out on a 28 days old retinal organoid (RO - d28), with the newly developed D-FFOCT module (see Methods), inside a stage top incubator, as displayed in Fig.1, with sufficient coverage to observe the entire organoid (Fig.1a) whilst keeping a subcellular resolution, both in lateral (Fig.1b) and axial directions (Fig.1b-d). Three dynamic metrics were calculated from a time series of 512 FF-OCT images (see methods), and were displayed in a hue-saturation-brightness space (HSB) [50]. 
Red hue indicates faster activity than blue; saturation indicates activity randomness; and brightness captures the axial amplitude, or strength of the activity. Using a different acquisition architecture than in prior work [50], acquisition of 512 images at 100 Hz (5.12 s), data transfer (0.79 s), post processing (1.34 s) and saving (0.53 s) are parallelised resulting in an effective total time of acquisition of 5.12 s per D-FFOCT image (see Methods), resulting in a time gain factor of 10 compared to prior state of the art (see Methods) [50]. Thanks to this acquisition speed, three dimensional (3D) volumetric imaging is carried out on the D28 organoid, covering 400x400x120 \(\upmu\)m\({}^{3}\) in 2 hours and 10 minutes, with a voxel size of 139x139x1000 nm\({}^{3}\) (see Methods). 3D volume rendering is achieved, as displayed in Fig.1.f-g. We note that the total time of acquisition also included 2-phase static FF-OCT acquisition and stage translation steps, accounting for a slightly longer acquisition time than 5.12 s per field of view. Figure 1: **Example of D-FF-OCT data acquired for each time point of the time-lapse.** A D-FFOCT 3x3 mosaic z-stack of a 28 days old retinal organoid with an axial step of 1 μm is acquired in 150 minutes. Overlap of the tiles is 50%. Hue scales from 3 to 13 Hz mean frequency. Fig.1a shows an _en face_ (XY) cross-section from within the organoid, at 35 μm depth. A zoom-in on Fig.1a, indicated by the blue square, is displayed in Fig.1b, where each cell can be resolved. Fig.1c shows a vertical (Y2) cross-section from within the organoid. The vertical yellow dashed lines, displayed in Fig.1a and Fig.1c, coincide. Fig.1d shows a vertical (X2) cross-section from within the organoid. The horizontal yellow dashed lines, displayed in Fig.1a and Fig.1d, coincide. A zoom-in on Fig.1d, indicated by the red square, is displayed in Fig.1e, where each cells can be resolved along the axial direction. Fig.1f-g show reconstructed volumetric views of the same organoid. ROs derived from IPS cells are self-forming structures, initially composed of retinal progenitor cells (RPCs) that mimic human retinogenesis through formation of stratified and organized retinal structures that display markers of typical retinal cell types [78, 79]. Label free volumetric images confirmed the self-organisation of the structures into a neuroepithelium containing the cell bodies of aligned RPCs (Fig.1c-b), as previously described in studies using classical immunochemistry [79]. At d28, the RPCs appear as confluent green/yellow cells presenting a dynamic profile between 5,5 and 8 Hz in our culture condition. **Longitudinal and volumetric D-FFOCT imaging of retinal organoids--A** longitudinal volumetric acquisition, with the newly developed D-FFOCT module (see Methods), was carried out on a single retinal organoid, placed inside a stage top incubator, over 17 days. No sign of disturbance or abnormality was observed during the entire acquisition that would indicate RO cellular stress [14, 50], see Fig.2. A D-FFOCT volume, as demonstrated previously and highlighted in Fig.1, was acquired each day with sufficient coverage to observe the entire organoid (Fig.2) whilst keeping a subcellular resolution, both in lateral and axial directions (voxel size of 139x139x1000 nm\({}^{3}\)). The volumetric acquisition ranged from 2 hours and 10 minutes for d27 to 8 hours 32 minutes for d43, due to the size difference. 
In order to make the proof of principle regarding the use of D-FFOCT for time-lapse imaging across several days and weeks, the organoid was embedded in 0.3% Matrigel (see Methods) to keep the same orientation, and imaged en face throughout the time-lapse. As a result, the position of the organoid was perfectly retrieved after removal and re-insertion of the multiwell plate on the microscope during medium changes. As a consequence, the stage-top incubator on the microscope does not necessarily need to host the sample during the whole period of the time-lapse: the sample can potentially be stored in a separate incubator, freeing up the microscope for carrying out other experiments. The organoid remained in a standard multiwell plate with its unsealed lid during the acquisition, itself placed in a stage top incubator (see Methods) set at 37\({}^{\circ}\)C, 5% CO\({}_{2}\), 20.5% O\({}_{2}\), 74.5% N\({}_{2}\) and 95% humidity. Medium change was carried out every two days under a hood placed in a level 2 (L2) biosafety laboratory environment, where the setup itself was installed to ensure optimal culture conditions. As a result, contamination was kept to a minimum, to the standard of L2 and organoid production protocols, throughout the whole experiment. A selected plane at 50 \(\upmu\)m depth, taken from the daily volume acquisitions of the same organoid over 17 days, is displayed in Fig.2. Evolution of cells and structures could be followed over several days at subcellular resolution in the same organoid. Growth of the organoid occurs on a daily basis, thanks to the relatively low concentration of Matrigel embedding the organoid. The shape of the organoid evolves from a spherical ball of retinal progenitors, at d27, to a nondescript shape with rift-like and spherical rosettes, forming from day 36 (d36), which reflects the proliferation of retinal progenitors leading to an increase of neuroepithelial tissue size observed in retinal organoids [78, 79]. During the first week (d27-d35), the RPCs that compose the RO at d27 grow to form the neuroepithelial tissue at the periphery of the RO (Fig.2). Within the first two weeks, rosette formations take place in the neuroepithelium as described previously [78, 79]. We imaged the cell motility that led to the formation of the rosette structures in ROs (Fig. 3). As a result, we could track in 3D their formation and migration over time. We showed that the rosette lumen came from the reorganization of the peripheral layer of the neuroepithelium, implicating hinge regions, as observed in ROs passing from the optic vesicle to the optic cup stage [79]. As a result, proliferative RPCs generated rosette structures which were included in the neuroepithelium over time (Fig.3). The mitotic status of RPCs was highlighted by spherical cells larger than regular RPCs, displaying a red and green hue (5.5 to 13 Hz) close to the lumen, as shown in the d33 ROs (Fig. 3) surrounding the rosette, corresponding to a specific phase of the mitosis. In fact, D-FFOCT resolved cells in the replication/division stage, as shown in Fig.4, with a single cell passing successively through the prometaphase, the anaphase and the telophase over a time period of 16 minutes. Also starting from d33 around the rosette, low frequency blue filaments at 3 Hz can be observed. Similar larger filaments can also be observed from d35 onwards in Fig.2. 
These filaments should theoretically correspond to retinal ganglion cell (RGC) axons, which is consistent with the fact that RGCs are known to emerge in ROs from d29 [78]. We could also observe that such filaments appeared earlier at the bottom of the organoid, for example at 4 \(\upmu\)m above the glass coverslip, where we could detect them from d29. This earlier detection of the axon-like structures at the bottom of the RO, but not on the sides of the equatorial plane, may indicate that their formation is favoured by the glass bottom of the multiwell plate, on which the organoid was resting (see Fig. 3 and Methods). Figure 2: D-FFOCT volumetric and longitudinal imaging of one single organoid at a depth of 50 \(\upmu\)m, across 17 days. Hue scales from 3 to 13 Hz mean frequency. Image mosaicking covers 406x406 \(\upmu\)m\({}^{2}\), 2928x2928 pixels, (3x3), at day 27 (d27) to 717x717 \(\upmu\)m\({}^{2}\), 5163x5163 pixels, (6x6), at day 43 (d43). The scale bar represents 50 \(\upmu\)m and applies to all panels. Low depth prolongation of the axon structures was also favoured because the ROs were placed in Matrigel [47]. Interestingly, whole images of ROs between d35 and d43 (Fig.2), generated by the D-FFOCT module, showed big rosette structures with long rift-like lumen (Fig.5c), where RPCs are still proliferative, surrounded by the emerging RGCs and retinal inner neurons as described in ROs [78, 79] and shown in Fig.5b. Similar observations were made on different ROs from the same batch on which the longitudinal experiments were carried out. Small "punctual" rosettes started to appear from day 32 (d32), mostly from the periphery of the RO, as highlighted by Fig.3, and rift-shaped rosettes started to appear from day 33 (d33), as displayed in Fig.2 and highlighted with Fig.5. The main axis of the cells surrounding the rosette was normal to these rifts, as shown by the similarity between Fig.1.e and Fig.5b. Axonal extensions started to be generated inwardly in ROs from d33 (Fig.2). D-FFOCT resolved axonal structures with high resolution, showing long associated nerve fibers that can compose the optic nerve in vivo (Fig. 5d, e). Figure 4: D-FFOCT tracking of one retinal progenitor cell (RPC) in a retinal organoid (RO) at d36. Hue scales from 3 to 13 Hz mean frequency. Images cover 40x40 \(\upmu\)m\({}^{2}\), 290x290 pixels. 8 minutes separate successive images. Scale bar is 20 \(\upmu\)m. Fig.4a highlights a spherical cell in a prometaphase with two spindle poles. Fig.4b highlights an elongated cell characteristic of the anaphase. Fig.4c highlights two daughter cells after the telophase. Figure 3: D-FFOCT longitudinal time tracking of a rosette formation over eight days. Hue scales from 3 to 13 Hz mean frequency. A single rosette, found in the large volume displayed in Fig.2, was tracked throughout each daily volume imaged. All images share the same scaling, with images from day 28 to 34 covering 70x70 \(\upmu\)m\({}^{2}\), 500x500 pixels, and images from day 35 to 36 covering 105x105 \(\upmu\)m\({}^{2}\), 750x750 pixels.
By extension, since the neuroepithelium is also growing radially to the rosettes, imaging this tissue along their main axis enables to recapitulate both RPC and neuroepithelium fate. Therefore, the alignment of the main cell axis with the D-FROCT imaging plane enables visualization in a single image of the tissue mesoscopic architecture whilst resolving its cells units. This is a more effective way to monitor ROP epithelium as it requires only a single plane rather than a volume to decipher ROS epithelium state. Figure 5: **Highlights on different structures of a retinal organoid (d38) using D-FROCT.** Hue scales from 3 to 13 Hz mean frequency. Fig.5a displays a single plane mosaic (6x6) of 717x717 \(\upmu\)m\({}^{2}\), 5163x5163 pixels and its scale bar is 140 \(\upmu\)m. Fig.5b-d and Fig.5f are magnified views of Fig.5a of 79x79 \(\upmu\)m\({}^{2}\), 570x570 pixels and their scale bar is 30 \(\upmu\)m. Fig.5e is a magnified view from Fig.5d covering 35x35 \(\upmu\)m\({}^{2}\), 250x250 pixels, and its scale bar is 11 \(\upmu\)m. Fig.5b highlights normal-oriented retinal progenitor cells (RPCs) which are paraxial to the imaging plane. If the RPC can be described analogously to a 3D ellipse, it means that its major axis is included in the imaging plane. Fig.5c highlights mitotic RPCs located nearby the rift-like rosettes. White arrows in Fig.5c point out specifically mitotic RPCs. Fig.5d shows a retinal ganglion cell axon-like structure, expending outwardly from the retinal organoid. Fig.5e is highlighting that our setup can resolve single fibres composing this axon-like structure. Fig.5f is highlighting axon-like structure from retinal ganglion cell propagated inside the organoid. In order to demonstrate faster-paced longitudinal acquisition, a time-lapse on a locked plane was carried out over 11 hours, with mosaic imaging (3x3) and larger time series of 1024 FF-OCT images (see Methods), producing a reconstructed D-FFOCT image every 100 seconds (Not shown). This ability shows that faster time monitoring could be conducted on a single plane. However, D-FFOCT single plane imaging at 50 \(\upmu\)m depth does not encompasses the general state of the organoid, as mentioned earlier. Therefore, imaging at higher depth within more conventional ROs, with proper layering, would statistically enable the capture of more cells in a radial position. This principle is highlighted in Fig.5b and Fig.6, where radial cells normal to rosettes can be observed for example. Figure 6: D-FFOCT longitudinal time tracking of a rosette-rift formation over a couple of days. Hue scales from 3 to 13 Hz mean frequency. A single rosette-rift was tracked throughout each daily volume imaged. All images share the same scaling with images from day 33 to 34 covering 128x238 \(\upmu\)m\({}^{2}\), 924x1713 pixels, and images from day 36 covering 205x273 \(\upmu\)m, 1480x1968 pixels, and day 38 covering 313x296 \(\upmu\)m, 2252x2128 pixels. White arrows highlight mitotic RPCs. **Improvement of D-FFOCT imaging range for retinal organoids--D-FFOCT imaging of retinal organoids at high resolution, using objectives with a numerical aperture (NA) higher than 0.8, has been limited to a depth of 80 \(\upmu\)m, using a source at 660 nm, in previous work [50]. However, conventional free-floating retinal organoids are spheres that quickly reach a diameters >700 \(\upmu\)m during their growth [78, 80]. 
Since retinal organoids replicate long-range (>100 \(\upmu\)m) cell gradients within the retina [25, 78], it is thus critical to be able to image up to a minimal depth of 100 \(\upmu\)m in order to capture the overall cell-type organisation in organoids, or mesoscale features, using D-FFOCT [47]. Although we show D-FFOCT images from a RO at 120 \(\upmu\)m depths, reaching to the hollow center of the organoid, using a light source at 730 nm (see Fig.1c-d), it requires a significant amount of time to acquire a volumetric dataset, enabling observation of mesoscale features, as displayed in Fig.1e. Furthermore, D-FFOCT images at >100 \(\upmu\)m depth arguably lack sufficient SNR to resolve each individual cell and their subcellular features. Ideally, in order to capture both mesoscale tissue cell organisation and subcellular features in a minimum amount of time, imaging would need to be occurring at the "equator" of these spherical organoid, where the radial structures of the organoid align with the _en face_ D-FFOCT imaging plane. As a result, higher SNR is needed to optimize the data acquisition on ROs using D-FFOCT, as well as mosaicking over 1 mm\({}^{2}\) area on free-floating organoids. In this section, we demonstrate _en face_ imaging of a free-floating organoid over 1 mm\({}^{2}\), up to 230 \(\upmu\)m depth, on older than 250 days ROs, using a source with a longer wavelength of 810 nm and higher power (20 mW), resulting in an increase of SNR (see Methods). Optimal sampling frequency has been established in the past at 100 Hz for ROs, resulting in a lower D-FFOCT signal at higher or lower sampling frequency [50]. However, here, we used an acquisition rate of 500 Hz, while maintaining the camera full-well-capacity (FWC) near maximum, that we recast in an effective time series at 100 Hz, by performing a temporal binning of 5 successive FFOCT images. As a result, the SNR of the time series was increased by a theoretical factor of \(\sqrt{5}\). As a result, longer wavelengths and temporal binning of the data lead to higher SNR level in organoids, enabling imaging at depths up to 230 \(\upmu\)m. Using this acquisition scheme, we were able to image mesoscale tissue features (Fig.7a-b) and sub-cellular details (Fig.7c-f) of much older ROs on a single D-FFOCT mosaic (10x10) image in 12 minutes. In this type of acquisition, different dynamic profiles can be observed at sub-cellular level (Fig.7c-f). As previously reported by fixed organoid approaches (whole mount or cryosections [85, 86]), our strategy highlights the layered organization of the organoid cells with different cell types (Fig.7a-b). This approach allows the detection of long filaments of approximately 50 \(\upmu\)m in length, located around the edge of the organoid (Fig.7b, 7f), which likely correspond to the inner and outer segments of the photoreceptors. The outermost layer of the retinal organoid, which corresponds to the putative photoreceptor outer nuclear layer, is well defined with very distinguishable cells (Fig.7c). This layer seems delimited towards the internal part of the organoid by a putative outer plexiform layer, mostly present in the whole observation plane (Fig.7a-b). This is followed up with cells presenting different mesoscale features composed of yellow nucleus cells and larger red cells (Fig.7d), which may correspond to the soma of bipolar cells and Muller glial cells that were previously identified by immunostaining in organoids of the same age [85, 86]. 
Finally, in the innermost part of the organoid, distinctive speckled and saturated yellow cells appear (Fig.7e) that correspond to dying cells, as previously described within the centre of large organoids [86, 87] and established with D-FFOCT [14]. Although not imaged at the "equator" of the organoid, D-FFOCT mesoscale imaging was significantly improved compared to D-FFOCT imaging at lower depths. As a result, ROs can be monitored at a faster rate than with volumetric imaging while encompassing the general state of the retinal organoid. Figure 7: D-FFOCT large scale mosaic imaging of a retinal organoid highlighting mesoscale and subcellular features. A 266 day old retinal organoid imaged at 190 \(\upmu\)m depth by D-FFOCT--a 10x10 tile mosaic with 50% overlap, corresponding to 1.125x1.125 mm\({}^{2}\), 8101x8101 pixels--is displayed in Fig.7a. Hue scales from 3 to 13 Hz mean frequency. Fig.7b displays a zoom-in of Fig.7a, highlighting the cell layering organisation of the organoid with different cell types. Outer and inner segments of photoreceptors are located around the edge of the organoid (50 \(\upmu\)m thick layer), Fig.7a-b, highlighted in Fig.7f. The outer nuclear layer of the photoreceptors is well defined with very distinguishable cells, as highlighted in Fig.7c, and delimited by the outer plexiform layer, mostly present in the whole plane. Fiber-like structures follow, with yellow nucleus cells intertwined within these fibres (see Fig.7d), which may be bipolar cells. Finally, mostly in the inner part of the organoid, distinctive speckled and saturated cells appear which may be dying or dead cells, see Fig.7e. Scale bar is 60 \(\upmu\)m in Fig.7b. Scale bar is 25 \(\upmu\)m in Fig.7d-f. Scale bar is 10 \(\upmu\)m in Fig.7c. **Imaging in ex vivo tissues--**To evaluate the performance of D-FFOCT in imaging the highly characteristic physiological organization of photoreceptors, ex vivo adult pig retinal explants were also imaged under culture conditions. Focusing on the photoreceptor layer, the images obtained clearly show the cone and rod photoreceptor mosaic in en face slices, with subcellular detail of mitochondria visible inside the cell bodies, and the reconstructed depth slice revealing photoreceptor shape including the characteristic cone inner and outer segments (Fig. 8). ## Discussion Live imaging with D-FFOCT achieves non-invasive, label free 3D viewing of in vitro and ex vivo samples. In this work, we have demonstrated a D-FFOCT module design, coupled to a commercial microscope with stage top incubator, which allowed 3D, longitudinal imaging in retinal organoids over periods of weeks. Use of longer wavelength LED illumination and time series binning allowed acquisition at deeper penetration depths than had been previously demonstrated [50]; and use of the commercial microscope's translation stage allowed coverage of areas larger than 1 mm\({}^{2}\) while conserving a theoretical lateral spatial resolution <400 nm [10]. These two capabilities open up D-FFOCT applications on thicker and larger samples, as well as the imaging of retinal organoids in a more equatorial plane. This last aspect is especially interesting as it enables simultaneous monitoring of all the cell layers of the retinal organoid at high resolution, therefore providing an overview of the organoid's overall state in a single D-FFOCT mosaic plane rather than having to make a volumetric acquisition, which implies longer acquisition times and larger data volumes. 
As a result, quality control and general assessment can be achieved in a quicker way than with volumetric acquisition. Live imaging can therefore also be conducted at a faster pace. Furthermore, we note that mosaics were reconstructed from tiles overlapping by a factor of 50%. This inefficient coverage is due to D-FFOCT signal inhomogeneities which occur with the high NA objectives used in D-FFOCT, as sample and reference coherence planes do not overlap perfectly in space. Additional degrees of freedom for controlling the reference mirror inclination (see Methods) independently from the objective inclination would enable correction of this defect. As a result, using an overlap of 10% instead of 50% for a mosaic becomes feasible. Consequently, the area covered by a 10x10 mosaic can be accomplished by a 6x6 mosaic with 10% overlap, leading to a threefold time gain (a simple tile-count estimate is given in the sketch below). Furthermore, ROs are empty in their center, meaning that this area could be skipped, further reducing the number of tiles necessary to create a large-scale mosaic at this resolution. Additionally, active control of the reference time delay at high depth would benefit the mosaic image quality by providing a more homogeneous contrast and higher global SNR. Indeed, the time delay between the reference arm and the sample arm (see Methods) was not constant at a given depth for different xy coordinates. The old ROs' heterogeneities accumulated significantly at depths greater than 100 \(\upmu\)m, resulting in a significant optical path difference that depends on the location within the same plane. The mismatch was on the order of 1-2 \(\upmu\)m on the same plane. Finally, parallel management of data and optimisation of post-processing algorithms enabled close to 10 times faster acquisitions than previously demonstrated [8, 14, 50, 69], which is critical for imaging large samples, volumetric imaging and live imaging. Use of the commercial microscope's translation stage allowed accurate mosaicking over wide regions to cover larger organoids than previously demonstrated, and positioning was reproducible when moving from well to well, to the extent that the same field could be recovered without the need for image registration when moving away from and back to the same sample, or when removing and resetting the plate in position for a subsequent acquisition, when using embedding Matrigel. In the case where the sample is free-floating, slow stepping of 2 \(\upmu\)m every 40 ms was carried out in order to minimize movement. No drift was observed on retinal organoids older than 250 days, see Fig.7. As a result, D-FFOCT can be used for monitoring organoid production using different protocols, both with and without embedding. An additional demonstration of this D-FFOCT module was carried out on retinal explants with a focus on photoreceptors, see Fig.8, in order to show imaging performance in ex vivo tissues. To the best of our knowledge, these images represent the highest resolution on photoreceptors achieved by an OCT technique. Figure 8: D-FFOCT in the photoreceptor layer of a porcine retinal explant, imaged under culture conditions. The cone and rod photoreceptor mosaic is revealed in en face (a, b) and axial (c) slices. The depth positions of a) and b) are indicated by the black arrows next to c). White arrows highlight three cones which can be visualized in all three views. Scale bar, 10 \(\upmu\)m.
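To make the tile-count argument above concrete, the short sketch below (an illustrative calculation, not code from the study) computes the lateral extent covered by an n-tile mosaic for a 200x200 \(\upmu\)m\({}^{2}\) field of view and a given fractional overlap:

```python
def mosaic_extent_um(n_tiles: int, field_um: float = 200.0, overlap: float = 0.5) -> float:
    """Lateral extent (um) covered by n_tiles along one axis, with adjacent
    tiles overlapping by the given fraction of the field of view."""
    step = field_um * (1.0 - overlap)
    return field_um + (n_tiles - 1) * step

print(mosaic_extent_um(10, overlap=0.5))  # 1100.0 um for the 10x10, 50% overlap mosaic of Fig. 7
print(mosaic_extent_um(6, overlap=0.1))   # 1100.0 um for a 6x6 mosaic at 10% overlap
print(round(10 * 10 / (6 * 6), 1))        # ~2.8x fewer tiles, i.e. roughly a threefold time gain
```

The field of view and tile counts are those quoted in the text; the small difference from the 1.125 mm extent quoted for Fig. 7 is not significant here, the point being that the two overlap settings cover essentially the same area with roughly a third of the tiles.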
Where past work used a standalone D-FFOCT device under optics laboratory conditions to highlight 3D imaging in organoids [50] and cell behavior under stress in the context of disease modelling [14], here we were able to extend the potential of this promising technique by adapting it for use by a wider community of biologists and other non-optics experts. This was achieved by integrating D-FFOCT into a conventional microscope setup, familiar to biologists, thereby facilitating adoption; by allowing continuous protection of the samples under controlled environmental conditions suitable for culture; by precisely automating key aspects of acquisitions including repeatable xyz positioning, 3D stack capture, and mosaicking; and by developing an efficient and fast workflow for acquiring, post-processing and saving datasets. In addition to these protocol improvements, in terms of optical innovation, we showed greater penetration depths achievable with longer wavelength illumination and the improved SNR gained by binning, which opens up the perspective of making faster acquisitions with fewer frames required to calculate dynamic metrics than in past work. Biological results shown include clear 3D views of cone and rod photoreceptors in retinal explants with unprecedented resolution for an OCT-based technique (see Fig. 8); two types of iPSC-derived ROs with cell layering distinguishable from the dynamics and morphology of the D-FFOCT signal; and long term acquisitions on single organoids over >40 day periods, with the ability to probe mature organoids at later stages of development enabled by the wide field mosaicking and deeper penetration depths. In follow-up work, we anticipate the use of the D-FFOCT module in disease modelling and drug screening applications, in 3D samples such as organoids and explants, as well as 2D samples such as cultured cell sheets. A challenge currently being tackled is the efficient management of the large datasets generated by this high resolution volumetric imaging method. Various post-processing methods are being explored to extract pertinent data and metrics in an efficient manner, including the use of machine learning based methods to quantify cell density, morphometry and dynamic profile in order to facilitate automation of image analysis [9]. With increased automation of acquisition and image analysis, D-FFOCT may find its place as a tool of choice for live imaging in therapeutic screening and quality control trials of cultured tissues. ## Methods ### Setup A schematic of the optical setup is displayed in Fig.9, with its components described. The D-FFOCT module is a Linnik interferometer with one additional lens in the sample arm, balanced in the reference arm; see L3 and L4 in Fig.9. This configuration enables an efficient and homogeneous illumination with a multiport microscope, as the distance between the objective in the sample arm (Obj.1) and the microscope output port, where no optics can be placed without altering the microscope hub for other imaging techniques, is relatively large for a D-FFOCT optical layout [8]. This rather unconventional configuration allows incoherent reflections, predominantly (99.9%) coming from reflections on the non-polarizing beam-splitter (NPBS) cube, to be imaged in their Fourier plane located at the cMOS camera (Q-2HFW, Adimec, Netherlands), reducing their contribution compared to a classical configuration [74] (see patent PCT/FP2011/066132).
The cMOS camera is a Q-2HFW (Adimec, Netherlands), with a pixel size of 10x10 um\({}^{2}\) and 1440x1440 pixels, and is set to obtain an SNR level of 1071. Two different LEDs were used in this work, with central emission wavelengths at 730 nm (LED730) and 810 nm (LED810) (M730LS and M810L3, Thorlabs, Newport, NJ, USA). The optical elements in the sample, reference and detection arms give a magnification M = 58 and an imaging field of 200x200 um\({}^{2}\), thus a pixel size of 139x139 nm\({}^{2}\). The power applied on the sample is 3.3 and 20 mW, resulting in intensity values of 53 mW.mm\({}^{-2}\) and 318 mW.mm\({}^{-2}\), for LED730 and LED810, respectively. The exposure time is set at 3.9 ms or 1.3 ms to achieve 95% of the camera FWC, for LED730 and LED810, respectively. In the case of LED730, a time series of 512 images is acquired at 100 Hz and is used to generate three metrics: the average of the power spectral density (PSD) frequency, the standard deviation of the PSD frequency, and the mean of the running standard deviation computed with a sliding window of 50 elements, according to a methodology developed by Scholler et al. (2020) for contrast standardisation as well as comparison [50]. In the case of LED810, a time series of 2560 images is acquired at 500 Hz and binned in groups of 5 consecutive frames before generating the same metrics as for LED730. The three dynamic metrics (mean frequency of the PSD, standard deviation of the PSD and averaged running standard deviation) are mapped to the hue, saturation and brightness channels of an HSB space, respectively. Figure 9: Schematic of the set-up used in this work. A mounted light-emitting diode (LED) (M810L3 or M730LS, Thorlabs, Newport, NJ, USA) with a wavelength of 810 nm and 25 nm bandwidth, or 730 nm and 40 nm bandwidth, is used as an extended source (S1) of 1 mm\({}^{2}\) with 61.8 uW.mm\({}^{-2}\), or 13.1 uW.mm\({}^{-2}\) irradiance, respectively. A first pair of air doublet lenses (L1) is used (AC254-030-B-ML and AC254-150-B-ML, Thorlabs, Newport, NJ, USA) to image S1 onto a first diaphragm (Ir1). A pair of silver mirrors (M1 and M2) (P10-03-P01, Thorlabs, Newport, NJ, USA) are used to steer the light into the microscope. An air doublet (L2) (AC254-100-B-ML, Thorlabs, Newport, NJ, USA) is used to image Ir1 onto the back focal plane of both objectives (Obj.1 and Obj.2) (UPLAPO3OXSH, Olympus, Japan) by the intermediary of air doublets (L3 and L4) (AC254-150-B-ML, Thorlabs, Newport, NJ, USA), respectively. A second iris (Ir2) is imaged onto the sample and M4, a silicon mirror (monocrystalline silicon wafer, Nanomaterials Development Experts Store, China), and is conjugated to a third iris (Ir3) by the intermediary of a non-polarizing beam-splitter (NPBS), itself conjugated to a cMOS camera (Q-2HFW, Adimec, Netherlands) by the intermediary of a pair of air doublet lenses (L5) (AC254-060-B-ML and AC254-200-B-ML, Thorlabs, Newport, NJ, USA). A steering mirror (M3) (PFE10-P01, Thorlabs, Newport, NJ, USA) and a manual translation stage (XY Trans) are used for alignment and fine positioning of Obj.2, which stands vertically. A piezoelectric actuator (PZT) (PK25LA2P2, Thorlabs, Newport, NJ, USA) and a linear stage (Delay stage) (X-USO25A, Zaber, Canada) are used to introduce a fast or a fixed phase shift between the interferometric fields, respectively. A flat mirror (M5) (CCM1-PO1/M, Thorlabs, Newport, NJ, USA) is mounted in the turret (IX3-RFACA-1-3, Rapp Optoelectronic, Germany) of a commercial microscope (IX83, Olympus, Japan).
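As a rough illustration of this processing chain, here is a minimal NumPy sketch (memory-naive and intended for a small region of interest; the frame rates, bin factor and 50-element window follow the description above, but the exact estimator and normalisation used by the authors may differ):

```python
import numpy as np

def dynamic_metrics(stack, fs=100.0, bin_factor=1, win=50):
    """D-FFOCT dynamic metrics for a (T, Y, X) interferometric image stack.

    Returns three 2D maps: mean PSD frequency, standard deviation of the PSD
    frequency, and time-averaged running standard deviation.
    """
    if bin_factor > 1:  # e.g. 5 for the 2560-frame series acquired at 500 Hz
        t = stack.shape[0] // bin_factor
        stack = stack[:t * bin_factor].reshape(t, bin_factor, *stack.shape[1:]).mean(axis=1)
        fs = fs / bin_factor
    stack = stack - stack.mean(axis=0, keepdims=True)

    # Per-pixel power spectral density (positive frequencies only)
    psd = np.abs(np.fft.rfft(stack, axis=0)) ** 2
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fs)
    norm = psd.sum(axis=0) + 1e-12
    mean_freq = (freqs[:, None, None] * psd).sum(axis=0) / norm               # hue
    std_freq = np.sqrt(((freqs[:, None, None] - mean_freq) ** 2 * psd).sum(axis=0) / norm)  # saturation

    # Running standard deviation over a sliding window, averaged over time
    n_win = stack.shape[0] - win + 1
    running = np.stack([stack[i:i + win].std(axis=0) for i in range(n_win)])
    mean_running_std = running.mean(axis=0)                                   # brightness

    return mean_freq, std_freq, mean_running_std

# Example: LED810-style series, 2560 frames at 500 Hz binned by 5, on a 64x64 crop
rng = np.random.default_rng(0)
tile = rng.normal(size=(2560, 64, 64))
h, s, b = dynamic_metrics(tile, fs=500.0, bin_factor=5)
```

For display, the three maps would then be assigned to hue (rescaled over the 3-13 Hz range used in Fig.7), saturation and brightness, respectively.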
A commercial stage top incubator (H201-K-FRAME, H201-MW-HOLDER and OBJ-COULAR-2532, Okolab, Italy) enables control of the temperature, humidity, and CO2, O2 and N2 gas composition whilst hosting conventional glass-bottomed multiwell plates. A two-dimensional translation stage (SCANplus IM 120&0, 2 mm, Marzhauser, Germany) enables scanning from well to well, as well as mosaicking, and the objective holder of the microscope, mounted on a linear stage (U-MCZ-1-2, Olympus, Japan), enables depth (Z) scanning. **Optimization of data workflow (acquisition, postprocessing and saving)** -- Acquisitions are triggered through hardware using a 0-5 V transistor-transistor logic signal generated by an acquisition card (NI cDAQ-9174, National Instruments, TX, USA), and data from the camera (11 bits) are transferred via 4 CoaXPress cables (25 Gbit/s) to a frame grabber (Cyton CXP4, Bitflow, Massachusetts, USA), itself connected to a PCIe 3.0 slot (32 Gbit/s) of a PC motherboard (WS X299 SAGE, ASUS, Taiwan). Data from the camera are converted to 16 bits by the frame grabber and are then transferred, on a parallel thread, to the random-access memory (RAM) (8x Vengeance LPX 1X16GB 3000MHZ, Corsair, California, USA) of the motherboard, and logged from this same thread. Using a multithreaded architecture, the effective acquisition time T\({}_{tot}\) is bottlenecked by the irreducible time needed for data to be logged (t\({}_{logging}\)). For example, for generating 10 D-FFOCT images (H,S,B), from N\({}_{batches}\) = 10 batches, on a locked location, the total time of acquisition is
structures in ultra-low attachment 24-well plates (Corning) as floating structures in the ProB27 medium supplemented with 10 ng/ml of animal-free recombinant human FGF2 (Peprotech, 100-18B) and half of the medium was changed every 2-3 days [80, 85]. The ProB27 medium is composed of chemically defined DMEM:Nutrient Mixture F-12 (DMEM/F12, 1:1, L-Glutamine), 1% MEM non-essential amino acids (Thermo Fisher Scientific), 2% B27 supplement (Thermo Fisher Scientific), 10 units/ml penicillin and 10 ug/ml streptomycin. At d35, retinal organoids were cultured in the absence of FGF2 in the ProB27 medium with 10% FBS (Thermo Fisher Scientific) and 2 mM of Mittamax (Thermo Fisher Scientific) for the next several weeks. Around d84, the retinal organoids were cultured in the ProB27 medium with 2% B27 supplement without vitamin A (Thermo Fisher Scientific) until d250 [85, 86]. **RO sample preparation for D-FFOCT imaging** Early ROs at d28 were embedded in 3% Matrigel (Corning Matrigel Basement Membrane Matrix Growth Factor Reduced Phenol Red Free [Corning, 356231]) in a 12-well glass-bottom plate (IBL, 220.210.042). Embedded structures were cultured in ProB27 medium + FGF2 up to d35, followed by 1 week in ProB27 medium. Half of the medium was changed every 2-3 days [80, 85]. Old ROs around d250 were transferred to a black, flat-bottomed, glass, microscopy-compatible 24-well plate (ibidi) and the well was filled with pre-warmed fresh culture medium. The organoid was left for at least 1 h in the dark in the 37°C / 5% CO2 incubator before imaging. **Retinal explant** Porcine eyes were obtained from a local slaughterhouse in agreement with the local regulatory department and the veterinarians from the French Ministry of Agriculture (agreement FR75105131). Eyes were dissected to isolate retinas in CO2 independent medium (18045054, Thermo), and pieces from the region behind the optic nerve were cut using sterile biopsy punches of 2 mm.
Each retinal explant was then placed on a polycarbonate membrane (140652, Thermo, Waltham, Massachusetts) with photoreceptors turned upwards. This assembly of membrane plus explant was then placed face down on a glass plate (Cellvis, P12-1.5H-N, IBL) for microscopy, i.e. with the explant sandwiched between membrane and plate, and imaged with ganglion cells uppermost. The level of medium was precisely controlled to hydrate the retina without completely immersing it, in order to optimize oxygenation. These pieces were kept in culture in a CO2 incubator at 37°C for 3 days in Neurobasal-A medium (10888022, Thermo) containing 2 mM of L-glutamine (G3126, MERCK). **Data and code availability** Due to the large size of the datasets (>1 TB), they cannot be uploaded to a public repository. However, data, including the full volumetric longitudinal dataset, the fast timelapse over 11 h, and the dataset obtained on the d266 organoid, as well as the code, can be made available upon reasonable request from the authors. **Code availability** Data and code are available upon reasonable request from the authors.
2303.15746
qEUBO: A Decision-Theoretic Acquisition Function for Preferential Bayesian Optimization
Preferential Bayesian optimization (PBO) is a framework for optimizing a decision maker's latent utility function using preference feedback. This work introduces the expected utility of the best option (qEUBO) as a novel acquisition function for PBO. When the decision maker's responses are noise-free, we show that qEUBO is one-step Bayes optimal and thus equivalent to the popular knowledge gradient acquisition function. We also show that qEUBO enjoys an additive constant approximation guarantee to the one-step Bayes-optimal policy when the decision maker's responses are corrupted by noise. We provide an extensive evaluation of qEUBO and demonstrate that it outperforms the state-of-the-art acquisition functions for PBO across many settings. Finally, we show that, under sufficient regularity conditions, qEUBO's Bayesian simple regret converges to zero at a rate $o(1/n)$ as the number of queries, $n$, goes to infinity. In contrast, we show that simple regret under qEI, a popular acquisition function for standard BO often used for PBO, can fail to converge to zero. Enjoying superior performance, simple computation, and a grounded decision-theoretic justification, qEUBO is a promising acquisition function for PBO.
Raul Astudillo, Zhiyuan Jerry Lin, Eytan Bakshy, Peter I. Frazier
2023-03-28T06:02:56Z
http://arxiv.org/abs/2303.15746v1
# qEUBO: A Decision-Theoretic Acquisition Function for Preferential Bayesian Optimization ###### Abstract Preferential Bayesian optimization (PBO) is a framework for optimizing a decision maker's latent utility function using preference feedback. This work introduces the expected utility of the best option (qEUBO) as a novel acquisition function for PBO. When the decision maker's responses are noise-free, we show that qEUBO is one-step Bayes optimal and thus equivalent to the popular knowledge gradient acquisition function. We also show that qEUBO enjoys an additive constant approximation guarantee to the one-step Bayes-optimal policy when the decision maker's responses are corrupted by noise. We provide an extensive evaluation of qEUBO and demonstrate that it outperforms the state-of-the-art acquisition functions for PBO across many settings. Finally, we show that, under sufficient regularity conditions, qEUBO's Bayesian simple regret converges to zero at a rate \(o(1/n)\) as the number of queries, \(n\), goes to infinity. In contrast, we show that simple regret under qEI, a popular acquisition function for standard BO often used for PBO, can fail to converge to zero. Enjoying superior performance, simple computation, and a grounded decision-theoretic justification, qEUBO is a promising acquisition function for PBO. ## 1 Introduction Bayesian optimization (BO) is a framework for global optimization of objective functions with expensive or time-consuming evaluations (Shahriari et al., 2015). BO algorithms have been successful in a broad range of applications, such as sensor set selection (Garnett et al., 2010), hyper-parameter tuning of machine learning algorithms (Snoek et al., 2012), chemical design (Griffiths and Hernandez-Lobato, 2020), and culture media optimization for cellular agriculture (Cosenza et al., 2022). In many problems, it is not possible to observe (potentially noisy) objective values directly. Instead, a decision-maker (DM) provides preference feedback, often in the form of pairwise comparisons between options shown. This arises in applications such as animation design (Brochu et al., 2010), where a DM is shown two different images and chooses the one with better characteristics (e.g. realism or resemblance to a target image); and exoskeleton gait design (Tucker et al., 2020), where a DM assisted by an exoskeleton walks for a short period of time using two different gait configurations and indicates the one that resulted in more comfortable walking. Preferential Bayesian optimization (PBO) (Brochu et al., 2010; Gonzalez et al., 2017), a subframework within BO, has emerged as a powerful tool for tackling such problems. As in standard BO, a PBO algorithm consists of two main components: a probabilistic surrogate model of the DM's latent utility function; and an acquisition function (AF), computed from the probabilistic surrogate model, whose value at a given set of \(q\) alternatives quantifies the benefit of DM feedback about their preferred alternative in the set. Several AFs for PBO have been proposed (Brochu et al., 2010; Gonzalez et al., 2017; Benavoli et al., 2021; Siivola et al., 2021; Nguyen et al., 2021). However, most are derived from heuristic arguments and lack a proper decision-theoretic or information-theoretic justification. For example, Brochu et al.
(2010) selects the point that maximizes the posterior mean of the model over points in previous queries as the first alternative, and the point that maximizes the expected improvement with respect to the posterior mean value of the first point as the second alternative. Other works simply adopt AFs from the standard BO literature (Siivola et al., 2021), ignoring the fact that preference feedback is observed rather than direct utility values. To address the shortcomings of existing approaches, we study the _expected utility of the best option (qEUBO)_, which generalizes the EUBO AF proposed by Lin et al. (2022) for a different problem setting, as a novel AF for PBO with a proper decision-theoretic justification. **Contributions.** Our contributions are as follows: * We propose qEUBO, an AF for PBO. qEUBO has a sound decision-theoretic interpretation, is simple to compute, and exhibits strong empirical performance. * We show that qEUBO outperforms the state-of-the-art AFs for PBO in several synthetic and realistic test problems. Moreover, we show that qEUBO's closest competitor performs well in early iterations because it is _similar_ to qEUBO but its performance degrades as the number of queries grows. * We show that, under sufficient regularity conditions, qEUBO's Bayesian simple regret converges to zero at a rate \(o(1/n)\) as the number of queries, \(n\), goes to infinity. Moreover, we show there exist problem instances where qEI, a popular acquisition from the standard BO setting that is often used in the PBO setting, has Bayesian simple regret bounded _below_ by a strictly positive constant. * We demonstrate a significant benefit of asking queries with more than two alternatives. This contrasts with previous work by Siivola et al. (2021), which concluded that \(q>2\) only provides limited performance improvement. ## 2 Related Work Several AFs for PBO have been proposed in the literature. Most of them are designed via heuristic arguments (Brochu et al., 2010) or simply reused from the standard BO setting (Siivola et al., 2021). For example, Brochu et al. (2010) selects the point that maximizes the posterior mean of the model over points in previous queries as the first alternative, and the point that maximizes the expected improvement with respect to the posterior mean value of the first point as the second alternative. Nielsen et al. (2014) proposes to use the point preferred by the user in the previous query as the first alternative, and the point that maximizes the expected improvement with respect to this point as the second alternative. For \(q=2\), qEUBO recovers this AF if we force the first alternative to be equal to the point preferred by the user in the previous query and optimize only over the second alternative. Gonzalez et al. (2017) proposes a pure exploration sampling policy along with two AFs based on the expected improvement and Thompson sampling AFs that aim to maximize the soft-Copeland's score. However, the computation of this score requires integration over the optimization domain, thus making these algorithms intractable even for problems of moderate dimension. Siivola et al. (2021) proposes using _batch_ versions of the expected improvement and Thompson sampling AFs from standard BO for selecting the points in each query.
Since utility values are not observed directly, the batch expected improvement is adapted by defining the improvement with respect to the maximum posterior mean value over points in previous queries, along the lines of the approach followed by Brochu et al. (2010). Batch Thompson sampling is defined as in the standard BO setting: each point in the batch is selected as the point that maximizes an independently drawn sample from the utility's posterior distribution. Nguyen et al. (2021) proposes the multinomial predictive entropy search (MPES) AF for top-\(k\) ranking BO, a slightly more general framework where the DM selects her \(k\) most preferred alternatives in each query. MPES selects the query that maximizes the information gain on the utility function's maximizer through observing the DM's feedback. It can be seen as a principled adaptation of the predictive entropy search (PES) AF for standard BO (Hernandez-Lobato et al., 2014). Like with PES, the computation of MPES requires approximating an intractable multi-dimensional integral with respect to the posterior distribution on the utility function's maximizer. This is computationally challenging, especially in problems of moderate dimension, and inaccurate approximation can lead to degraded performance. To our knowledge, the AFs proposed by Siivola et al. (2021) and Nguyen et al. (2021) are the only existing ones allowing for queries with more than two alternatives. Figure 1: Fire particle rendering problem from Section 5, in which a human user is asked which of two animations looks more like fire (top). Final rendering results based on fitting a support vector machine model to 100 comparisons between random particle effects and then optimizing the predicted latent decision function over animation parameters (bottom). Finally, Benavoli et al. (2021) proposes an AF where the first alternative is the point chosen by the user in the previous query, and the second one is obtained by maximizing a linear combination between the logarithm of the probability of improvement and the information gain; the weight of this linear combination is a hyperparameter of the algorithm. It also proposes two other AFs based on Thompson sampling and GP-UCB (Srinivas et al., 2012). The above AFs (except MPES) were derived via heuristic arguments. In contrast, qEUBO is derived following a principled decision-theoretic analysis modeling the fact that, in PBO, observations are comparisons instead of direct utility values. qEUBO's approach is consistent with the rigorous decision-theoretic or information-theoretic analysis used to derive principled AFs in standard BO. Moreover, unlike MPES, qEUBO is easy to compute and comes with a convergence guarantee. Finally, qEUBO outperforms MPES significantly in our empirical evaluation. qEUBO, restricted to the case \(q=2\), was first discussed by Lin et al. (2022) in the context of preference exploration for multi-attribute BO. In this context, the DM does not express preferences directly over alternatives but over attributes of these alternatives, which are assumed to be time-consuming to evaluate. As a consequence, qEUBO is not used directly to find alternatives to show to the DM. Instead, it is combined with a probabilistic surrogate model of the mapping from alternatives to attributes to find hypothetical attribute vectors over which the DM expresses preferences. Our work places qEUBO in the context of PBO and extends its definition to queries with \(q>2\) alternatives.
We also generalize the analysis of Lin et al. (2022) by showing that maximizing qEUBO recovers a one-step optimal solution when responses are noise-free for \(q>2\). Finally, we provide a novel convergence analysis for qEUBO. The connection between qEUBO and the one-step Bayes optimal policy relates qEUBO to the knowledge gradient class of sampling policies for sequential data collection, which are, by definition, one-step Bayes optimal (Frazier et al., 2008). Knowledge gradient AFs have been widely used in standard BO (Wu and Frazier, 2016; Wu et al., 2017; Cakmak et al., 2020) and are known for their superior performance over simpler AFs such as expected improvement or Thompson sampling, especially when observations are noisy or the objective function is highly multi-modal (Wu and Frazier, 2016; Balandat et al., 2020) or when these simpler AFs are used in settings where they lack a meaningful interpretation. Figure 2: log10(optimum value - utility value at the maximizer of the posterior mean) using moderate logistic noise and \(q=2\) comparisons per DM query. All algorithms are shown up to 150 queries. qEUBO outperforms other algorithms on all but one problem. At the same time, they are typically very challenging to maximize (Balandat et al., 2020), often resulting in high computation times and degraded performance in problems of moderate dimension. The former is particularly problematic in the PBO setting where queries are often required to be generated in real time. Since qEUBO is significantly simpler to compute, we effectively overcome the computational burden commonly faced by knowledge gradient AFs. While this equivalence does not hold anymore when responses are noisy, we show that qEUBO still enjoys an additive constant approximation guarantee to the one-step Bayes optimal policy. Our work is also related to the literature on dueling-bandits (Yue et al., 2012; Bengs et al., 2021). Like in our problem setting, the DM is assumed to express preference feedback over sets (typically pairs) of alternatives. However, most of these approaches assume a finite number of alternatives and often also independence across pairs of alternatives. The double Thompson sampling strategy for dueling bandits proposed by Wu and Liu (2016) is analogous to the Thompson sampling AF for PBO proposed by Siivola et al. (2021). Finally, our work is also related to a broader stream of research on computational preference elicitation (Braziunas, 2006). This work focuses on problems with a finite set of alternatives, where it suffices to estimate a ranking of the alternatives, or on estimating the underlying DM's utility function within a parametric class of functions. In particular, our work is closely related to Viappiani and Boutilier (2010), which derived an analogous result relating optimal recommendation sets with one-step Bayes optimal query sets. However, Viappiani and Boutilier (2010) proposes using optimal recommendation sets to query the DM, which differ from those queries selected by qEUBO when responses are noisy. As a consequence, our analysis under noisy responses also focuses on relating qEUBO to the one-step Bayes optimal policy rather than the policy that selects optimal recommendation sets. ## 3 Problem Setting We denote the space of _alternatives_ or _options_ by \(\mathbb{X}\). Succinctly, our goal is to find the best possible alternative in \(\mathbb{X}\) according to the DM's underlying preferences.
These preferences are encoded via a latent utility function, \(f:\mathbb{X}\rightarrow\mathbb{R}\). We model \(f\) through a general Bayesian prior distribution. In our experiments we use a Gaussian process (GP) prior, but our derivation and analysis of qEUBO does not make this assumption and is applicable to more general priors. At every interaction with the DM, an algorithm selects a _query_, \(X=(x_{1},\ldots,x_{q})\in\mathbb{X}^{q}\). The DM then expresses her most preferred alternative among these \(q\) points. This response is denoted by \(r(X)\in\{1,\ldots,q\}\), where \(r(X)=i\) if \(x_{i}\) is the alternative chosen by the DM. The DM's responses may not always be consistent with the underlying utility function. We model this via a parametric likelihood function \(L(\,\cdot\,;\lambda):\mathbb{R}^{q}\rightarrow\mathbb{R}^{q}\) such that \[\mathbf{P}(r(X)=i\mid f(X))=L_{i}(f(X);\lambda),\] where \(L_{i}(f(X);\lambda)\) is the \(i\)-th component of \(L(f(X);\lambda)\) and \(\lambda\) is estimated along with other parameters of the model. Our numerical experiments and Theorem 2 assume a logistic likelihood function of the form \[L_{i}(f(X);\lambda)=\frac{\exp(f(x_{i})/\lambda)}{\sum_{j=1}^{q}\exp(f(x_{j})/\lambda)},\] for \(i=1,\ldots,q\), where \(\lambda\geq 0\) is the _noise level_ parameter. For \(\lambda=0\), the above expression is defined as its right-hand limit as \(\lambda\) converges to \(0\). It can be easily shown that \(\lambda=0\) recovers a noise-free response likelihood. Theorem 3 allows for a broader class of likelihood functions. Details are provided in Section B. Let \(\mathcal{D}^{(n)}=\{(X_{m},r(X_{m}))\}_{m=1}^{n}\) denote the data collected after \(n\) queries and \(\mathbf{E}_{n}\) denote the conditional expectation given \(\mathcal{D}^{(n)}\). Following the decision-theory literature, if we decide to stop at time \(N\), we will recommend the point that maximizes the DM's expected utility given the data collected so far; i.e., an element of \(\operatorname*{argmax}_{x\in\mathbb{X}}\mathbf{E}_{N}[f(x)]\). Thus, we wish to select the queries \(X_{1},\ldots,X_{N}\) so that the expected utility received by the DM under our recommendation, \(\max_{x\in\mathbb{X}}\mathbf{E}_{N}[f(x)]\), is as large as possible. ## 4 qEUBO ### qEUBO and the One-Step Bayes Optimal Policy To motivate our AF, we begin by discussing the one-step Bayes optimal policy, i.e., the policy that chooses at every iteration the query that would be optimal if it were the last one. To this end, we define for an arbitrary query \(X\in\mathbb{X}^{q}\), \[V_{n}(X)=\mathbf{E}_{n}\left[\max_{x\in\mathbb{X}}\mathbf{E}_{n+1}[f(x)]\mid X_{n+1}=X\right].\] This is the expected utility received by the DM if one last additional query \(X_{n+1}=X\) is performed. The one-step Bayes optimal policy chooses at every iteration the query that maximizes \(V_{n}\). Since \(\max_{x\in\mathbb{X}}\mathbf{E}_{n}[f(x)]\) does not depend on \(X_{n+1}\), maximizing \(V_{n}\) is equivalent to maximizing \[\mathbf{E}_{n}\left[\max_{x\in\mathbb{X}}\mathbf{E}_{n+1}[f(x)]-\max_{x\in\mathbb{X}}\mathbf{E}_{n}[f(x)]\mid X_{n+1}=X\right].\] The above expression is analogous to the knowledge gradient AF from standard BO (Frazier et al., 2009; Scott et al., 2011; Wu and Frazier, 2016). As mentioned earlier, knowledge gradient AFs often outperform simpler AFs. However, they are also very challenging to maximize due to their nested expectation-maximization structure.
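Before turning to the main result, here is a small illustration of the response model defined earlier in this section: a minimal sketch (plain NumPy, with made-up utility values standing in for \(f\); not the authors' implementation) of the logistic likelihood over a query of \(q\) alternatives and of its noise-free limit.

```python
import numpy as np

def response_probs(utilities, lam):
    """P(r(X) = i | f(X)) for a query X with latent utilities f(x_1), ..., f(x_q).

    lam > 0 gives the logistic (softmax) likelihood; lam -> 0 recovers the
    noise-free DM who always picks the highest-utility alternative.
    """
    utilities = np.asarray(utilities, dtype=float)
    if lam == 0.0:
        probs = (utilities == utilities.max()).astype(float)
        return probs / probs.sum()
    z = utilities / lam
    z -= z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

f_X = [0.3, 0.9, 0.5]                 # toy utilities for a q = 3 query
print(response_probs(f_X, lam=1.0))   # noisy responses, fairly flat
print(response_probs(f_X, lam=0.1))   # low noise, concentrates on the argmax
print(response_probs(f_X, lam=0.0))   # noise-free limit
```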
Our main result shows that, when the DM responses are noise-free, maximizing \(V_{n}\) is equivalent to maximizing a simpler AF. We define the _expected utility of the best option (qEUBO)_ AF by \[\mathrm{qEUBO}_{n}(X)=\mathbf{E}_{n}\left[\max\{f(x_{1}),\ldots,f(x_{q})\} \right].\] Under this definition, the following result holds. **Theorem 1**.: _Suppose the DM's responses are noise-free. Then,_ \[\operatorname*{argmax}_{X\in\mathbb{X}^{q}}\mathrm{qEUBO}_{n}(X)\subseteq \operatorname*{argmax}_{X\in\mathbb{X}^{q}}V_{n}(X).\] Thus, to find a maximizer of \(V_{n}\), it suffices to maximize \(\mathrm{qEUBO}_{n}\). This is a significantly simpler task as it does not require solving a nested stochastic optimization problem. When the posterior over \(f\) is Gaussian or approximated via a Gaussian distribution (e.g., via a Laplace approximation), \(\mathrm{qEUBO}_{n}\) can be efficiently maximized via sample average approximation (Balandat et al., 2020). This is the approach we pursue in our experiments. Moreover, if \(q=2\), \(\mathrm{qEUBO}_{n}\) has a closed form expression in terms of the posterior mean and covariance functions (Lin et al., 2022). When the DM's responses are noisy, maximizing \(\mathrm{qEUBO}_{n}\) is no longer equivalent to maximizing \(V_{n}\). However, the result below shows that if noise in the DM's responses follows a logistic likelihood, maximizing \(\mathrm{qEUBO}_{n}\) still recovers a high-quality query. Formally, we show the following. **Theorem 2**.: _Suppose that the DM's responses follow the logistic likelihood function with parameter \(\lambda\) defined above. Denote \(V_{n}\) as \(V_{n}^{\lambda}\) to make its dependence on \(\lambda\) explicit. If \(X^{*}\in\operatorname*{argmax}_{X\in\mathbb{X}^{q}}\mathrm{qEUBO}_{n}(X)\), then_ \[V_{n}^{\lambda}(X^{*})\geq\max_{X\in\mathbb{X}^{q}}V_{n}^{0}(X)-\lambda C,\] _where \(C=L_{W}((q-1)/e)\), and \(L_{W}\) is the Lambert \(W\) function (Corless et al., 1996)._ The above two results extend those shown by Lin et al. (2022) to the logistic likelihood and \(q>2\). Their proofs can be found in Section A. Figure 3: log10(optimum value - utility value at the maximizer of the posterior mean) for \(q\in\{2,4\}\). Algorithms are shown up to 150 queries. qEUBO outperforms all other algorithms on all problems. Including more alternatives per query (\(q=4\)) allows regret to decline more quickly. ### qEUBO and qEI The batch expected improvement AF, commonly known as qEI, was developed in the context of parallel BO (Ginsbourger et al., 2008; Wang et al., 2016), where it enjoys a meaningful decision-theoretic interpretation. It was adapted to the PBO setting by Siivola et al. (2021). While qEI lacks a meaningful interpretation in the PBO setting, it often has good performance. Here, we show that qEUBO is related to qEI. This connection sheds light on qEI's strong empirical performance as an AF for PBO. However, we also show qEI has significant drawbacks that cause it to have poor performance in some practical scenarios. Observe that \(\mathrm{qEUBO}_{n}(X)=\mathbf{E}_{n}[F(X)]\) where \(F:\mathbb{X}^{q}\rightarrow\mathbb{R}\) is defined by \(F(X)=\max_{i=1,\ldots,q}\{f(x_{1}),\ldots,f(x_{q})\}\). Moreover, if \(I_{n}\) is any value that does not depend on \(X\), then maximizing \(\mathrm{qEUBO}_{n}\) produces the same query as maximizing \(\mathbf{E}_{n}\left[F(X)-I_{n}\right]\). 
Now observe that, if \(I_{n}=\max\left\{\mathbf{E}_{n}[f(x)]:x\in\cup_{m=1}^{n}X_{m}\right\}\), then \(\mathrm{qEI}_{n}(X)=\mathbf{E}_{n}[\{F(X)-I_{n}\}^{+}]\) recovers the qEI AF proposed by Siivola et al. (2021). From these expressions we observe that, if \(F(X)\) is typically larger than \(I_{n}\) so that \(F(X)-I_{n}=\{F(X)-I_{n}\}^{+}\), then optimizing \(\mathrm{qEUBO}_{n}(X)\) should produce an optimal \(X\) similar to the one obtained by optimizing \(\mathrm{qEI}_{n}(X)\). This is often the case in early iterations, when \(I_{n}\) is small. However, as \(I_{n}\) becomes larger, we expect this to occur less frequently, making qEI and qEUBO produce more different queries. Moreover, when they diverge, qEI can perform quite poorly, as we will see later. When the incumbent alternative (the one whose posterior mean achieves \(I_{n}\)) has low variance, as typically results from comparing a good alternative against many other alternatives, qEI will become increasingly reluctant to include it or points near it into the next query. In standard BO, this reluctance is appropriate because re-measuring the incumbent will not generate an improvement. But, in PBO, there is great value in comparing an incumbent alternative to another alternative that might be better -- this is a primary way that we evaluate new alternatives. This is also consistent with experimental results discussed later in Section 5 and shown in Figures 2, 3 and 5: qEUBO and qEI tend to perform similarly early on when we have asked the DM few queries; later, qEI tends to stall, while qEUBO continues to reduce its simple regret. This intuition is also codified in Theorem 4 in the next section, which shows an example in which qEI fails to be consistent. To illustrate this further, Figure 4 compares qEUBO and qEI on a simple 1-dimensional example problem (a quadratic function with a single maximum at \(x=0.5\)). For each AF, we first trained a preferential GP model using 5 randomly chosen comparisons (left column of Figure 4), then generated 5 more (middle column of Figure 4), and additionally 5 more comparisons (right column of Figure 4) using qEUBO (top rows) and qEI (bottom rows) respectively. After the first 5 randomly generated comparisons, the posteriors are the same, the contours of the two AFs are similar because \(I_{n}\) is small, and the two methods make similar queries. After 5 more queries, generated using each AF, qEUBO has already learned that \(0.5\) is a good solution and is comparing this with other alternatives. In contrast, qEI is choosing not to compare with \(0.5\). This pattern continues after 5 additional queries. Figure 4: Comparison of qEUBO (top row) against qEI (bottom row) on a 1-dimensional problem (a quadratic function with a single maximum at \(x=0.5\)) with \(q=2\). The first column shows the AF over the two alternatives to be included in the query after training a preferential GP on \(5\) randomly generated queries. The second column shows the AF after 5 more queries, generated by the given AF. The third column shows the AF after 5 more queries generated via that row’s AF.
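To make this comparison concrete, here is a minimal Monte Carlo sketch of the two criteria (plain NumPy, with made-up posterior means and variances standing in for what a fitted preferential GP would provide; an illustrative estimator, not the authors' implementation):

```python
import numpy as np

def qeubo_mc(samples):
    """qEUBO_n(X) = E_n[max(f(x_1), ..., f(x_q))] from joint posterior samples
    of shape (num_mc_samples, q)."""
    return samples.max(axis=1).mean()

def qei_mc(samples, incumbent):
    """qEI_n(X) = E_n[(max_i f(x_i) - I_n)^+] with incumbent value I_n."""
    return np.maximum(samples.max(axis=1) - incumbent, 0.0).mean()

rng = np.random.default_rng(0)
n, incumbent = 200_000, 1.0      # I_n: posterior mean of the best point seen so far

# Query A pairs the (nearly known) incumbent with one uncertain challenger;
# Query B pairs two fresh, uncertain challengers with the same mean.
query_a = np.column_stack([rng.normal(incumbent, 1e-2, n), rng.normal(0.8, 0.5, n)])
query_b = np.column_stack([rng.normal(0.8, 0.5, n), rng.normal(0.8, 0.5, n)])

print(qeubo_mc(query_a), qeubo_mc(query_b))                    # ~1.12 vs ~1.08
print(qei_mc(query_a, incumbent), qei_mc(query_b, incumbent))  # ~0.12 vs ~0.21
```

With these toy numbers qEUBO slightly prefers the query that re-uses the incumbent, whereas qEI assigns more value to dropping it, mirroring the reluctance described in the text above; whenever \(F(X)\) exceeds \(I_n\) with high probability the two criteria coincide up to the constant \(I_n\).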
Our analysis assumes that \(\mathbb{X}\) is finite, \(q=2\), and other technical conditions described in Section B. These conditions hold in a broad range of settings. For example, they hold under the logistic likelihood function discussed above if the prior distribution on \(f\) is such that \[\delta\leq|f(x)-f(y)|\leq\Delta\] almost surely whenever \(x\neq y\) for some \(\Delta\geq\delta>0\). They also hold for general non-degenerate GP prior distributions if the likelihood function satisfies \[L(f(X);\lambda)=a\] for some fixed \(a>1/2\) whenever \(f(x_{1})\neq f(x_{2})\). Under these conditions, we show the following results. Formal statements and proofs can be found in Section B. **Theorem 3**.: _Assume the sequence of queries is chosen by maximizing qEUBO and the assumptions described in Section B hold. Then, \(\mathbf{E}[f(x^{*})-f(\widehat{x}_{n}^{*})]=o(1/n)\), where \(x^{*}=\operatorname*{argmax}_{x\in\mathbb{X}}f(x)\) and \(\widehat{x}_{n}^{*}\in\operatorname*{argmax}_{x\in\mathbb{X}}\mathbf{E}_{n}[f(x)]\)._ **Theorem 4**.: _There exists a problem instance (i.e., \(\mathbb{X}\) and Bayesian prior distribution over \(f\)) satisfying the assumptions described in Section B such that if the sequence of queries is chosen by maximizing qEI, then \(\mathbf{E}[f(x^{*})-f(\widehat{x}_{n}^{*})]\geq R\) for all \(n\), for a constant \(R>0\)._ The problem instance in which qEI fails to be consistent has the characteristics previously described in Section 4.2 -- the incumbent, i.e., the alternative with the best posterior mean, also has known value. As a result, qEI is unwilling to include it in the queries asked. This makes qEI unable to learn about the value of other alternatives -- it can learn about the relative value of other alternatives with _each other_, but not about their value relative to the incumbent. ## 5 Experiments We compare qEUBO with various state-of-the-art AFs for PBO from the literature. We consider MPES from Nguyen et al. (2021), which, as described, is arguably the only existing PBO AF with a proper justification. We also consider qEI and batch Thompson sampling (qTS) from Siivola et al. (2021), which were both shown to have excellent empirical performance. We also consider qNEI, a version of qEI that accounts for the uncertainty in latent function values through Monte Carlo integration over fantasized values (Balandat et al., 2020). qEUBO, qEI, and qNEI are optimized via sample average approximation with multiple restarts (Balandat et al., 2020). qTS uses approximate sample paths obtained via 1000 random Fourier features (Rahimi and Recht, 2007). For reference, we also include the performance of random search (Random), which selects queries uniformly at random over the space of alternatives. All algorithms use a Gaussian process prior with a constant mean function and RBF covariance function to model \(f\). We approximate the posterior distribution over \(f\) via the variational inducing point approach introduced by Hensman et al. (2015). Our approach is equivalent to the one pursued by Nguyen et al. (2021) if we take the set of inducing points equal to the set of all points in the queries asked so far. Our set of inducing points includes these points in addition to a small set of quasi-random Sobol points, which improves performance slightly. We report results across three synthetic test functions and three test functions built from real-world data.
In contrast with most existing papers from the literature, which limit themselves to low-dimensional problems, we focus on more challenging problems of moderate dimension (\(>3\)). Synthetic functions include 6-dimensional Ackley, 7-dimensional Alpine1, and 6-dimensional Hartmann. Realistic problems include a 7-dimensional car cab design problem (Carcab) (Lin et al., 2022), a 4-dimensional problem involving real-world human preferences over 100 sushi items (Sushi) (Siivola et al., 2021), and a novel 5-dimensional animation optimization problem (Animation). Noise is added to simulate inconsistency in the DM's responses. To create our novel animation optimization problem, we use real human comparison data from a real-world particle effect rendering animation based on the publicly available demo in the AEPsych package (Owen et al., 2021). In this setting, a human user is asked to compare two rendered animations of particles side by side and to determine which one looks more like fire (Figure 1, top). The particle animation is parameterized by 5 parameters. We collected 100 such pairwise comparisons from human users with random particle animation parameters. We then confirmed that by fitting a support vector machine model on this data and optimizing, we are able to obtain a realistic fire-like particle effect. A screenshot of the resulting animation is shown in the bottom of Figure 1. We then use this fitted model as the ground-truth test function to perform simulation. In all problems, a first stage of interaction with the DM is performed using \(4d\) queries chosen uniformly at random over \(\mathbb{X}^{q}\), where \(d\) is the input dimension of the problem. After this initial stage, each algorithm was used to select 150 additional queries sequentially. Figures 2 and 3 show the mean of the log simple regret, plus and minus 1.96 times the standard deviation divided by the square root of the number of replications, as a function of the number of queries. Here, simple regret is defined as the maximum objective value minus the objective value at the maximizer of the posterior mean. We average over 100 replications for the Animation and Sushi problems and 50 replications for the other problems. Figure 2 shows results for \(q=2\) for all algorithms. Figure 3 shows results for both \(q=2\) and \(q=4\) for MPES, qTS, qEI, and qEUBO; we only focus on the best-performing algorithms to reduce visual clutter. All problems use moderate levels of Gumbel noise, consistent with the use of a logistic likelihood. qEUBO outperforms all other AFs in all problems except Carcab for \(q=2\), followed by qEI and then by qTS. In Section C we also include results for \(q=4\), \(q=6\), and varying levels of noise. In these results, qEUBO continues to consistently outperform competitor methods. Figure 3 shows that including more alternatives in each query (\(q=4\) vs. \(q=2\)) allows qEUBO to achieve a given simple regret using fewer queries. Other AFs also benefit from including more alternatives in each query, but qEUBO seems to benefit the most. This contrasts with Siivola et al. (2021), which found only a marginal benefit of using larger values of \(q\). At the same time, our results are consistent with those from Mikkola et al. (2020), which also observed significant benefits from using queries with _larger information content_. Our work provides complementary evidence because each query in Mikkola et al.
(2020) is equivalent to an infinite number of pairwise comparisons, while our queries use only \(q-1\) comparisons. Results in Section C suggest that there is also a benefit in going from \(q=4\) to \(q=6\) for all algorithms considered there, including qEUBO, but that this benefit is smaller and less consistent than that of going from \(q=2\) to \(q=4\). Table 1 shows the AF optimization walltime per iteration for each AF and each test problem, averaged over all the iterations. \begin{table} \begin{tabular}{l|r r r r r} \hline \hline Problem/Acquisition function & qNEI & MPES & qTS & qEI & qEUBO \\ \hline Ackley & 8.1 & 24.8 & 6.5 & 6.3 & 12.4 \\ Alpine1 & 11.4 & 16.4 & 6.4 & 8.9 & 11.3 \\ Hartmann & 8.7 & 15.3 & 7.1 & 6.0 & 8.3 \\ Animation & 9.4 & 13.5 & 8.6 & 7.4 & 8.2 \\ Carcab & 7.2 & 12.7 & 6.9 & 7.1 & 7.2 \\ Sushi & 8.9 & 23.9 & 9.5 & 5.9 & 7.5 \\ \hline \hline \end{tabular} \end{table} Table 1: Average runtimes in seconds across all test problems. MPES is consistently the slowest algorithm, followed by qNEI. MPES is slow because it requires approximating an intractable integral involving the posterior distribution on the utility function’s maximizer. qTS and qEI are the fastest algorithms, followed closely by qEUBO. qEI is competitive in terms of its computational requirements, often outperforming all the other AFs. qEUBO is fast enough to support interactive learning applications, such as those for psychophysics experimentation (Owen et al., 2021) and animation (Brochu et al., 2010), despite the challenging dimensionality of the experiments presented here. To better support interactive applications, one can begin optimizing qEUBO to generate the next query while the user is considering the current query. This can be done by initiating qEUBO for all possible user responses to the current query. Figure 5 compares qEI and qEUBO on an example problem similar to the one analyzed in Theorem 4 in which qEI fails to be consistent. Figure 5: Comparison between qEUBO and qEI on the 7-dimensional Alpine1 function seeded with many comparisons between a good solution and other randomly chosen ones. This setting is similar to Theorem 4. When we have a reasonably good status quo solution whose value is known with high precision, qEI is unable to significantly reduce its simple regret while qEUBO steadily learns. The objective function is the 7-dimensional Alpine1 function. The initial data set contains several queries constituted by pairs where the first point is a known high-utility point close to the optimum, and the second point is drawn uniformly at random over the domain. After these comparisons, the value of this point has relatively low variance and has a posterior mean relatively high compared to the posterior mean elsewhere. This mimics a setting common in practice where we have an in-use status quo solution that is reasonably good, has well-understood performance because it is currently in use, and on which we would like to improve. In this setting, the incumbent value \(I_{n}\) has a reasonably high value relative to the posterior mean elsewhere and the variance of the latent utility near the incumbent solution is small. As a result, qEI does not include the incumbent solution or nearby values in DM queries, hampering its ability to learn. Consequently, qEI's simple regret stalls while qEUBO, on the other hand, makes steady progress as the number of queries grows.
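For reference, the quantity plotted in Figures 2, 3 and 5 amounts to the following computation (a sketch; `f_true` and `posterior_mean` are hypothetical callables, and the maximum objective value is assumed to be available, as it is for the test problems used here):

```python
import numpy as np

def log10_simple_regret(f_true, posterior_mean, candidates):
    """log10(max_x f(x) - f(argmax_x posterior mean)) over a candidate set.

    f_true and posterior_mean map an (m, d) candidate array to (m,) values;
    the candidate set is assumed to contain (or approximate) the true optimum.
    """
    values = f_true(candidates)
    best_guess = int(np.argmax(posterior_mean(candidates)))
    regret = values.max() - values[best_guess]
    return np.log10(max(regret, 1e-12))  # guard against an exact zero
```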
The code used to conduct our empirical evaluation can be found at [https://github.com/facebookresearch/qEUBO](https://github.com/facebookresearch/qEUBO). ## 6 Conclusion This work introduces the expected utility of the best option (qEUBO) acquisition function for preferential Bayesian optimization. qEUBO is simple to compute, has a sound decision-theoretic interpretation, and exhibits strong empirical performance across a broad range of problems. We also draw a connection between qEUBO and its closest competitor, qEI, showing that qEI tends to perform well in early iterations because it is similar to qEUBO but its performance degrades as the number of queries grows or when the variance around the optimum is very small. Furthermore, we show that qEUBO's Bayesian simple regret converges to zero at a rate \(o(1/n)\) as the number of queries, \(n\), goes to infinity. In contrast, we show that simple regret under qEI can fail to converge to zero. Finally, we demonstrate the substantial benefit of performing queries with more than two alternatives, in contrast with previous work, which found only a marginal benefit. Future directions include studying qEUBO's performance under other probabilistic models and extending qEUBO to more structured problem settings such as contextual preferential Bayesian optimization. ## Acknowledgements We thank Stryker Buffington and Michael Shvartsman for their help setting up the animation example. We also thank the anonymous reviewers for their helpful comments. PF was supported by AFOSR FA9550-19-1-0283 and FA9550-20-1-0351.
2303.04893
A class of Gorenstein algebras and their dualities
In the recent paper "The Nakayama functor and its completion for Gorenstein algebras", a class of Gorenstein algebras over commutative noetherian rings was introduced, and duality theorems for various categories of representations were established. The manuscript on hand provides more context to the results presented in the aforementioned work, identifies new classes of Gorenstein algebras, and explores their behaviour under standard operations like taking tensor products and tilting.
Wassilij Gnedin, Srikanth B. Iyengar, Henning Krause
2023-03-08T21:22:40Z
http://arxiv.org/abs/2303.04893v1
# A class of Gorenstein algebras ###### Abstract. In the recent paper "The Nakayama functor and its completion for Gorenstein algebras", a class of Gorenstein algebras over commutative noetherian rings was introduced, and duality theorems for various categories of representations were established. The manuscript on hand provides more context to the results presented in the aforementioned work, identifies new classes of Gorenstein algebras, and explores their behaviour under standard operations like taking tensor products and tilting. Key words and phrases:Gorenstein algebra, local duality, maximal Cohen-Macaulay module, Serre duality, stable module category, tilting, gentle algebra 2 ## 1. Introduction In this paper we study the existence of a positive definite number of positive definite matrices. We consider the following two classes of positive definite matrices: \[\operatorname{\mathrm{i}j}\dim_{R}M<\infty\,\iff\,\operatorname{\mathrm{i}j} \dim_{R}M\leq\operatorname{\mathrm{i}j}\dim_{R}M\leq\operatorname{\mathrm{i}j }\dim_{R}M=\operatorname{\mathrm{i}j}\dim_{R}M\] where \(M\) is a positive definite matrix and \(\dim_{R}M\) is a positive definite matrix. The first class of positive definite matrices is the class of positive definite matrices. The second class of positive definite matrices is the class of positive definite matrices. The second class of positive definite matrices is the class of positive definite matrices. The second class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. The third class of positive definite matrices is the class of positive definite matrices. 
That a local ring admitting a nonzero finitely generated module of finite injective dimension must be Cohen-Macaulay was one of the "homological conjectures" in local algebra, and was finally settled by P. Roberts, as a consequence of his proof of the New Intersection Theorem; see [66, 67]. Given this, one can replace \(\operatorname{depth}R\) by \(\dim R\) in the equivalences stated above. A commutative local ring \(R\) is _Gorenstein_ if \(\operatorname{inj}\dim R<\infty\). It follows from the discussion in 2.1 that \(R\) is Gorenstein if and only if \(\operatorname{inj}\dim R=\dim R\). Then \[\operatorname{Ext}_{R}^{i}(k,R)=\begin{cases}k&i=\dim R\,,\\ 0&\text{otherwise}\,.\end{cases}\] That is to say \(\operatorname{RHom}_{R}(k,R)\cong\Sigma^{-\dim R}k\). Let \(R\) be a commutative ring, not necessarily local. We write \(\operatorname{Spec}R\) for the spectrum of \(R\), consisting of the prime ideals in \(R\) with the Zariski topology. The _support_ of a finitely generated \(R\)-module \(M\) is the subset \[\operatorname{supp}_{R}M\coloneqq\{\mathfrak{p}\in\operatorname{Spec}R\mid M_{\mathfrak{p}}\neq 0\}\,.\] It is the closed subset \(V(I)\), where \(I\) is the annihilator ideal of \(M\), namely, the kernel of the homothety map \(R\to\operatorname{End}_{R}(M)\). A commutative ring \(R\) is _Gorenstein_ if it is noetherian and the local ring \(R_{\mathfrak{p}}\) is Gorenstein, in the sense of 2.2, for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\); equivalently, for each \(\mathfrak{p}\) in \(\operatorname{max}(\operatorname{Spec}R)\). When this holds \(\operatorname{inj}\dim R=\dim R\), and then \(R\) has finite injective dimension precisely when \(\dim R\) is finite. In particular \(R\) is Artinian and Gorenstein if and only if \(\operatorname{inj}\dim R=0\); equivalently, when \(R\) is self-injective. Prominent examples include regular rings (which are, by definition, rings locally of finite global dimension), certain determinantal rings, and, more generally, rings of invariants of certain finite groups; see [16]. Nagata [54, Appendix A1] constructed a commutative noetherian ring \(R\) such that \(\operatorname{gl}\dim R_{\mathfrak{p}}\) is finite for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\) (so \(R\) is a regular ring), and \(\dim R=\infty\). This ring is, in particular, Gorenstein, which makes the point that \(R\) Gorenstein does not mean \(\operatorname{inj}\dim R\) is finite. In other words, even in the world of commutative rings, Gorenstein does not mean Iwanaga-Gorenstein; see Section 6. 
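As a concrete illustration of the local definition: for the power series ring \(R=k[[x_{1},\dots,x_{d}]]\) over a field \(k\), the Koszul complex on \(x_{1},\dots,x_{d}\) is a finite free resolution of \(k\), and a direct computation gives \(\operatorname{Ext}_{R}^{i}(k,R)=0\) for \(i\neq d\) and \(\operatorname{Ext}_{R}^{d}(k,R)\cong k\), in agreement with the displayed formula above. More generally, any hypersurface ring \(S/(f)\), with \(S\) regular local and \(f\) a nonzero nonunit, is Gorenstein; in dimension zero the rings \(k[x]/(x^{n})\), for \(n\geq 1\), are self-injective, hence Artinian Gorenstein. 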
The following theorem of Goto [37] often allows one to extend results known for rings of finite injective dimension, to the more general class of Gorenstein rings. **2.6 Theorem**.: _Let \(R\) be a commutative Gorenstein ring. For any \(M\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}R)\) one has \(\operatorname{Ext}_{R}^{i}(M,R)=0\) for \(i\gg 0\). _ The remarkable aspect of this result is that \(\operatorname{Ext}_{R}^{*}(M,R)\) is bounded although the injective dimension of \(R\) need not be finite. Goto's theorem easily leads to the following theorem, which also goes by the name "local duality theorem", for Grothendieck's local duality can be deduced from it readily; see [66]. **2.7 Theorem**.: _When \(R\) is a commutative Gorenstein ring \(\operatorname{RHom}_{R}(-,R)\) induces a triangle equivalence \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}R)^{\mathrm{op}}\xrightarrow{ \sim}\mathbf{D}^{\mathrm{b}}(\operatorname{mod}R)\). _ ## 3. Finite projective algebras Let \(R\) be a commutative noetherian ring. Following Gabriel [32], by a _finite \(R\)-algebra_\(A\) we mean that \(A\) is an associative \(R\)-algebra that is finitely generated as an \(R\)-module. In this case, \(A\) is noetherian on either side, its centre \(Z(A)\) is a noetherian ring, and \(A\) is a finite \(Z(A)\)-algebra. Thus for any \(A\)-complexes \(M,N\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\), the bounded derived category of finitely generated (left) \(A\)-modules, \(\operatorname{Ext}^{i}_{A}(M,N)\) is finitely generated as an \(R\)-module, and hence also as a \(Z(A)\)-module, for each integer \(i\). We say \(A\) is a _projective_\(R\)-algebra to mean that \(A\) is an \(R\)-algebra that is projective as an \(R\)-module. Thus a finite projective \(R\)-algebra is an \(R\)-algebra that is both finite and projective as an \(R\)-module. Examples include the group algebra, \(RG\), over \(R\) of a finite group \(G\) and \(RQ\), the quiver algebra of a finite quiver \(Q\) without loops and cycles. ### Support and faithfulness Let \(A\) be a finite \(R\)-algebra. The support of \(A\) as an \(R\)-module, in the sense of 2.3, has another interpretation. 3.1. Gabriel [32] introduced the _spectrum_ of \(A\), denoted \(\operatorname{Spec}A\), to be the collection of two-sided prime ideals in \(A\). A proper two-sided ideal \(\mathfrak{p}\) of \(A\) is a _prime ideal_ if for any two-sided ideals \(I,J\) of \(A\) with \(IJ\subseteq\mathfrak{p}\) it follows that \(I\subseteq\mathfrak{p}\) or \(J\subseteq\mathfrak{p}\). Given such a \(\mathfrak{p}\) its contraction \(\mathfrak{p}\cap R\) to \(R\) is again a prime ideal, so we get a map \(\operatorname{Spec}A\to\operatorname{Spec}R\). It can be shown that its image is precisely \(\operatorname{supp}_{R}A\), see [32, Chapter V, Proof of Proposition 11]. This leads to the question: When is \(\operatorname{supp}_{R}A=\operatorname{Spec}R\)? This is connected to the faithfulness of \(A\) as an \(R\)-module, as is explained below. 3.2.: An \(R\)-module \(M\) is _faithful_ if the homothety map \(R\to\operatorname{End}_{R}(M)\) is one-to-one. An \(R\)-algebra \(A\) is faithful as an \(R\)-module precisely when the structure map \(R\to A\) is one-to-one. When a finitely generated \(R\)-module \(M\) is faithful, \(\operatorname{supp}_{R}M=\operatorname{Spec}R\). The converse need not hold; for instance, if \(I\) is a nilpotent ideal in \(R\), then \(\operatorname{supp}_{R}(R/I)=\operatorname{Spec}R\), but the \(R\)-module \(R/I\) is not faithful when \(I\neq 0\). 
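For instance, for the matrix algebra \(A=\operatorname{Mat}_{n\times n}(R)\) every two-sided ideal is of the form \(\operatorname{Mat}_{n\times n}(I)\) for an ideal \(I\) of \(R\), and such an ideal is prime precisely when \(I\) is, so the map \(\operatorname{Spec}A\to\operatorname{Spec}R\) is a bijection in this case. Moreover \(A\) is free of rank \(n^{2}\) as an \(R\)-module, hence faithful, and \(\operatorname{supp}_{R}A=\operatorname{Spec}R\). 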
3.3.: **Lemma**.: _Let \(M\) be a finitely generated projective \(R\)-module. Then \(M\) is faithful if and only if \(\operatorname{supp}_{R}M=\operatorname{Spec}R\)._ Proof.: We only have to verify that when \(\operatorname{supp}_{R}M=\operatorname{Spec}R\), the \(R\)-module \(M\) is faithful. Let \(I\) be the kernel of the homothety map \(\lambda\colon R\to\operatorname{End}_{R}(M)\). Since \(M\) is projective, the \(R_{\mathfrak{p}}\)-module \(M_{\mathfrak{p}}\) is free for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\); it is also nonzero because \(\operatorname{supp}_{R}M=\operatorname{Spec}R\). Thus the induced map \[\lambda_{\mathfrak{p}}\colon R_{\mathfrak{p}}\longrightarrow\operatorname{End}_{R}(M)_{\mathfrak{p}}\cong\operatorname{End}_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})\] is injective; the isomorphism holds because \(M\) is finitely generated. Thus \(I_{\mathfrak{p}}=0\). Since this holds for each prime ideal \(\mathfrak{p}\), it follows that \(I=0\), as desired. 3.4.: Let \(M\) be a nonzero finitely generated projective \(R\)-module. Consider the rank function \[\operatorname{Spec}R\longrightarrow\mathbb{N}\qquad\text{where }\mathfrak{p}\mapsto\operatorname{rank}_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})\,.\] This is continuous for the Zariski topology on \(\operatorname{Spec}R\) and the discrete topology on \(\mathbb{N}\), and hence constant on connected subsets of \(\operatorname{Spec}R\); see [9, Chapter III, Theorem 7.1]. Therefore when \(\operatorname{Spec}R\) is connected, the \(R\)-module \(M\), being projective and finitely generated, is faithful. In general, \(R\) decomposes as \(R^{\prime}\times R^{\prime\prime}\), where \(\operatorname{Spec}R^{\prime}=\operatorname{supp}_{R}M\), with \(M\) faithful when viewed as an \(R^{\prime}\)-module, and \(R^{\prime\prime}\cdot M=0\). The upshot of this discussion is that if \(A\) is a finite projective \(R\)-algebra, then by dropping to a ring factor \(R^{\prime}\) of \(R\) if needed, we can assume that \(A\) is also faithful as an \(R\)-module; equivalently, that \(R\) is a subring of the centre of \(A\). ### Fibre-wise criteria Let \(A\) be a finite projective \(R\)-algebra. For each \(\mathfrak{p}\) in \(\operatorname{Spec}R\) the ring \(A\otimes_{R}k(\mathfrak{p})\) is a finite dimensional \(k(\mathfrak{p})\)-algebra. One can thus view \(A\) as a family of finite dimensional algebras parameterised by the (spectrum of the) ring \(R\). Moreover any \(A\)-complex \(M\) defines the family of complexes \[M_{k(\mathfrak{p})}\coloneqq M\otimes_{R}^{\operatorname{L}}k(\mathfrak{p})\qquad\text{for $\mathfrak{p}$ in $\operatorname{Spec}R$}\,.\] We identify \(A_{k(\mathfrak{p})}\) with the \(k(\mathfrak{p})\)-algebra \(A\otimes_{R}k(\mathfrak{p})\), and view \(M_{k(\mathfrak{p})}\) as a complex over it. The following results validate this perspective. An \(A\)-complex \(M\) is _perfect_ if it is quasi-isomorphic to a bounded complex of finitely generated projective \(A\)-modules. **3.5 Proposition**.: _Let \(A\) be a finite projective \(R\)-algebra. For any \(M\in\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\), the following conditions are equivalent._ 1. _The_ \(A\)_-complex_ \(M\) _is perfect;_ 2. _The_ \(A_{k(\mathfrak{p})}\)_-complex_ \(M_{k(\mathfrak{p})}\) _is perfect for each_ \(\mathfrak{p}\in\operatorname{supp}_{R}A\)_;_ 3. 
_The_ \(A_{k(\mathfrak{m})}\)_-complex_ \(M_{k(\mathfrak{m})}\) _is perfect for each_ \(\mathfrak{m}\in\max(\operatorname{supp}_{R}A)\)_._ Proof.: It is straightforward to verify that (1)\(\Rightarrow\)(2)\(\Rightarrow\)(3). (3)\(\Rightarrow\)(1) Since \(M\) is in \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\), it suffices to verify that for each maximal ideal \(\mathfrak{m}\) in \(\operatorname{supp}_{R}A\) and simple \(A_{\mathfrak{m}}\)-module \(L\) one has \[\operatorname{Ext}^{i}_{A_{\mathfrak{m}}}(M_{\mathfrak{m}},L)=0\qquad i\gg 0\,.\] This is by [6, Theorem A.1.2]. Since \(\mathfrak{m}A_{\mathfrak{m}}\) is contained in the Jacobson radical of \(A_{\mathfrak{m}}\) according to [53, Corollary 5.9], one has \(\mathfrak{m}L=0\) and the action of \(A_{\mathfrak{m}}\) on \(L\) factors through \(A_{k(\mathfrak{m})}\). Thus standard adjunction yields the isomorphism below: \[\operatorname{Ext}^{i}_{A_{\mathfrak{m}}}(M_{\mathfrak{m}},L)\cong \operatorname{Ext}^{i}_{A_{k(\mathfrak{m})}}(M_{k(\mathfrak{m})},L)=0\qquad i \gg 0\,.\] The equality holds because \(M_{k(\mathfrak{m})}\) is a perfect \(A_{k(\mathfrak{m})}\)-complex. A variation of the last proof yields a fibre-wise criterion for projectivity of certain \(A\)-modules. More precisely, the following statement holds, see [15, Lemma 2.19]. **3.6 Lemma**.: _Let \(A\) be a finite projective \(R\)-algebra, and \(M\) a finitely generated \(A\)-module. When \(M\) is projective as an \(R\)-module, there is an equality_ \[\operatorname{proj}\dim_{A}M=\sup\{\operatorname{proj}\dim_{A_{k(\mathfrak{p} )}}M_{k(\mathfrak{p})}\mid\mathfrak{p}\in\operatorname{Spec}R\}\,.\] _Moreover, it suffices to take the supremum over the maximal ideals in \(R\). _ The following result due to Bass and Murthy, see [9, Theorem 7.1] or [7, Theorem 4.1], is a key ingredient in the proof of [6, Theorem A.1.2] used above. It plays a role in the discussion below, and also later on. **3.7 Theorem**.: _Let \(R\) be a commutative noetherian ring, \(A\) a finite \(R\)-algebra, and \(M\) an \(A\)-complex in \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\). Then \(M\) is perfect if and only if the \(A_{\mathfrak{p}}\)-complex \(M_{\mathfrak{p}}\) is perfect for each \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\). _ It is worth noting that the analogous statement for injective dimensions does not hold in general; any Gorenstein ring that is not of finite injective dimension illustrates this; see 2.5. For later use we record a few more properties of \(A\)-complexes that can be detected on the fibres. Given a complex \(N\) in \(\mathbf{D}(\operatorname{Mod}A)\) we write \(\operatorname{Loc}(N)\) and \(\operatorname{Thick}(N)\) for the localising subcategory and the thick subcategory, respectively, generated by \(N\). **3.8 Theorem**.: _Let \(A\) be a finite projective \(R\)-algebra, and let \(M,N\) be \(A\)-complexes. The following conditions are equivalent._ 1. \(M\in\operatorname{Loc}(N)\) _in_ \(\mathbf{D}(\operatorname{Mod}A)\)_;_ 2. \(M_{k(\mathfrak{p})}\in\operatorname{Loc}(N_{k(\mathfrak{p})})\) _in_ \(\mathbf{D}(\operatorname{Mod}A_{k(\mathfrak{p})})\) _for each_ \(\mathfrak{p}\in\operatorname{supp}_{R}A\)_._ Proof.: The functor \(\mathbf{D}(\operatorname{Mod}A)\) to \(\mathbf{D}(\operatorname{Mod}A_{k(\mathfrak{p})})\) assigning \(M\mapsto M_{k(\mathfrak{p})}\) is exact and preserves coproducts. This yields that (1)\(\Rightarrow\)(2). 
(2)\(\Rightarrow\)(1) The key input is a local to global principle for detecting membership in localising subcategories, from [12, SS3]. To that end we view \(\mathbf{D}(\operatorname{Mod}A)\) as an \(R\)-linear triangulated category; see [46, SS7]. Associated to each \(\mathfrak{p}\) in \(\operatorname{Spec}R\) one has a local cohomology functor on \(\mathbf{D}(\operatorname{Mod}A)\), denoted \(\varGamma_{\mathfrak{p}}(-)\), extending the classical local cohomology functors over \(R\), introduced by Grothendieck. This functor can be realised as \[\varGamma_{\mathfrak{p}}M\cong M\otimes_{R}^{\mathbb{L}}\varGamma_{\mathfrak{ p}}R\] for \(M\) in \(\mathbf{D}(\operatorname{Mod}A)\), where \(\varGamma_{\mathfrak{p}}R\) is given by application of the functor \[E\longmapsto\bigcup_{n>0}\{x\in E\mid\mathfrak{p}^{n}\cdot x=0\}\] to an injective resolution of \(R_{\mathfrak{p}}\). Observe that in \(\mathbf{D}(\operatorname{Mod}R)\) one has \[\operatorname{Loc}(\varGamma_{\mathfrak{p}}R)=\operatorname{Loc}(k(\mathfrak{ p}))\subseteq\operatorname{Loc}(R_{\mathfrak{p}})\,.\] The equality can be deduced from arguments by Neeman [55, Lemma 2.9 and proof of claim (1) on page 528], whilst the inclusion holds because \(k(\mathfrak{p})\) is an \(R_{\mathfrak{p}}\)-module. Applying the functor \[M\otimes_{R}^{\mathbb{L}}-\colon\mathbf{D}(\operatorname{Mod}R)\longrightarrow \mathbf{D}(\operatorname{Mod}A)\] yields the following: \[\operatorname{Loc}(\varGamma_{\mathfrak{p}}M)=\operatorname{Loc}(M_{k( \mathfrak{p})})\subseteq\operatorname{Loc}(M_{\mathfrak{p}})\,.\] Our hypothesis is that \(M_{k(\mathfrak{p})}\) is in \(\operatorname{Loc}(N_{k(\mathfrak{p})})\) for each \(\mathfrak{p}\). By the equalities above, this then yields that \[\operatorname{Loc}(\varGamma_{\mathfrak{p}}M)=\operatorname{Loc}(M_{k( \mathfrak{p})})\subseteq\operatorname{Loc}(N_{k(\mathfrak{p})})\subseteq \operatorname{Loc}(N_{\mathfrak{p}})\] for each \(\mathfrak{p}\in\operatorname{Spec}R\). Now we invoke the local to global principle to conclude that \(M\) is in \(\operatorname{Loc}(N)\) as desired. When the Krull dimension of \(R\) is finite the local to global principle is in Theorem 3.1 and Corollary 3.5 of [12]; the general case is due to Stevenson [70, Theorem 6.9]. **3.9 Corollary**.: _Let \(A\) be a finite projective \(R\)-algebra, and let \(M,N\) be \(A\)-complexes. When \(M\) and \(N\) are perfect, the following conditions are equivalent._ 1. \(M\in\operatorname{Thick}(N)\) _in_ \(\mathbf{D}(\operatorname{Mod}A)\)_;_ 2. \(M_{k(\mathfrak{p})}\in\operatorname{Thick}(N_{k(\mathfrak{p})})\) _in_ \(\mathbf{D}(\operatorname{Mod}A_{k(\mathfrak{p})})\) _for each_ \(\mathfrak{p}\in\operatorname{supp}_{R}A\)_._ Proof.: As in the proof of Theorem 3.8, the implication (1)\(\Rightarrow\)(2) is clear. As to the converse, since \(\operatorname{Thick}(N_{k(\mathfrak{p})})\subseteq\operatorname{Loc}(N_{k( \mathfrak{p})})\), the hypothesis and Theorem 3.8 yield that \(M\) is in \(\operatorname{Loc}(N)\). Since \(M\) and \(N\) are perfect and hence compact in \(\mathbf{D}(\operatorname{Mod}A)\), it follows that \(M\) is in \(\operatorname{Thick}(N)\), as claimed; see [51, Proposition 3.4.13]. As is explained in the next paragraph, the results above extend results of Hopkins and Neeman dealing with commutative noetherian rings. Set \(A\coloneqq R\). Then \(R_{k(\mathfrak{p})}=k(\mathfrak{p})\), the residue field of \(R\) at \(\mathfrak{p}\), and one has \(M_{k(\mathfrak{p})}=M\otimes_{R}^{\mathrm{L}}k(\mathfrak{p})\) for any \(R\)-complex \(M\). 
The _support_ of \(M\), denoted \(\operatorname{supp}_{R}M\), is the collection \(\mathfrak{p}\in\operatorname{Spec}R\) such that \(H^{*}(M_{k(\mathfrak{p})})\neq 0\). This coincides with the support of the \(R\)-module \(H^{*}(M)\) when the latter is finitely generated; see [30, SS2]. Given \(R\)-complexes \(M,N\), the condition that \(M_{k(\mathfrak{p})}\) is in \(\operatorname{Loc}(N_{k(\mathfrak{p})})\) for each \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\) is equivalent to: for each \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\) with \(H^{*}(M_{k(\mathfrak{p})})\neq 0\) it follows that \(H^{*}(N_{k(\mathfrak{p})})\neq 0\); equivalently, that \(\operatorname{supp}_{R}M\subseteq\operatorname{supp}_{R}N\). So Theorem 3.8 translates to \[M\in\operatorname{Loc}(N)\text{ in }\mathbf{D}(\operatorname{Mod}R)\quad \iff\quad\operatorname{supp}_{R}M\subseteq\operatorname{supp}_{R}N\,.\] This result is due to Neeman and is the key step in the classification of localising subcategories of \(\mathbf{D}(\operatorname{Mod}R)\) via subsets of \(\operatorname{Spec}R\); see [55, Theorem 2.8]. In the same vein, Corollary 3.9 translates to: When \(M,N\) are perfect complexes, \(\operatorname{supp}_{R}M\subseteq\operatorname{supp}_{R}N\) if and only if \(M\) is in \(\operatorname{Thick}(N)\), which leads to a classification of the thick subcategories of the perfect \(R\)-complexes, due to Hopkins [41, SS4]. Here is one simple application of the preceding results. Let \(Q\) be a finite quiver without oriented cycles. We consider the _path algebra_\(RQ\) which is, by the assumption on \(Q\), a finite projective, even free, \(R\)-algebra. A basis of \(RQ\) as an \(R\)-module is given by all paths in \(Q\), and the multiplication is induced by the composition of paths. Modules over the path algebra \(RQ\) identify with \(R\)-linear representations of the quiver \(Q\). When \(k\) is a field, \(kQ\) has finite global dimension, and there is a well developed theory of quiver representations. In particular, thick subcategories of \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}kQ)\) have been classified in terms of non-crossing partitions [42, 43]. The quiver \(Q\) determines a Coxeter group \(W=W(Q)\) and a Coxeter element \(c\in W\). We denote by \(\operatorname{NC}(W,c)\) the corresponding poset of _non-crossing partitions_. When the quiver \(Q\) is of Dynkin type, there is a well defined assignment \[\mathbf{D}^{\mathrm{b}}(\operatorname{mod}kQ)\supseteq\mathcal{C}\longmapsto \operatorname{cox}(\mathcal{C})\in\operatorname{NC}(W,c)\] providing an isomorphism between the lattice of thick subcategories of \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}kQ)\) and the poset \(\operatorname{NC}(W,c)\). Let \(R\) be a commutative noetherian ring that is regular. For each \(\mathfrak{p}\) in \(\operatorname{Spec}R\) the ring \(R_{\mathfrak{p}}Q\) has finite global dimension, because \(R_{\mathfrak{p}}\) does. Thus Theorem 3.7 yields that each \(M\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}RQ)\) is perfect. Given this observation, the fibre-wise criteria in Corollary 3.9 open a path to a classification of all thick subcategories of \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}RQ)\). Over a general commutative noetherian ring \(R\), one gets an approach to classify the thick subcategories of perfect complexes in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}RQ)\). ## 4. 
Fibre-wise tilting complexes This section and the next are motivated by the question whether a derived equivalence of finite \(R\)-algebras amounts to a derived equivalence of their fibres over \(\operatorname{Spec}R\). For this, we recall notions and results from derived Morita theory. ### Derived equivalences Let \(A\) be a finite \(R\)-algebra and \(T\) a complex in \(\mathbf{D}(\operatorname{Mod}A)\). The ring \[\operatorname{End}(T)\coloneqq\operatorname{End}_{\mathbf{D}(\operatorname{Mod }A)}(T)^{\operatorname{op}}\] admits a natural structure of a finite \(R\)-algebra. The complex \(T\) is _tilting_ if 1. \(\operatorname{Ext}_{A}^{i}(T,T)=0\) for \(i\neq 0\), and 2. \(\operatorname{Thick}(T)=\operatorname{Thick}(A)\) in \(\mathbf{D}(\operatorname{Mod}A)\). For such a complex the functor \[F\coloneqq\operatorname{RHom}_{A}(T,-)\colon\mathbf{D}^{\operatorname{b}}( \operatorname{mod}A)\xrightarrow{\sim}\mathbf{D}^{\operatorname{b}}( \operatorname{mod}\operatorname{End}(T))\] is an equivalence of triangulated categories. Vice versa, if \(B\) is a noetherian ring which is derived equivalent to \(A\), denoted by \(A\sim B\) in the following, then there is an isomorphism of rings \(B\cong\operatorname{End}(T)\) for a tilting complex \(T\) of \(A\). These statements follow from results of Rickard [61, 62]; see also Keller [48]. The class of finite projective \(R\)-algebras is not closed under derived equivalences. This phenomenon was studied by Konig and Zimmermann in [50]. It is related to tilting complexes that are not fibre-wise tilting. **4.1 Example**.: Let \(R\) be the polynomial ring \(k[c]\) in one indeterminate over a field \(k\) and \(\mathfrak{m}\) the maximal ideal \((c)\) of \(R\). The \(k\)-linear path algebra \(A\) of the two-cycle quiver is a finite free \(R\)-algebra: \[Q\colon\ 1\xleftarrow{\alpha}\xrightarrow[\beta]{\alpha}2\qquad\ \ A\coloneqq kQ\cong\begin{pmatrix}R&\mathfrak{m}\\ R&R\end{pmatrix}\subset\operatorname{Mat}_{2\times 2}(R)\,. \tag{4.2}\] Replacing the projective summand \(Ae_{2}\) of \(A\) with the simple module \(Ae_{1}/A\alpha\) yields a tilting module of \(A\) which has the projective resolution \[T\coloneqq(\,Ae_{2}\xrightarrow{\cdot\alpha}A\,)\,.\] Since \(A_{k(\mathfrak{m})}\cong kQ/(\alpha\beta,\beta\alpha)\), it follows that \(\operatorname{Ext}_{A_{k(\mathfrak{m})}}^{-1}(T_{k(\mathfrak{m})},T_{k( \mathfrak{m})})\neq 0\), and thus the complex \(T_{k(\mathfrak{m})}\) is not tilting. On the other hand, the quiver of the \(R\)-algebra \(B\coloneqq\operatorname{End}(T)\) has a sink: \[B\cong\begin{pmatrix}R&0\\ R/\mathfrak{m}&R/\mathfrak{m}\end{pmatrix}\cong kQ^{*}/I^{*}\quad\text{ for }\quad(Q^{*},I^{*})\colon\ x\xleftarrow{y}2\quad yx=0\,.\] Since \(Z(B)\cong R\) is a Dedekind domain and \(y\in B\) is a nonzero torsion element, 3.4 implies that \(B\) cannot admit the structure of a finite projective \(R\)-algebra. The last example motivates the following terminology. **4.3 Definition**.: Let \(A\) be a finite projective \(R\)-algebra. A complex \(T\) in \(\mathbf{D}(\operatorname{Mod}A)\) is called _fibre-wise tilting_ if the \(A_{k(\mathfrak{p})}\)-complex \(T_{k(\mathfrak{p})}\) is tilting for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\). In [62, Theorem 2.1] Rickard proved a result on derived equivalences under base change which implies the following. 
**4.4 Theorem**.: _Let \(A\) and \(B\) be finite \(R\)-algebras and \(R\to S\) be a ring morphism of commutative noetherian rings such that \(A\sim B\) and_ \[\operatorname{Tor}_{n}^{R}(A,S)=0=\operatorname{Tor}_{n}^{R}(B,S)\qquad\text{ for }n\geq 1.\] _Then \(A\otimes_{R}S\sim B\otimes_{R}S\). More precisely, if \(T\) is a tilting complex of \(A\) such that \(\operatorname{End}(T)\cong B\), then \(T\otimes_{R}^{\operatorname{L}}S\) is a tilting complex of \(A\otimes_{R}S\) such that there is an isomorphism \(\operatorname{End}(T\otimes_{R}^{\operatorname{L}}S)\cong B\otimes_{R}S\) of \(S\)-algebras._ One idea to prove Theorem 4.4 is to compare the cohomology of the complex \(X\coloneqq\operatorname{RHom}(T,T)\) with that of \(X\otimes_{R}^{\operatorname{L}}\Gamma\). This motivates the next considerations. **4.5**.: **Proposition**.: _For \(X\in\mathbf{D}^{\operatorname{b}}(\operatorname{mod}R)\) the following conditions are equivalent._ 1. \(H^{i}(X)=0\) _for_ \(i\neq 0\) _and the_ \(R\)_-module_ \(H^{0}(X)\) _is projective;_ 2. \(H^{i}(X_{k(\mathfrak{p})})=0\) _for_ \(i\neq 0\) _and for each_ \(\mathfrak{p}\in\operatorname{Spec}R\)_;_ 3. \(H^{i}(X_{k(\mathfrak{m})})=0\) _for_ \(i\neq 0\) _and for each_ \(\mathfrak{m}\in\max(\operatorname{Spec}R)\)_._ Proof.: (1)\(\Rightarrow\)(2) Since \(H^{i}(X)=0\) for \(i\neq 0\), one has that \(X\cong H^{0}(X)\) in \(\mathbf{D}(\operatorname{Mod}R)\). This gives, in \(\mathbf{D}(\operatorname{Mod}R)\), the first isomorphism below \[X_{k(\mathfrak{p})}\cong H^{0}(X)_{k(\mathfrak{p})}\cong H^{0}(X)\otimes_{R} k(\mathfrak{p})\,.\] The second one holds because \(H^{0}(X)\) is projective. Thus \(H^{i}(X_{k(\mathfrak{p})})=0\) for \(i\neq 0\). (3)\(\Rightarrow\)(1) Since the \(R\)-modules \(H^{i}(X)\) are finitely generated, it suffices to verify that for each maximal ideal \(\mathfrak{m}\) the \(R_{\mathfrak{m}}\)-module \(H^{i}(X)_{\mathfrak{m}}\) is zero for \(i\neq 0\) and free when \(i=0\). Replacing \(R\) and \(X\) by their localisation at \(\mathfrak{m}\), we can thus assume \(R\) is a local ring, with maximal ideal \(\mathfrak{m}\). The hypothesis is that \(H^{i}(X\otimes_{R}^{\operatorname{L}}k)=0\) for \(i\neq 0\), where \(k\coloneqq R/\mathfrak{m}\). Since \(X\) is in \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}R)\) there is a quasi-isomorphism \(F\xrightarrow{\sim}X\) with each \(F^{i}\) a finitely generated free \(R\)-module, zero for \(i\gg 0\), and differential satisfying \(d(F)\subseteq\mathfrak{m}F\); see [51, Corollary 4.3.19]. In particular \[H^{i}(X\otimes_{R}^{\operatorname{L}}k)\cong H^{i}(F\otimes_{R}k)=F^{i} \otimes_{R}k\,.\] Thus the hypothesis is equivalent to \(F^{i}=0\) for \(i\neq 0\), so that \(X\cong F^{0}\) in \(\mathbf{D}(\operatorname{Mod}R)\). This is the desired conclusion. ### Intermediate fibres Let \(\mathfrak{p}\) be a prime ideal of \(R\). In the next proposition, we call a commutative noetherian ring \(S(\mathfrak{p})\) an _intermediate ring_ if the natural ring morphism \(R\to k(\mathfrak{p})\) factors through \(S(\mathfrak{p})\). Classical examples are given by the local ring \(R_{\mathfrak{p}}\) or its completion \(\widehat{R}_{\mathfrak{p}}\) at its maximal ideal. 
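Besides these, the truncations \(R_{\mathfrak{p}}/\mathfrak{p}^{n}R_{\mathfrak{p}}\) and \(\widehat{R}_{\mathfrak{p}}/\mathfrak{p}^{n}\widehat{R}_{\mathfrak{p}}\), for \(n\geq 1\), are intermediate rings as well, since the map \(R\to k(\mathfrak{p})\) factors through each of them; intermediate fibres of this kind reappear in the lifting results of Section 5. 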
For a finite projective \(R\)-algebra \(A\) and a complex \(T\) in \(\mathbf{D}(\operatorname{Mod}A)\) we set \[A_{S(\mathfrak{p})}\coloneqq A\otimes_{R}S(\mathfrak{p})\quad\text{and} \quad T_{S(\mathfrak{p})}\coloneqq T\otimes_{R}^{\operatorname{L}}S(\mathfrak{ p})\,.\] The algebra \(A_{S(\mathfrak{p})}\) might be thought of as a thickened or intermediate fibre. Certain intermediate fibres will play a role in the lifting problems of the next section. The following statement is the main result of the present one. **4.6**.: **Proposition**.: _Let \(A\) be a finite projective \(R\)-algebra and fix \(T\) in \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\). The following conditions are equivalent._ 1. _The_ \(A\)_-complex_ \(T\) _is tilting and_ \(\operatorname{End}(T)\) _is projective as_ \(R\)_-module;_ 2. _For each_ \(\mathfrak{p}\) _in_ \(\operatorname{Spec}R\) _and for any intermediate ring_ \(S(\mathfrak{p})\) _the_ \(A_{S(\mathfrak{p})}\)_-complex_ \(T_{S(\mathfrak{p})}\) _is tilting and_ \(\operatorname{End}(T_{S(\mathfrak{p})})\) _is projective as_ \(S(\mathfrak{p})\)_-module;_ 3. _For each_ \(\mathfrak{p}\) _in_ \(\operatorname{Spec}R\) _there exists an intermediate ring_ \(S(\mathfrak{p})\) _such that the_ \(A_{S(\mathfrak{p})}\)_-complex_ \(T_{S(\mathfrak{p})}\) _is tilting and_ \(\operatorname{End}(T_{S(\mathfrak{p})})\) _is projective as_ \(S(\mathfrak{p})\)_-module;_ 4. _For each_ \(\mathfrak{p}\) _in_ \(\operatorname{Spec}R\) _the_ \(A_{k(\mathfrak{p})}\)_-complex_ \(T_{k(\mathfrak{p})}\) _is tilting._ Proof.: The implications (1)\(\Rightarrow\)(2) and (3)\(\Rightarrow\)(4) follow from Theorem 4.4. (4)\(\Rightarrow\)(1) Assume that \(T_{k(\mathfrak{p})}\) is tilting for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\). Then the \(A\)-complex \(T\) is perfect by Proposition 3.5. Set \(X\coloneqq\operatorname{RHom}_{A}(T,T)\), viewed as an \(R\)-complex. Since \(T\) is perfect, there is a natural isomorphism \[X_{k(\mathfrak{p})}=\operatorname{RHom}_{A}(T,T)\otimes_{R}^{\operatorname{L} }k(\mathfrak{p})\cong\operatorname{RHom}_{A_{k(\mathfrak{p})}}(T_{k(\mathfrak{ p})},T_{k(\mathfrak{p})}) \tag{4.7}\] in \(\mathbf{D}(\operatorname{Mod}R)\) for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\). Given this, (1) is readily deduced from Corollary 3.9 and Proposition 4.5. This proposition shows that the tilting property of a certain complex \(T\) of \(A\) with \(R\)-projective endomorphism ring is local with respect to any choice of intermediate fibres. In particular, fibre-wise tilting complexes are precisely the tilting complexes with \(R\)-projective endomorphism rings. The equivalence of (1) and (4) explains the simultaneous absence of these properties in Example 4.1. We will give another characterisation of fibre-wise tilting complexes in 6.14. _4.8 Remark_.: The tilting property is local in a classical sense. More precisely, for a finite \(R\)-algebra \(A\) and complex \(T\) in \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) the following are equivalent. 1. The \(A\)-complex \(T\) is tilting; 2. The \(A_{\mathfrak{p}}\)-complex \(T_{\mathfrak{p}}\) is tilting for each \(\mathfrak{p}\) in \(\operatorname{Spec}R\). If \(T\) is perfect, the above conditions are also equivalent to the following. 1. The \(A_{\mathfrak{m}}\)-complex \(T_{\mathfrak{m}}\) is tilting for each \(\mathfrak{m}\) in \(\max(\operatorname{Spec}R)\). 
The proof of these equivalences uses a result due to Iyama and Kimura [44, Theorem 2.8 (a)] and a characterisation of tilting complexes by Rickard [63, Lemma 1.1]. The equivalence (1)\(\Leftrightarrow\)(2) holds for silting complexes as well [44, Theorem 2.8 (b)]. ## 5. Lifting problems Let \(A\) be a finite \(R\)-algebra. Since the residue field \(k(\mathfrak{p})\) of each local ring \(R_{\mathfrak{p}}\) is isomorphic to that of its completion \(\widehat{R}_{\mathfrak{p}}\), there are natural morphisms of rings \[R\longrightarrow R_{\mathfrak{p}}\longrightarrow\widehat{R}_{\mathfrak{p}}\longrightarrow k(\mathfrak{p})\,.\] Rickard's theorem 4.4 yields a global-to-local picture on derived equivalences. _5.1 Corollary_.: Let \(A\) and \(B\) be finite projective \(R\)-algebras such that \(A\sim B\). For each \(\mathfrak{p}\) in \(\operatorname{Spec}R\) and any intermediate ring \(S(\mathfrak{p})\) it follows that \(A_{S(\mathfrak{p})}\sim B_{S(\mathfrak{p})}\). In particular, the following implications hold. \[\begin{array}{ccccccc}\text{global}&&\text{local}&&\text{complete local}&&\text{finite dimensional}\\ A\sim B&\implies&A_{\mathfrak{p}}\sim B_{\mathfrak{p}}\ \forall\mathfrak{p}&\implies&\widehat{A}_{\mathfrak{p}}\sim\widehat{B}_{\mathfrak{p}}\ \forall\mathfrak{p}&\overset{(\star)}{\implies}&A_{k(\mathfrak{p})}\sim B_{k(\mathfrak{p})}\ \forall\mathfrak{p}\,,\end{array}\] where "\(\forall\mathfrak{p}\)" abbreviates "for any prime ideal \(\mathfrak{p}\) of the ring \(R\)". In the present section, we focus mainly on the converse of the third implication, which might be viewed as the problem to lift derived equivalences of algebras. This problem can be solved in a certain setup where \(R\) is a Dedekind domain and the fibres \(A_{k(\mathfrak{p})}\) and \(B_{k(\mathfrak{p})}\) are replaced by thickened versions (Theorem 5.6). ### Lifting tilting complexes and isomorphisms of algebras In [63, SS3] Rickard proved lifting results for tilting complexes. _5.2 Theorem_.: Let \(\Lambda\) be a finite free \(S\)-algebra over a complete local ring \(S\) with residue field \(k\) and let \(T\) be a tilting complex of \(\Lambda\otimes_{S}k\). Then there is precisely one complex \(L\) in \(\mathbf{D}(\operatorname{Mod}\Lambda)\) up to isomorphism such that \[L\otimes_{S}^{\mathrm{L}}k\cong T\quad\text{in}\quad\mathbf{D}(\operatorname{Mod}(\Lambda\otimes_{S}k))\,.\] Moreover, the complex \(L\) is tilting, \(\operatorname{End}(L)\) is a finite free \(S\)-algebra and there is an isomorphism of \(k\)-algebras \(\operatorname{End}(L)\otimes_{S}k\cong\operatorname{End}(T)\). This theorem yields a partial converse of (\(\star\)) in Corollary 5.1: if \(A_{k(\mathfrak{p})}\sim B_{k(\mathfrak{p})}\), then there is a finite free \(\widehat{R}_{\mathfrak{p}}\)-algebra \(\Gamma\) with \(\widehat{A}_{\mathfrak{p}}\sim\Gamma\) and \(\Gamma\otimes_{\widehat{R}_{\mathfrak{p}}}k(\mathfrak{p})\cong B_{k(\mathfrak{p})}\). So the problem to lift derived equivalences is reduced to a lifting problem for isomorphisms of algebras. However, there are non-isomorphic commutative local noetherian rings with isomorphic residue fields. The converse of (\(\star\)) fails also for certain blocks of group algebras [64, p.192]. Here is another quiver-theoretic example. **5.3 Example**.: Let \(n\geq 1\), and \(A\) and \(B\) denote the following matrix rings: \[A\coloneqq\begin{pmatrix}k[c]&(c)\\ k[c]&k[c]\end{pmatrix}\supset B\coloneqq\begin{pmatrix}k[c]&(c^{n})\\ (c^{n})&k[c]\end{pmatrix}\,.\] Similar to \(A\), the subring \(B\) is also a finite free \(R\)-algebra whose centre is isomorphic to \(R\coloneqq k[c]\). 
In contrast to the quiver of \(A\), the quiver of \(B\) has relations: The fibres \(A_{k(\mathfrak{p})}\) and \(B_{k(\mathfrak{p})}\) at a prime ideal \(\mathfrak{p}\) of \(R\) are related as follows. 1. If \(c\notin\mathfrak{p}\), it holds that \((c)_{\mathfrak{p}}=(c^{n})_{\mathfrak{p}}=R_{\mathfrak{p}}\) and there are isomorphisms \[A_{\mathfrak{p}}\cong\operatorname{Mat}_{2\times 2}(R_{\mathfrak{p}}) \cong B_{\mathfrak{p}}\quad\text{and}\quad A_{k(\mathfrak{p})}\cong \operatorname{Mat}_{2\times 2}(R_{k(\mathfrak{p})})\cong B_{k(\mathfrak{p})}\] of \(R_{\mathfrak{p}}\)-algebras and \(k(\mathfrak{p})\)-algebras, respectively. 2. If \(c\in\mathfrak{p}\), it holds that \(\mathfrak{p}=(c)\eqqcolon\mathfrak{m}\). Since \(R/\mathfrak{m}\cong k(\mathfrak{m})\), there are isomorphisms of \(k(\mathfrak{m})\)-algebras \[A_{k(\mathfrak{m})}\cong\begin{pmatrix}k[c]/(c)&(c)/(c^{2})\\ k[c]/(c)&k[c]/(c)\end{pmatrix}\cong\begin{pmatrix}k[c]/(c)&(c^{n})/(c^{n+1})\\ (c^{n})/(c^{n+1})&k[c]/(c)\end{pmatrix}\cong B_{k(\mathfrak{m})}\,.\] In other terms, both algebras \(A_{k(\mathfrak{m})}\) and \(B_{k(\mathfrak{m})}\) are isomorphic to the path algebras of the two-cycle quiver from (4.2) with relations \(\alpha\beta=\beta\alpha=0\). Since \(\Lambda\coloneqq\widehat{A}_{\mathfrak{m}}\) is a finite \(\widehat{R}_{\mathfrak{m}}\)-algebra over the complete local ring \(\widehat{R}_{\mathfrak{m}}\), it is semiperfect, and thus any simple \(\Lambda\)-module is a direct summand of \(\Lambda/\operatorname{rad}(\Lambda)\), see [53, Example 23.3 and Theorem 25.3]. It follows that \[\operatorname{gl}\dim\Lambda=\operatorname{proj}\dim_{\Lambda}\Lambda/ \operatorname{rad}(\Lambda)=1\] where the first equality holds according to [45, Proof of Proposition 2.2] and the second follows by computation. On the other hand, the projective dimension of any simple \(\widehat{B}_{\mathfrak{m}}\)-module, and thus the global dimension of \(\widehat{B}_{\mathfrak{m}}\) are both infinite. So \(\widehat{A}_{\mathfrak{m}}\) and \(\widehat{B}_{\mathfrak{m}}\) are not derived equivalent. Together with Corollary 5.1, this shows that despite the fact that the fibres \(A_{k(\mathfrak{p})}\) and \(B_{k(\mathfrak{p})}\) are isomorphic at each ideal \(\mathfrak{p}\in\operatorname{Spec}R\), the algebras \(A\) and \(B\) are not derived equivalent. Varying the parameter \(n\geq 1\) of the algebra \(B\) yields a family of algebras, each of which is fibre-wise but not globally derived equivalent to \(A\). _5.4 Remark_.: In [26, Lemma 8.1] Eisele describes sufficient conditions (\(\Lambda\)1)-(\(\Lambda\)5) to lift the isomorphism of \(k(\mathfrak{m})\)-algebras \(A_{k(\mathfrak{m})}\cong B_{k(\mathfrak{m})}\) to an isomorphism \(\widehat{A}_{\mathfrak{m}}\cong\widehat{B}_{\mathfrak{m}}\) of \(\widehat{R}_{\mathfrak{m}}\)-algebras. In Example 5.3, property (\(\Lambda\)3) fails. Eisele has also studied the uniqueness of lifts in connection with derived equivalences [25]. In [40], Higman proved that it is possible to lift isomorphisms of suitably large quotients in the following setup. **5.5 Theorem**.: _Let \(\Lambda\) and \(\Gamma\) be finite free \(S\)-algebras over a complete discrete valuation ring \(S\) such that the \(Q(S)\)-algebra \(\Lambda\otimes_{S}Q(S)\) is separable. Let \(\mathfrak{m}\) denote the maximal ideal of \(S\) and set \(S_{n}\coloneqq S/\mathfrak{m}^{n}\) for any \(n\geq 1\). 
Then there is an integer \(\ell\coloneqq\ell(\Lambda,\mathfrak{m})\geq 1\) such that any isomorphism \(\Lambda\otimes_{S}S_{\ell}\xrightarrow{\sim}\Gamma\otimes_{S}S_{\ell}\) of \(S_{\ell}\)-algebras lifts to an isomorphism \(\Lambda\xrightarrow{\sim}\Gamma\) of \(S\)-algebras. _ ### Lifting derived equivalences We return to our initial setup of finite projective \(R\)-algebras \(A\) and \(B\). Instead of the fibre \(A_{k(\mathfrak{p})}\) at a prime ideal \(\mathfrak{p}\) of \(R\) we consider the possibly bigger intermediate quotients \[A_{\mathfrak{p},n}\coloneqq A\otimes_{R}\widehat{R}_{\mathfrak{p}}/\mathfrak{p}^{n}\widehat{R}_{\mathfrak{p}}\quad\text{and}\quad B_{\mathfrak{p},n}\coloneqq B\otimes_{R}\widehat{R}_{\mathfrak{p}}/\mathfrak{p}^{n}\widehat{R}_{\mathfrak{p}}\] for certain integers \(n\geq 1\). Rickard's theorems can be combined with Higman's result in order to lift derived equivalences of certain intermediate fibres. **5.6 Theorem**.: _Let \(\mathfrak{p}\) be a nonzero prime ideal of \(R\), and \(A\) and \(B\) be finite projective \(R\)-algebras over a Dedekind domain \(R\) such that the \(Q(S)\)-algebra \(\widehat{B}_{\mathfrak{p}}\otimes_{S}Q(S)\) is separable, where \(S\coloneqq\widehat{R}_{\mathfrak{p}}\). Let \(\ell\) denote the number \(\ell(\widehat{B}_{\mathfrak{p}},\mathfrak{p}S)\) provided by Theorem 5.5. Then \(\widehat{A}_{\mathfrak{p}}\sim\widehat{B}_{\mathfrak{p}}\) if and only if there is an integer \(n\geq\ell\) such that \(A_{\mathfrak{p},n}\sim B_{\mathfrak{p},n}\)._ Proof.: Let \(n\geq 1\). The 'only if'-implication follows from Theorem 4.4. To show the converse, set \(\Lambda\coloneqq\widehat{A}_{\mathfrak{p}}\), \(S_{n}\coloneqq S/\mathfrak{p}^{n}S\) and \(\Lambda_{n}\coloneqq\Lambda\otimes_{S}S_{n}\). Let \(T\) be a tilting complex of \(\Lambda_{n}=A_{\mathfrak{p},n}\) with \(\operatorname{End}(T)\cong B_{\mathfrak{p},n}\). Theorem 4.4 implies that \(T_{1}\coloneqq T\otimes_{S_{n}}^{\operatorname{L}}S_{1}\) is a tilting complex of \(\Lambda_{1}\) such that \(\operatorname{End}(T_{1})\cong B_{\mathfrak{p},1}\). Because of Theorem 5.2, \(T_{1}\) lifts to a tilting complex \(L\) of \(\Lambda\) with \(S\)-free endomorphism ring \(\Gamma\). Theorem 4.4 yields that \(L_{n}\coloneqq L\otimes_{S}^{\operatorname{L}}S_{n}\) is a tilting complex of \(\Lambda_{n}\) with \(\operatorname{End}(L_{n})\cong\Gamma\otimes_{S}S_{n}\eqqcolon\Gamma_{n}\). Since \(L_{n}\) and \(T\) are both lifts of \(T_{1}\) to \(\mathbf{D}(\operatorname{Mod}\Lambda_{n})\), it follows that \(L_{n}\cong T\) by the uniqueness statement in Theorem 5.2. Thus, there are isomorphisms of \(S_{n}\)-algebras \(\Gamma_{n}\cong\operatorname{End}(L_{n})\cong\operatorname{End}(T)\cong B_{\mathfrak{p},n}\). Applying the argument above to an integer \(n\geq\ell\), Theorem 5.5 yields an isomorphism \(\Gamma\cong\widehat{B}_{\mathfrak{p}}\) of \(S\)-algebras. This shows that \(\widehat{A}_{\mathfrak{p}}=\Lambda\sim\Gamma\cong\widehat{B}_{\mathfrak{p}}\). The last statement can be viewed as a first step to reduce derived equivalence problems of Gorenstein algebras to finite dimensional algebras. **5.7 Remark**.: Rickard's theorems and the last proof yield a bijection between tilting complexes of \(\widehat{A}_{\mathfrak{p}}\) with \(\widehat{R}_{\mathfrak{p}}\)-free endomorphism rings and those of any intermediate quotient \(A_{\mathfrak{p},n}\). This bijection extends to certain settings without the \(R\)-projectivity assumption on \(A\) as well as to silting complexes [36], see also [26]. ## 6. Gorenstein algebras An associative ring \(A\) is _Iwanaga-Gorenstein_ if it is noetherian on both sides, and both \(\operatorname{inj\,dim}A\) and \(\operatorname{inj\,dim}A^{\operatorname{op}}\) are finite. 
Zaks [73] proved that in this case \(\operatorname{inj\,dim}A=\operatorname{inj\,dim}A^{\operatorname{op}}\); see also [51, Lemma 6.2.1]. **6.1 Definition**.: Following [46] an associative ring \(A\) is a _Gorenstein \(R\)-algebra_ if it satisfies the following conditions. 1. \(A\) is a finite projective \(R\)-algebra; 2. \(A_{\mathfrak{p}}\) is Iwanaga-Gorenstein for each \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\). Here \(\operatorname{supp}_{R}A\) denotes the support of \(A\) as an \(R\)-module; see 2.3. It is immediate from the definition that \(A\) is a Gorenstein \(R\)-algebra if and only if \(A^{\operatorname{op}}\) is a Gorenstein \(R\)-algebra. **6.2 Example**.: Any commutative Gorenstein ring \(R\) is Gorenstein as an \(R\)-algebra. Any algebra that is finite dimensional over a field \(k\) and Iwanaga-Gorenstein is clearly a Gorenstein \(k\)-algebra. Let \(R\) be commutative Gorenstein ring. For any finite group \(G\), the group algebra \(RG\) is a Gorenstein \(R\)-algebra. This can be checked directly, or, better still, it follows from Theorem 6.8. In the same vein, any finitely generated exterior algebra over \(R\), or any matrix ring over \(R\), is a Gorenstein \(R\)-algebra; see also Theorem 6.12. In all the examples above we started with \(R\) being Gorenstein. This is a necessity, because of the result below from [46, Lemma 4.1]. Its converse need not hold: consider the case of an algebra finite dimensional over a field. **6.3 Lemma**.: _When \(A\) is a Gorenstein \(R\)-algebra, the ring \(R_{\mathfrak{p}}\) is Gorenstein for each \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\)._ Proof.: Fix a \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\) and let \(k(\mathfrak{p})\) denote the residue field at \(\mathfrak{p}\). One has \[\operatorname{Ext}^{i}_{R_{\mathfrak{p}}}(k(\mathfrak{p}),A_{\mathfrak{p}}) \cong\operatorname{Ext}^{i}_{A_{\mathfrak{p}}}(A_{\mathfrak{p}}\otimes_{R_{ \mathfrak{p}}}k(\mathfrak{p}),A_{\mathfrak{p}})=0\qquad\text{for $i\gg 0$}\,,\] where the first isomorphism holds by adjunction, and the second one holds because \(A_{\mathfrak{p}}\) is Iwanaga-Gorenstein. Since the \(R_{\mathfrak{p}}\)-module \(A_{\mathfrak{p}}\) is finite free and nonzero--this is where we use the fact \(\mathfrak{p}\) is in \(\operatorname{supp}_{R}A\)--it follows that \[\operatorname{Ext}^{i}_{R_{\mathfrak{p}}}(k(\mathfrak{p}),R_{\mathfrak{p}})= 0\qquad\text{for $i\gg 0$}\,.\] Thus the local ring \(R_{\mathfrak{p}}\) is Gorenstein; see 2.2. The argument above only uses the hypothesis that \(A_{\mathfrak{p}}\) has finite injective dimension on one side. This observation will be used in the proof of Theorem 6.16. **6.4**.: As explained in 3.4, when \(A\) is a finite projective \(R\)-algebra one can drop down to a ring factor of \(R\) and ensure \(A\) is also faithful as an \(R\)-module, equivalently, that \(\operatorname{supp}_{R}A=\operatorname{Spec}R\). In this case when \(A\) is a Gorenstein \(R\)-algebra, the ring \(R\) is Gorenstein, by Lemma 6.3. **Dependence on base**.: Next we discuss the dependence of the Gorenstein property on the ring \(R\). The following result, essentially contained in [46, Theorem 4.6], is the key. **6.5 Theorem**.: _Let \(R\) be a commutative noetherian ring and \(A\) a finite projective \(R\)-algebra. 
Then \(A\) is Gorenstein as an \(R\)-algebra if and only if_ \[\operatorname{Ext}^{i}_{A}(M,A)=0=\operatorname{Ext}^{i}_{A^{\operatorname{op }}}(N,A)\qquad\text{for $i\gg 0$}\] _for each \(M\in\operatorname{mod}A\) and \(N\in\operatorname{mod}A^{\operatorname{op}}\)._ Proof.: In view of the discussion in 6.4, one can assume \(A\) is also faithful as an \(R\)-module. Then the forward implication is part of [46, Theorem 4.6]. By the same token, for the converse statement it suffices to prove that the stated vanishing of \(\operatorname{Ext}\) modules implies that the ring \(R\) is Gorenstein. The argument for this is basically in the proof of Lemma 6.3. Let \(\mathfrak{p}\) be a prime ideal of \(R\). Then for \(i\gg 0\) one has \[\operatorname{Ext}_{R_{\mathfrak{p}}}^{i}(k(\mathfrak{p}),A_{\mathfrak{p}}) \cong\operatorname{Ext}_{R}^{i}(R/\mathfrak{p},A)_{\mathfrak{p}}\cong \operatorname{Ext}_{A}^{i}(A\otimes_{R}R/\mathfrak{p},A)_{\mathfrak{p}}=0\] where the first isomorphism holds since localisation is flat and the second one is by adjunction. The equality is by hypothesis, which applies as \(A\otimes_{R}R/\mathfrak{p}\) is in \(\operatorname{mod}A\). Since \(A\) is a finite projective \(R\)-module, \(A_{\mathfrak{p}}\) is a finite free \(R_{\mathfrak{p}}\)-module; it is non-zero because \(A\) is faithful over \(R\). Thus \(\operatorname{Ext}_{R_{\mathfrak{p}}}^{i}(k(\mathfrak{p}),R_{\mathfrak{p}})=0\) for \(i\gg 0\), so that \(R_{\mathfrak{p}}\) is Gorenstein. Since this holds for any \(\mathfrak{p}\) in \(\operatorname{Spec}R\), the ring \(R\) is Gorenstein. Since the vanishing of \(\operatorname{Ext}\) modules in the statement above has nothing to do with the ring \(R\), the result above has the following consequence. **6.6 Corollary**.: _Let \(R\) and \(S\) be commutative noetherian rings, and \(A\) an associative ring that is finite projective over \(R\) and over \(S\). Then \(A\) is Gorenstein as an \(R\)-algebra if and only if it is Gorenstein as an \(S\)-algebra. _ **Dualising bimodule**.: Let \(A\) be a finite projective \(R\)-algebra. The \(A\)-bimodule \[\omega_{A/R}\coloneqq\operatorname{Hom}_{R}(A,R)\] is the _dualising bimodule_ of the \(R\)-algebra \(A\). The following characterisation of the Gorenstein property is contained in [46, Theorem 4.6]. Its proof uses the result of Bass and Murthy recalled in 3.7. **6.7 Theorem**.: _Let \(R\) be a commutative Gorenstein ring and \(A\) a finite projective \(R\)-algebra. Then \(A\) is a Gorenstein \(R\)-algebra if and only if the \(A\)-bimodule \(\omega_{A/R}\) is perfect on both sides. _ When \(A\) is commutative, the forward direction of the preceding statement can be strengthened to: the \(A\)-module \(\omega_{A/R}\) is projective. This is one of the special features of commutative algebras. As explained in Section 3, a finite projective \(R\)-algebra \(A\) can be viewed as a family of finite dimensional algebras, parameterised by \(\operatorname{Spec}R\). The result below lends further credence to this perspective. **6.8 Theorem**.: _Let \(A\) be a finite projective \(R\)-algebra. Then \(A\) is a Gorenstein \(R\)-algebra if and only if \(R_{\mathfrak{p}}\) is Gorenstein and \(A_{k(\mathfrak{p})}\) is Iwanaga-Gorenstein for each \(\mathfrak{p}\in\operatorname{supp}_{R}A\); equivalently, for each \(\mathfrak{m}\in\max(\operatorname{supp}_{R}A)\)._ Proof.: Fix \(\mathfrak{p}\) in \(\operatorname{supp}_{R}A\). 
The hypothesis that \(A\) is finite projective over \(R\) yields \[\left(\omega_{A/R}\right)_{k(\mathfrak{p})}=\operatorname{Hom}_{R}(A,R) \otimes_{R}^{\operatorname{L}}k(\mathfrak{p})\xrightarrow{\sim}\operatorname {Hom}_{k(\mathfrak{p})}(A_{k(\mathfrak{p})},k(\mathfrak{p}))=\omega_{A_{k( \mathfrak{p})}/k(\mathfrak{p})} \tag{6.9}\] as \(A_{k(\mathfrak{p})}\)-bimodules. The stated result is immediate from the characterisation of the Gorenstein property in terms of the dualising bimodule; see Theorem 6.7 and Proposition 3.5. **6.10 Corollary**.: _If \(A\) and \(B\) are Gorenstein \(R\)-algebras, then so is \(A\otimes_{R}B\)._ Proof.: Given Theorem 6.8, it suffices to verify that when \(k\) is a field and \(A\) and \(B\) are finite dimensional \(k\)-algebra that are Iwanaga-Gorenstein, then so is \(A\otimes_{k}B\). This is well-known; see, for instance, [51, Proposition 6.2.6]. It is known that the dualising bimodule \(\omega_{A/k}\) of a finite dimensional \(k\)-algebra \(A\) is already projective on both sides if it is projective on one side, see, for example, [69, Propositions I.8.16 (i) and II.7.6]. This fact extends to finite projective \(R\)-algebras. **6.11 Lemma**.: _Let \(A\) be a finite projective \(R\)-algebra. Then \(\omega_{A/R}\) is projective as an \(A\)-module if and only if \(\omega_{A/R}\) is projective as an \(A^{\mathrm{op}}\)-module._ Proof.: Because of the \(R\)-algebra isomorphism \((A^{\mathrm{op}})^{\mathrm{op}}\cong A\), it suffices to show the 'only if'-implication. Assume that \(M\coloneqq\omega_{A/R}\) is projective as an \(A\)-module. Let \(\mathfrak{p}\) be a prime ideal of \(R\). As \(A\) is finite projective as \(R\)-module, so is \(M\). Using Lemma 3.6 and that the isomorphism in (6.9) is \(A_{k(\mathfrak{p})}\)-linear, it follows that the \(A_{k(\mathfrak{p})}\)-module \(N\coloneqq\omega_{A_{k(\mathfrak{p})}/k(\mathfrak{p})}\) is projective. Since \(A_{k(\mathfrak{p})}\) is a finite dimensional \(k(\mathfrak{p})\)-algebra, \(N\) is also projective as \((A_{k(\mathfrak{p})})^{\mathrm{op}}\)-module. Using that the isomorphism in (6.9) is also \((A_{k(\mathfrak{p})})^{\mathrm{op}}\)-linear together with the isomorphism \((A_{k(\mathfrak{p})})^{\mathrm{op}}\cong(A^{\mathrm{op}})_{k(\mathfrak{p})}\) of \(k(\mathfrak{p})\)-algebras, it holds that \(M_{k(\mathfrak{p})}\) is projective as \((A^{\mathrm{op}})_{k(\mathfrak{p})}\)-module for any \(\mathfrak{p}\in\mathrm{Spec}\,R\). As the \(R\)-algebra \(A^{\mathrm{op}}\) and the \(A^{\mathrm{op}}\)-module \(M\) are both finite projective over \(R\), another application of Lemma 3.6 implies that \(M\) is projective over \(A^{\mathrm{op}}\). ### Derived Morita invariance Here is our main result on the Morita invariance of the Gorenstein property. Fibre-wise tilting objects are introduced in Definition 4.3. **6.12 Theorem**.: _Let \(A\) and \(B\) be finite projective \(R\)-algebras which are derived equivalent. Then \(A\) is a Gorenstein \(R\)-algebra if and only if \(B\) is a Gorenstein \(R\)-algebra._ Proof.: Assume that the \(R\)-algebra \(A\) is Gorenstein. Let \(T\) be a tilting complex with \(\mathrm{End}(T)\cong B\). Consider the equivalence \[F\coloneqq\mathrm{RHom}_{A}(T,-)\colon\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A) \xrightarrow{\sim}\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,B).\] Fix \(M\) in \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\). 
Since \(T\) and \(A\) generate the same thick subcategory in \(\mathbf{D}(\mathrm{Mod}\,A)\) one gets the first equivalence below: \[\mathrm{Ext}^{i}_{A}(M,A)=0\quad\text{for $i\gg 0$}\Longleftrightarrow\,\mathrm{Ext}^{i}_{A}(M,T)=0\quad\text{for $i\gg 0$}\] \[\Longleftrightarrow\,\mathrm{Ext}^{i}_{B}(F(M),B)=0\quad\text{for $i\gg 0$}\] The second one holds because \(F(T)\cong B\) and \(F\) is an equivalence of categories. Since \(F\) is an equivalence, we conclude that \(\mathrm{Ext}^{*}_{A}(M,A)\) is bounded for each \(M\) in \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\) if and only if \(\mathrm{Ext}^{*}_{B}(N,B)\) is bounded for each \(N\) in \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,B)\). Since \(T\) is a tilting complex in \(\mathbf{D}(\mathrm{Mod}\,A)\), its dual \(\mathrm{RHom}_{A}(T,A)\) is tilting in \(\mathbf{D}(\mathrm{Mod}\,A^{\mathrm{op}})\), with endomorphism algebra \(B^{\mathrm{op}}\); see [51, Remark 9.2.7]. Thus the conclusion above applies also to \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A^{\mathrm{op}})\) and \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,B^{\mathrm{op}})\). Theorem 6.5 now yields that \(B\) is Gorenstein. Interchanging \(A\) and \(B\), the converse follows by the same arguments. A (classical) Morita equivalence is fibre-wise tilting, so Proposition 4.6 and the preceding result yield: **6.13 Corollary**.: _Let \(A\) be a Gorenstein \(R\)-algebra. If \(B\) is Morita equivalent to \(A\), then \(B\) is a Gorenstein \(R\)-algebra as well. _ **6.14**.: Let \(A\) be a finite \(R\)-projective algebra and \(T\) a tilting complex of \(A\). By [1, Corollary 2.3, Lemma 4.1 (3)] the \(R\)-algebra \(\mathrm{End}(T)\) is projective as \(R\)-module if and only if \(\mathrm{Ext}^{i}_{A}(T,\omega_{A/R}\otimes_{A}^{\mathrm{L}}T)=0\) for any \(i>0\). In particular, \(T\) is fibre-wise tilting if \(T\cong\omega_{A/R}\otimes_{A}^{\mathrm{L}}T\) in \(\mathbf{D}(\mathrm{Mod}\,A)\). A finite \(R\)-algebra \(A\) is _symmetric_ if there is an isomorphism \(\omega_{A/R}\cong A\) of \(A\)-bimodules. If \(R\) is Gorenstein, a finite projective symmetric \(R\)-algebra is Gorenstein by Theorem 6.7. By a theorem of Abe and Hoshino [1, Theorem 4.3], symmetric finite projective \(R\)-algebras, and thus symmetric Gorenstein \(R\)-algebras because of 6.12, are closed under derived equivalences. The same is true for symmetric finite \(R\)-algebras over a Gorenstein ring \(R\) satisfying certain depth conditions [45, Theorems 3.3, 3.1(1)]. **Cohomologically noetherian rings**.: Let \(A\) be a noetherian ring. As in [11, SS4], the graded-centre of \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\) is denoted \(Z^{*}(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A))\). Following Avramov and Iyengar, we say the ring \(A\) is _cohomologically noetherian_ if there exists a graded-commutative ring \(S\) and a homomorphism of graded rings \(S\to Z^{*}(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A))\) such that \(\mathrm{Ext}^{*}_{A}(M,N)\) is noetherian as an \(S\)-module for all \(M,N\) in \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\). In particular \(A\) is noetherian as \(S^{0}\)-module, and hence \(Z(A)\) is a noetherian ring, and \(A\) is finitely generated as a \(Z(A)\)-module; this follows, for example, from work of Formanek [28]. **6.16 Theorem**.: _Let \(A\) be a finite projective \(R\)-algebra. 
If \(A\) is cohomologically noetherian (with respect to \(S\)), then so is \(A^{\mathrm{op}}\) (also with respect to \(S\)), and the \(R\)-algebra \(A\) is Gorenstein._ Proof.: We can assume that \(R\) is a subring of the centre of \(A\); see the discussion in 3.4. For any \(M\) in \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\) the action of \(S\) on \(\mathrm{Ext}^{*}_{A}(M,A)\) factors through \(\mathrm{Ext}^{*}_{A}(A,A)=A\), so the cohomologically noetherian condition implies \[\mathrm{Ext}^{i}_{A}(M,A)=0\qquad\text{for $i\gg 0$}\,.\] Now an argument akin to the one in proof of Lemma 6.3 yields that the ring \(R\) is Gorenstein. Here are details: We wish to verify that \(R_{\mathfrak{m}}\) is Gorenstein for each maximal ideal \(\mathfrak{m}\) of \(R\). With \(k\) denoting the residue field at \(\mathfrak{m}\), from the preceding discussion, for \(i\gg 0\) one gets the equality below \[\mathrm{Ext}^{i}_{R_{\mathfrak{m}}}(k,A_{\mathfrak{m}})\cong\mathrm{Ext}^{i}_ {R}(k,A)\cong\mathrm{Ext}^{i}_{A}(A\otimes_{R}k,A)=0\,.\] The first isomorphism holds because \(\mathrm{Ext}^{i}_{R}(k,A)\) is annihilated by \(\mathfrak{m}\) and so it is already \(\mathfrak{m}\)-local. Since \(A_{\mathfrak{m}}\) is a non-zero free \(R_{\mathfrak{m}}\)-module of finite rank, it follows that \(\mathrm{Ext}^{i}_{R_{\mathfrak{m}}}(k,R_{\mathfrak{m}})=0\) for \(i\gg 0\), and hence \(R_{\mathfrak{m}}\) is Gorenstein; see 2.1. Given that \(R\) is Gorenstein, Theorem 2.7 yields that the functor \[\mathrm{RHom}_{R}(-,R)\colon\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A) \longrightarrow\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A^{\mathrm{op}})\] is a (contravariant) equivalence of categories; it is its own quasi-inverse. It follows from this that \(A^{\mathrm{op}}\) is cohomologically noetherian with respect to \(S\), and hence as before that \(\mathrm{Ext}^{i}_{A^{\mathrm{op}}}(N,A)=0\) for each \(N\) in \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A^{\mathrm{op}})\) and \(i\gg 0\). Now Theorem 6.5 yields that the \(R\)-algebra \(A\) is Gorenstein. **Example**.: Examples of rings that are cohomologically noetherian include group algebras \(kG\), where \(k\) is a field and \(G\) is a finite group or even a finite (super) group scheme. In these cases the ring \(S\) that witnesses this property is the cohomology ring of \(G\). More generally, any finite dimensional algebra \(A\) over a field \(k\) satisfying the \(\mathrm{Fg}\) property, in the sense of Erdmann, Holloway, Snashall, Solberg, Taillefer [27], is cohomologically noetherian, with \(S\) the Hochschild cohomology ring of \(A\) over \(k\). ### The work of Buchweitz Let \(R\) be a commutative noetherian ring. In Chapter 7 of [17] Buchweitz studies maps of rings \(\varphi\colon R\to A\) satisfying the following properties. 1. \(A\) is a finite \(R\)-algebra, of finite projective dimension as an \(R\)-module; 2. \(A\) is Iwanaga-Gorenstein ring. Condition (1) is weaker than the corresponding condition defining a Gorenstein algebra 6.1, whereas (2) is stronger. It seems plausible to develop a theory of Gorenstein algebras along the lines presented above and in [46], where, instead of requiring that \(A\) is a projective \(R\)-algebra, it is allowed to be of finite projective dimension over \(R\), as in Buchweitz's definition. In that context, the fibres \(A_{k(\mathfrak{p})}\) would no longer be rings, but rather differential graded algebras, finite dimensional over the field \(k(\mathfrak{p})\). 
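A minimal commutative illustration of the last point (a toy case, not taken from [17] or [46]): take \(R=k[c]\) and \(A\coloneqq R/(c)\cong k\), a finite \(R\)-algebra of projective dimension one which is not projective over \(R\). At the maximal ideal \(\mathfrak{m}=(c)\) the derived fibre is computed by the Koszul complex, \[A\otimes_{R}^{\operatorname{L}}k(\mathfrak{m})\simeq\big(0\to k\xrightarrow{\ 0\ }k\to 0\big)\,,\] a differential graded algebra, two dimensional over \(k(\mathfrak{m})=k\), whose cohomology is the exterior algebra on a single generator rather than an ordinary ring.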
One would have to contend with not-necessarily-commutative Gorenstein dg algebras, which have been studied by Frankild and Jorgensen [31], and also by Dwyer, Greenlees, and Iyengar [24]. One motivation for pursuing such a project is that the proposed definition captures some natural families of examples that ought to be considered Gorenstein but are not covered by our present framework. Gentle algebras, discussed in the next section, are one such, as are algebras which are derived equivalent to Gorenstein algebras via complexes that are tilting but not necessarily fibre-wise tilting. ## 7. Maximal Cohen-Macaulay modules In this section we discuss maximal Cohen-Macaulay modules over Gorenstein algebras. Once again, we begin with commutative rings. Let \((R,\mathfrak{m},k)\) be a commutative local ring. A finitely generated \(R\)-module \(M\) is _maximal Cohen-Macaulay_ if \(\operatorname{depth}_{R}M\geq\dim R\); equivalently, if \[\operatorname{Ext}_{R}^{i}(k,M)=0\qquad\text{for $i<\dim R$}\,.\] When \(M\neq 0\) one always has \(\operatorname{Ext}_{R}^{\dim R}(k,M)\neq 0\), see [29, Theorem (1.1)], so such a module is maximal Cohen-Macaulay if and only if \(\operatorname{depth}_{R}M=\dim R\). Let now \(R\) be a Gorenstein local ring. It is immediate from Grothendieck's local duality theorem [16, Theorem 3.5.8] that \(M\) is maximal Cohen-Macaulay if and only if \[\operatorname{Ext}_{R}^{i}(M,R)=0\qquad\text{for $i\geq 1$}\,.\] Observe that since \(R\) is Gorenstein one has \(\operatorname{inj}\dim R=\dim R\), as explained in 2.1, so for any \(M\in\operatorname{Mod}R\) one has \[\operatorname{Ext}_{R}^{i}(M,R)=0\qquad\text{for $i\geq\dim R+1$}\,.\] Thus a simple dimension shifting argument shows that for any \(M\) in \(\operatorname{mod}R\) the \(R\)-modules \(\Omega_{R}^{i}(M)\) are maximal Cohen-Macaulay for \(i\geq\dim R\). This suggests the following definition. Let \(R\) be a commutative Gorenstein ring. An \(R\)-module \(M\) in \(\operatorname{mod}R\) is _maximal Cohen-Macaulay_ if \(\operatorname{Ext}_{R}^{i}(M,R)=0\) for \(i\geq 1\). Since a finitely generated \(R\)-module is zero if and only if its localisation at any prime (equivalently, maximal) ideal is zero, a finitely generated \(R\)-module \(M\) is maximal Cohen-Macaulay if and only if the \(R_{\mathfrak{p}}\)-module \(M_{\mathfrak{p}}\) is maximal Cohen-Macaulay for all \(\mathfrak{p}\) in \(\operatorname{Spec}R\); equivalently, for all \(\mathfrak{p}\) in \(\operatorname{max}(\operatorname{Spec}R)\). As in the local case one has the following result. **7.3 Corollary**.: _Let \(R\) be a commutative Gorenstein ring and \(M\) a finitely generated \(R\)-module and set \(s\coloneqq\sup\{i\in\mathbb{N}\mid\operatorname{Ext}_{R}^{i}(M,R)\neq 0\}\). The \(R\)-module \(\Omega_{R}^{i}(M)\) is maximal Cohen-Macaulay for \(i\geq s\). _ Goto's theorem 2.6 ensures that \(s\) is finite, so for any \(M\) all high syzygy modules will be maximal Cohen-Macaulay. The noteworthy aspect is that there is no uniform bound on \(s\), since \(\operatorname{inj\,dim}R\) need not be finite; see 2.5. ### Gorenstein algebras Let \(A\) be a Gorenstein \(R\)-algebra. A finitely generated \(A\)-module \(M\) is _maximal Cohen-Macaulay_ if \(\operatorname{Ext}_{A}^{i}(M,A)=0\) for \(i\geq 1\). Observe that this property does not depend on \(R\); see also Corollary 6.6. **7.4 Lemma**.: _Let \(A\) be a Gorenstein \(R\)-algebra and \(M\) a finitely generated \(A\)-module.
If \(M\) is maximal Cohen-Macaulay as an \(A\)-module, then it is maximal Cohen-Macaulay also as an \(R\)-module. The converse holds when \(\omega_{A/R}\) is projective over \(A\) or over \(A^{\operatorname{op}}\), and, in particular, when \(A\) is commutative._ Proof.: When \(M\) is maximal Cohen-Macaulay it follows by a standard dimension shifting argument that \(\operatorname{Ext}_{A}^{i}(M,W)=0\) for \(i\geq 1\) and any \(A\)-module \(W\) of finite projective dimension. Applying this observation to \(W\coloneqq\omega_{A/R}\), and this is legitimate by Theorem 6.7, justifies the equality below: \[\operatorname{Ext}_{R}^{i}(M,R)\cong\operatorname{Ext}_{A}^{i}(M,\omega_{A/R} )=0\quad\text{for $i\geq 1$}\,.\] The isomorphism is by adjunction. Thus \(M\) is maximal Cohen-Macaulay also as an \(R\)-module. The isomorphism also yields that when \(M\) is maximal Cohen-Macaulay as an \(R\)-module, one has \(\operatorname{Ext}_{A}^{i}(M,\omega_{A/R})=0\) for \(i\geq 1\). The two projectivity assumptions on \(\omega_{A/R}\) are equivalent by Lemma 6.11. When the \(A^{\operatorname{op}}\)-module \(\omega_{A/R}\) is projective, then \(A\cong\operatorname{Hom}_{R}(\omega_{A/R},R)\) is a direct summand of \(\omega_{A/R}^{n}\) for a certain integer \(n\geq 1\), which implies \(\operatorname{Ext}_{A}^{i}(M,A)=0\) for \(i\geq 1\), yielding the converse. In the statement above, the converse does not hold for general non-commutative rings: consider, for example, a finite dimensional algebra \(A\) over a field \(k\) such that \(A\) is Gorenstein but not self-injective. In this case not every finite dimensional \(A\)-module is maximal Cohen-Macaulay, but every \(k\)-module is maximal Cohen-Macaulay. This highlights an important feature of the commutative context. Nevertheless many aspects of the theory of maximal Cohen-Macaulay over commutative rings carry over, including Goto's theorem 2.6 and its consequence 2.7. This is recorded in the statement below, which can be proved by a simple reduction to the commutative case; see [46, Theorem 4.6] for details. **7.5 Theorem**.: _When \(A\) is a Gorenstein algebra, the functor \(\operatorname{RHom}_{A}(-,A)\) induces an equivalence of categories_ \[\operatorname{RHom}_{A}(-,A)\colon\mathbf{D}^{\operatorname{b}}(\operatorname {mod}A)^{\operatorname{op}}\stackrel{{\sim}}{{\longrightarrow}} \mathbf{D}^{\operatorname{b}}(\operatorname{mod}A^{\operatorname{op}})\,.\] _In particular, for any finitely generated \(A\)-module \(M\) one has \(\operatorname{Ext}_{A}^{i}(M,A)=0\) for \(i\gg 0\), so the syzygy modules \(\Omega_{A}^{i}(M)\) are maximal Cohen-Macaulay for \(i\gg 0\). _ The collection of maximal Cohen-Macaulay modules over a Gorenstein \(R\)-algebra \(A\) form a Frobenius exact category, with exact structure inherited from \(\operatorname{mod}A\), with projective/injective objects the finitely generated projective \(A\)-modules. We denote by \(\underline{\operatorname{mCM}}(A)\) the corresponding stable category. For any \(A\)-modules \(M\), \(N\), we write \(\underline{\operatorname{Hom}}_{A}(M,N)\) for the quotient of \(\operatorname{Hom}_{A}(M,N)\) modulo the ideal of morphisms which factor through a projective \(A\)-module. Let \(\mathbf{D}_{\operatorname{sg}}(A)\) be the _singularity category_ of \(A\) introduced by Buchweitz [17] and independently by Orlov [58]. This is the quotient of \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\) by the thick subcategory of perfect complexes; see [51, SS6.2]. 
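A small standard example, recorded here only for orientation and not taken from the references: let \(A=k[x]/(x^{2})\), a commutative Gorenstein \(k\)-algebra. Every finitely generated \(A\)-module is maximal Cohen-Macaulay, and the simple module \(k\) has the unbounded minimal free resolution \[\cdots\longrightarrow A\xrightarrow{\ x\ }A\xrightarrow{\ x\ }A\longrightarrow k\longrightarrow 0\,,\] so \(k\) is not perfect and defines a nonzero object of \(\mathbf{D}_{\operatorname{sg}}(A)\); moreover \(\Omega_{A}(k)\cong k\) and \(\underline{\operatorname{End}}_{A}(k)\cong k\), so \(k\) is, up to isomorphism, the unique indecomposable object of \(\underline{\operatorname{mCM}}(A)\).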
The following result was proved by Buchweitz [17, Theorem 4.4.1] for Iwanaga-Gorenstein rings. The version stated below is from [46, Theorem 6.5]. **7.6 Theorem**.: _When \(A\) is a Gorenstein \(R\)-algebra the embedding of the category of maximal Cohen-Macaulay modules into \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\) induces an equivalence of triangulated categories \(\underline{\operatorname{mCM}}(A)\xrightarrow{\sim}\mathbf{D}_{\operatorname {sg}}(A)\). _ As in Buchweitz's work [17] one can also identify \(\underline{\operatorname{mCM}}(A)\) with the homotopy category of acyclic complexes of projective \(A\)-modules; see [46, SS6]. **Duality**.: We are interested in duality properties of \(\underline{\operatorname{mCM}}(A)\). To that end, we record that when \(A\) is a Gorenstein \(R\)-algebra the exact functor \[\omega_{A/R}\otimes_{A}^{\operatorname{L}}-\colon\mathbf{D}(\operatorname{ Mod}A)\xrightarrow{\sim}\mathbf{D}(\operatorname{Mod}A)\] is an equivalence, and induces an equivalence on \(\mathbf{D}^{\operatorname{b}}(\operatorname{mod}A)\); see [46, Theorem 4.5]. Since the \(A\)-module \(\omega_{A/R}\) is perfect on either side, the functor takes perfect complexes to perfect complexes and hence induces an equivalence of categories \[\omega_{A/R}\otimes_{A}^{\operatorname{L}}-\colon\mathbf{D}_{\operatorname{sg }}(A)\xrightarrow{\sim}\mathbf{D}_{\operatorname{sg}}(A)\,.\] See [46, Proposition 6.7]. **7.7**.: Let \(R\) be a commutative noetherian local ring and \(E\) an injective hull of its residue field. We consider the Matlis duality functor \[(-)^{\vee}\coloneqq\operatorname{Hom}_{R}(-,E)\] on the category of \(R\)-modules. A finite \(R\)-algebra \(A\) has an _isolated singularity_ if the ring \(A_{\mathfrak{p}}\) has finite global dimension for each non-maximal prime \(\mathfrak{p}\) in \(\operatorname{Spec}R\). This terminology is motivated by work of Buchweitz [17, Definition 7.7.3] and differs from Auslander's notion of isolated singularities. When a Gorenstein \(R\)-algebra \(A\) has an isolated singularity, for any objects \(M,N\) in \(\underline{\operatorname{mCM}}(A)\), it holds that \(\underline{\operatorname{Hom}}_{A}(M,N)_{\mathfrak{p}}\cong\underline{ \operatorname{Hom}}_{A_{\mathfrak{p}}}(M_{\mathfrak{p}},N_{\mathfrak{p}})=0\) for each non-maximal prime ideal \(\mathfrak{p}\) by [4, Lemma 7.6 c)], and together with Theorem 7.6 it follows that each morphism space in \(\mathbf{D}_{\operatorname{sg}}(A)\) is an \(R\)-module of finite length; see also [17, Subsection (C.4.2)]. **7.8 Theorem**.: _Let \(R\) be a local ring and \(A\) a Gorenstein \(R\)-algebra with an isolated singularity. For \(M,N\) in \(\mathbf{D}_{\operatorname{sg}}(A)\) there is a natural isomorphism_ \[\operatorname{Hom}_{\mathbf{D}_{\operatorname{sg}}(A)}(M,N)^{\vee}\cong \operatorname{Hom}_{\mathbf{D}_{\operatorname{sg}}(A)}(N,\Sigma^{d-1}(\omega _{A/R}\otimes_{A}^{\operatorname{L}}M))\,,\] _where \(d\) is the Krull dimension of \(R\). Said otherwise, the singularity category \(\mathbf{D}_{\operatorname{sg}}(A)\) has Serre duality, with Serre functor \(\Sigma^{d-1}(\omega_{A/R}\otimes_{A}^{\operatorname{L}}-)\). _ The preceding result is due to Buchweitz [17, Theorem 7.7.5], extending results of Auslander [4] and Auslander and Reiten [5]; see also [46, Corollary 9.3]. One noteworthy consequence is the existence of almost-split sequences in \(\mathbf{D}_{\operatorname{sg}}(A)\); this is by the work of Reiten and Van den Bergh [60]. 
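To see what the duality says in the simplest case (a specialisation spelled out here for orientation): take \(R=k\) a field, so that \(d=0\) and the isolated singularity condition is vacuous. Then \((-)^{\vee}\) is \(k\)-linear duality, \(\omega_{A/k}\otimes_{A}^{\operatorname{L}}-\) with \(\omega_{A/k}=\operatorname{Hom}_{k}(A,k)\) is the derived Nakayama functor \(\nu\), and the theorem reads \[\operatorname{Hom}_{\mathbf{D}_{\operatorname{sg}}(A)}(M,N)^{\vee}\cong\operatorname{Hom}_{\mathbf{D}_{\operatorname{sg}}(A)}(N,\Sigma^{-1}(\nu M))\,,\] recovering the classical Auslander-Reiten duality for finite dimensional Iwanaga-Gorenstein algebras.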
Several independent works show that the equivalence \[\Sigma^{d}(\omega_{A/R}\otimes_{A}^{\mathrm{L}}-)\colon\mathbf{D}^{\mathrm{b}}( \operatorname{mod}A)\stackrel{{\sim}}{{\longrightarrow}}\mathbf{D}^ {\mathrm{b}}(\operatorname{mod}A)\] restricts to a Serre functor on a certain full subcategory of \(\mathbf{D}^{\mathrm{b}}(\operatorname{mod}A)\) whose morphism spaces are finite length \(R\)-modules, see [34, Theorem 7.2.14], [45, Theorem 3.7], [49, Lemma 4.1] and [71, Lemma 6.4.1]. The work in [46] started with the question whether the Serre duality as in Theorem 7.8 is a shadow of a more general duality statement, applicable to all Gorenstein \(R\)-algebras and not just those with an isolated singularity. There is indeed such a duality theorem, but to see it one has to widen ones perspective to include also infinitely generated analogues of maximal Cohen-Macaulay modules, namely the Gorenstein projective modules, and develop the results presented above in that generality. All this is explained in [46]. This endeavour was guided by Grothendieck duality theory in commutative algebra and algebraic geometry, that subsumes and generalises classical Serre duality in those contexts. Another inspiration was the work of the second and third authors with Benson and Pevtsova [13, 14] on duality phenomena in the stable module category of finite groups and group schemes. ## 8. Gentle algebras In this section we discuss gentle algebras. These arise frequently in representation theory and were introduced by Assem and Skowronski in their study of iterated tilted algebras of Dynkin type \(\mathbb{A}\)[3]. Let \(k\) be a field. A \(k\)-algebra is called _gentle_ if it is Morita equivalent to an algebra of the form \(kQ/I\) where \(Q\) is a finite quiver and \(I\) is an ideal generated by paths of length two such that the local shape at any vertex is one of the following: (8.1) Zero relations are indicated by dotted lines. For later considerations the vertices are divided into two types, depicted by full and hollow circles. Rigorously formulated, the quiver \(Q\coloneqq(Q_{0},Q_{1},s,t)\) and the ideal \(I\) are subject to the following conditions. 1. For each \(i\in Q_{0}\), there are at most two arrows starting at \(i\). 2. For each \(i\in Q_{0}\), there are at most two arrows ending at \(i\). 3. For each \(\alpha\in Q_{1}\), there is at most one arrow \(\beta\) such that \(s(\beta)=t(\alpha)\) and \(\beta\alpha\in I\), and there is at most one arrow \(\beta^{\prime}\) such that \(s(\beta^{\prime})=t(\alpha)\) and \(\beta^{\prime}\alpha\not\in I\). 4. For each \(\beta\in Q_{1}\), there is at most one arrow \(\alpha\) such that \(t(\alpha)=s(\beta)\) and \(\beta\alpha\in I\), and there is at most one arrow \(\alpha^{\prime}\) with \(t(\alpha^{\prime})=s(\beta)\) and \(\beta\alpha^{\prime}\not\in I\). Note that in \(Q\) we compose paths from right to left. ### Gentle algebras are Koszul Let \(k\) be a field and \(Q\coloneqq(Q_{0},Q_{1},s,t)\) a finite quiver. Let \(Q^{\mathrm{op}}\) be the _opposite quiver_ that is obtained by reversing the arrows, so \[Q^{\mathrm{op}}_{0}\coloneqq Q_{0}\qquad\text{and}\qquad Q^{\mathrm{op}}_{1} \coloneqq\{\alpha^{-}\mid\alpha\in Q_{1}\}\] with \(s(\alpha^{-})\coloneqq t(\alpha)\) and \(t(\alpha^{-})\coloneqq s(\alpha)\). Let \(Q_{2}\) denote the set of paths of length \(2\) and fix a partition \(Q_{2}=P_{+}\sqcup P_{-}\). 
It is not difficult to check that the algebra \(kQ/\langle P_{-}\rangle\) (not necessarily finite dimensional) is gentle if and only if \(kQ^{\mathrm{op}}/\langle P_{-}^{\mathrm{op}}\rangle\) is gentle, where \(P_{-}^{\mathrm{op}}\coloneqq\{\alpha^{-}\mid\alpha\in P_{+}\}\). Now suppose that the algebra \(A\coloneqq kQ/\langle P_{-}\rangle\) is gentle. Then \(A\) is graded by path length and we have a decomposition \(A_{0}=\bigoplus_{i\in Q_{0}}S_{i}\) into simple \(A\)-modules. Each arrow \(\alpha\colon i\to j\) corresponds to an extension \(\xi_{\alpha}\colon 0\to S_{i}\to E_{\alpha}\to S_{j}\to 0\). A theorem of Green and Zacharia [39] shows that the algebra \(A\) is Koszul, and the algebra \(kQ^{\mathrm{op}}/\langle P_{-}^{\mathrm{op}}\rangle\) identifies with the Koszul dual \(A^{!}\cong\mathrm{Ext}_{A}^{*}(A_{0},A_{0})\) via the assignment \(\alpha^{-}\mapsto\xi_{\alpha}\). For example, consider the quiver with a single vertex and two loops \(x,y\), with the partition given by \(P_{-}=\{x^{2},y^{2}\}\) and \(P_{+}=\{xy,yx\}\). Then the algebra \(A\coloneqq k\langle x,y\rangle/(x^{2},y^{2})\) is gentle with Koszul dual \(A^{!}\cong k[x,y]/(xy)\). ### Gluing gentle algebras For the remainder of this section, we fix an algebra \(A\coloneqq kQ/I\) such that \((Q,I)\) satisfies the above conditions (1)-(4). Furthermore, we set \(\mathfrak{m}\coloneqq(c)\subset R\coloneqq k[c]\). The gentle algebra \(A\) shares some properties with the path algebra \(B\coloneqq kQ^{\prime}\) of a quiver defined as follows. To obtain \(Q^{\prime}\), we split each full vertex from diagram (8.1) into a pair of vertices. Formally, the quiver \(Q^{\prime}\coloneqq(Q^{\prime}_{0},Q^{\prime}_{1},s^{\prime},t^{\prime})\) is defined as follows. Let \(Q_{0}^{*}\) denote the subset of all vertices \(i\in Q_{0}\) such that no arrow of \(Q\) starts or ends in \(i\). Set \(Q^{\prime}_{0}\coloneqq Q_{0}^{*}\cup\{s_{\alpha},t_{\alpha}\mid\alpha\in Q_{1}\}/\mathord{\sim}\), where \(\sim\) is the smallest equivalence relation such that \(s_{\beta}\sim t_{\alpha}\) for any \(\alpha,\beta\in Q_{1}\) with \(s(\beta)=t(\alpha)\) and \(\beta\alpha\notin I\). Set \(Q^{\prime}_{1}\coloneqq Q_{1}\), \(s^{\prime}(\alpha)\coloneqq[s_{\alpha}]\), and \(t^{\prime}(\alpha)\coloneqq[t_{\alpha}]\) for any \(\alpha\in Q^{\prime}_{1}\). The ring \(B\) is a central tool in the work by Burban and Drozd [18, 19, 20] to study the representation theory of \(A\). The statements of the next lemma are known in similar settings; see [19, Lemma 2.12] and [20, Theorem 3.11]. **8.2 Lemma**.: _With the notation above, the following statements hold._ 1. _The quiver_ \(Q^{\prime}\) _is a finite union of quivers of equioriented type_ \(\mathbb{A}\) _or_ \(\widetilde{\mathbb{A}}\)_. In different terms, there are numbers_ \(0\leq m\leq n\)_,_ \(\ell_{1},\dots,\ell_{n}\geq 1\) _such that there is an isomorphism of rings_ \(B\cong\prod_{v=1}^{m}T_{\ell_{v}}(k)\times\prod_{v=m+1}^{n}T_{\ell_{v}}(R)\)_, where_ \[T_{\ell_{v}}(k)\coloneqq\{(a_{ij})\in\mathrm{Mat}_{\ell_{v}\times\ell_{v}}(k)\mid a_{ij}=0\text{ for any }1\leq i<j\leq\ell_{v}\},\] \[T_{\ell_{v}}(R)\coloneqq\{(a_{ij})\in\mathrm{Mat}_{\ell_{v}\times\ell_{v}}(R)\mid a_{ij}\in\mathfrak{m}\text{ for any }1\leq i<j\leq\ell_{v}\}\,.\] 2. _Let_ \(J_{A}\) _denote the two-sided ideal of_ \(A\) _generated by the arrows of its quiver. Similarly, let_ \(J_{B}\) _denote the arrow ideal of_ \(B\)_.
There is a canonical monomorphism_ \(\iota\colon A\hookrightarrow B\) _of_ \(k\)_-algebras such that_ \(\iota(J_{A})=J_{B}\)_._ Proof.: (1) Properties (3) and (4) of \((Q,I)\) ensure that there is at most one arrow starting and at most one arrow ending at each vertex in \(Q^{\prime}\). (2) The morphism \(\iota\) in question is given by the unique \(k\)-algebra morphism with \(\alpha\mapsto\alpha\) for \(\alpha\in Q_{1}\), \(e_{i}\mapsto e_{i}\) for \(i\in Q_{0}^{*}\), and \(e_{i}\mapsto\sum_{j\in Q_{0}^{\prime}(i)}e_{j}\) for \(i\in Q_{0}\backslash Q_{0}^{*}\), where \[Q_{0}^{\prime}(i)\coloneqq\{[s_{\beta}]\mid\beta\in Q_{1}\colon s(\beta)=i\}\cup\{[t_{\alpha}]\mid\alpha\in Q_{1}\colon t(\alpha)=i\}\] is a subset of \(Q_{0}^{\prime}\) with only one or two elements. The above construction can be reversed: given a union \(Q^{\prime}\) of equioriented quivers of type \(\mathbb{A}\) or \(\widetilde{\mathbb{A}}\), one may identify certain pairs of vertices adopting the relations of the arrows in \(Q^{\prime}\) to glue a gentle quiver \((Q,I)\). In fact, any gentle quiver can be obtained in this way. #### Completions of gentle algebras The dimension of the gentle algebra \(A\) is related to the following notion. For any arrow \(\alpha\in Q_{1}\), there is a unique maximal repetition-free path \(p_{\alpha}=\alpha_{n}\dots\alpha_{1}\notin I\) beginning with \(\alpha_{1}\coloneqq\alpha\), by condition (3). In other words, all \(n\) arrows of \(p_{\alpha}\) are different, \(p_{\alpha}\notin I\) and for any arrow \(\beta\in Q_{1}\) with \(s(\beta)=t(\alpha_{n})\) it follows that \(\beta=\alpha_{1}\) or \(\beta\alpha_{n}\in I\). We call \(p_{\alpha}\) an _admissible cycle_ if it is a cyclic path with \(\alpha_{1}\alpha_{n}\notin I\). Next, we consider certain completions of an infinite dimensional gentle algebra. **8.3 Lemma**.: _Assume that the quiver \((Q,I)\) has an admissible cycle. The following statements hold._ 1. _There is a canonical monomorphism_ \(\iota\colon A\hookrightarrow B\) _of finite_ \(R\)_-algebras._ 2. _For any_ \(\mathfrak{p}\in\operatorname{Spec}R\backslash\{\mathfrak{m}\}\) _there is an isomorphism_ \(\widehat{A}_{\mathfrak{p}}\cong\prod_{v=1}^{n}\operatorname{Mat}_{\ell_{v}\times\ell_{v}}(\widehat{R}_{\mathfrak{p}})\) _of finite_ \(\widehat{R}_{\mathfrak{p}}\)_-algebras, where_ \(n,\ell_{1},\dots,\ell_{n}\geq 1\)_._ 3. _The ring_ \(\widehat{A}_{\mathfrak{m}}\) _is a finite_ \(\widehat{R}_{\mathfrak{m}}\)_-algebra which is isomorphic to the completion of the path algebra_ \(A\) _with respect to its arrow ideal_ \(J_{A}\) _and the_ \(Q(\widehat{R}_{\mathfrak{m}})\)_-algebra_ \(\widehat{A}_{\mathfrak{m}}\otimes_{\widehat{R}_{\mathfrak{m}}}Q(\widehat{R}_{\mathfrak{m}})\) _is semisimple._ Proof.: (1) Identifying \(c\) in \(R=k[c]\) with the sum of all admissible cycles in \((Q,I)\) yields an \(R\)-algebra map \(\phi\colon R\to A\). The composition \(\iota\phi\) with the embedding \(\iota\) from Lemma 8.2 (2) maps \(c\) to the sum of all repetition-free cyclic paths in \(Q^{\prime}\) and endows the overring \(B\) with the structure of a finite \(R\)-algebra. Viewing \(\iota\colon A\hookrightarrow B\) as monomorphism of \(R\)-modules, it follows that \(A\) is a finite \(R\)-algebra as well. (2) Let \(\mathfrak{p}\in\operatorname{Spec}R\backslash\{\mathfrak{m}\}\). It holds that \(cB\subseteq\iota(A)\) and \(c\,T_{\ell_{v}}(k)=0\) for any ring factor \(T_{\ell_{v}}(k)\) of \(B\) as well as \(c\notin\mathfrak{p}\).
These observations imply that \(\iota_{\mathfrak{p}}(A_{\mathfrak{p}})=B_{\mathfrak{p}}\cong\prod_{v=1}^{n}(T_{\ell_{v}}(R))_{\mathfrak{p}}\cong\prod_{v=1}^{n}\operatorname{Mat}_{\ell_{v}\times\ell_{v}}(R_{\mathfrak{p}})\) with the notation of Lemma 8.2 (1). Taking completions yields statement (2). (3) The map \(\iota\) gives rise to a monomorphism of algebras \(\widehat{\iota}_{\mathfrak{m}}\colon\widehat{A}_{\mathfrak{m}}\hookrightarrow\widehat{B}_{\mathfrak{m}}\) over the ring \(\widehat{R}_{\mathfrak{m}}\cong k[\![c]\!]\) of formal power series. Since \(\widehat{\iota}_{\mathfrak{m}}\) has finite dimensional cokernel, \(\widehat{A}_{\mathfrak{m}}\otimes_{k[\![c]\!]}k(\!(c)\!)\cong\widehat{B}_{\mathfrak{m}}\otimes_{k[\![c]\!]}k(\!(c)\!)\) is a semisimple \(k(\!(c)\!)\)-algebra. Since \(J_{A}^{n}\subseteq A\mathfrak{m}\subseteq J_{A}\) for certain \(n\geq 1\), it follows that \(\widehat{A}_{\mathfrak{m}}\) is \(J_{A}\)-adically complete. The last lemma shows that an infinite dimensional gentle algebra \(A\) has only one interesting completion at a prime ideal of its central subring \(R\). A simpler analogue of the last lemma holds for finite dimensional gentle algebras, in which the ring \(R\) has to be replaced by the field \(k\) and \(\widehat{A}_{\mathfrak{m}}\) can be identified with the algebra \(A\) itself. ### Finite projective gentle algebras At this point, we can clarify when the gentle algebra \(A\) satisfies property 6.1 (1) of a Gorenstein \(R\)-algebra over the ring \(R=k[c]\). **8.4 Proposition**.: _With the notation above, the following are equivalent._ 1. \(A\) _admits the structure of a finite projective_ \(R\)_-algebra;_ 2. \(B\) _admits the structure of a finite projective_ \(R\)_-algebra;_ 3. _Each arrow of_ \(Q^{\prime}\) _lies on a cyclic path;_ 4. _Each arrow of_ \(Q\) _lies on an admissible cycle._ Proof.: (1)\(\Rightarrow\)(2) Because of (1), \(Q^{*}_{0}\) is empty and \(A\) has no ring factor isomorphic to \(k\). Together with Lemma 8.2 (2) assumption (1) implies that \(B\) can be viewed as a finite \(R\)-module such that its arrow ideal \(J\coloneqq J_{B}\) is torsion-free as an \(R\)-module. Since \(J\operatorname{tor}_{R}(B)+\operatorname{tor}_{R}(B)\,J\subseteq\operatorname{tor}_{R}(J)=0\) and \(B\) has no simple ring factor, \(B\) must be also torsion-free, and thus projective over \(R\). Note that a triangular matrix ring of the form \(T_{\ell_{v}}(k)\) cannot be endowed with the structure of a projective \(R\)-module. Therefore, each indecomposable ring factor of \(B\) is isomorphic to a matrix ring of the form \(T_{\ell_{v}}(R)\), which is equivalent to property (2). (2)\(\Rightarrow\)(3) follows from the just mentioned characterisation of (2). (3)\(\Leftrightarrow\)(4) holds by the definition of the quiver \(Q^{\prime}\). (3)\(\Rightarrow\)(1) Because of (3), every ring factor of \(B\) is given by a matrix ring of the form \(T_{\ell_{v}}(R)\). We may repeat the proof of Lemma 8.3 (1) adding that \(\iota\phi\) endows the ring \(B\) with the structure of a finite projective \(R\)-algebra. Thus its \(R\)-subalgebra \(A\) is finite and projective as \(R\)-module as well. ### Gentle algebras are Iwanaga-Gorenstein This was proved by Geiss and Reiten [33] when the algebra is also finite dimensional; the general case is handled in [51, Proposition 6.2.9]. In fact, the injective dimension can be computed explicitly. Koszul duality suggests a concept analogous to a maximal repetition-free path without relations.
Namely, for any arrow \(\alpha\in Q_{1}\) condition (3) implies that there is a unique path \(d_{\alpha}=\alpha_{n}\dots\alpha_{2}\,\alpha_{1}\) of pairwise distinct arrows beginning with \(\alpha_{1}\coloneqq\alpha\) such that \(s(\alpha_{i+1})=t(\alpha_{i})\) and \(\alpha_{i+1}\alpha_{i}\in I\) for any \(1\leq i<n\), and for any arrow \(\beta\in Q_{1}\) with \(s(\beta)=t(\alpha_{n})\) it follows that \(\beta=\alpha_{1}\) or \(\beta\alpha_{n}\notin I\). We call \(d_{\alpha}\) a _differential cycle_ if it is a cyclic path with \(\alpha_{1}\alpha_{n}\in I\), and a _differential walk_ otherwise. We define an integer \(w(A)\) for the gentle algebra \(A=kQ/I\) as follows. Set \(w(A)\) to be equal to the maximum of all lengths of all differential walks in \((Q,I)\) if there is at least one differential walk, \(w(A)\coloneqq 0\) if each indecomposable ring factor of \(A\) is isomorphic to \(k\) or to the path algebra of a finite quiver of equioriented type \(\widetilde{\mathbb{A}}\) modulo the square of its arrow ideal, and \(w(A)\coloneqq 1\) in the remaining cases. **8.6 Proposition**.: _The gentle algebra \(A\) is Iwanaga-Gorenstein. Its injective dimension is given by \(w(A)\)._ **Remark**.: The only obstacle to the gentle algebra \(A\) being a Gorenstein \(R\)-algebra is that \(A\) may not be projective as an \(R\)-module. Crawley-Boevey pointed out that this happens for the path algebra of the quiver \((Q^{*},I^{*})\) in Example 4.1. This again suggests that one should relax condition (1) in 6.1 to allow \(A\) to have finite projective dimension over \(R\); see the discussion on the work of Buchweitz at the end of Section 6. In the remainder of this paper we will focus on arrow ideal completions of infinite dimensional gentle Gorenstein algebras, which will be called _gentle Gorenstein algebras_ for brevity. ## 9. Serre duality for gentle Gorenstein algebras From now on, let \((Q,I)\) be a gentle quiver such that each arrow lies on an admissible cycle. Equivalently, \(Q\) is a finite quiver, the ideal \(I\) is generated by paths of length two and each vertex is either _2-regular_ with gentle relations or a _transition vertex_. In contrast to the previous section, we denote by \(R\) the ring \(k\llbracket c\rrbracket\) of formal power series and by \(A\) the completion of the infinite dimensional path algebra \(kQ/I\) with respect to its arrow ideal. According to Lemma 8.3 (3) and Proposition 8.4, the ring \(A\) is a gentle Gorenstein \(R\)-algebra. The purpose of this section is to give an explicit description of the dualising bimodule \(\omega_{A/R}\coloneqq\operatorname{Hom}_{R}(A,R)\). Its computation is similar in spirit to work of Ladkani [52] and has been carried out in [35] for the case that the quiver \(Q\) has 2-regular vertices only. The description of the bimodule \(\omega_{A/R}\) requires some notation, which will not be relevant in the next section. The main conclusion of this description is summarised in Corollary 9.8. For any arrow \(\alpha\in Q_{1}\) there is a unique arrow \(\sigma(\alpha)\in Q_{1}\) such that \(\sigma(\alpha)\alpha\notin I\). Moreover, \(\alpha\) is the beginning of a unique admissible cycle \(c_{\alpha}\) of a certain length \(\ell_{\alpha}\geq 1\).
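For orientation, consider the arrow ideal completion of the two-loop algebra \(k\langle x,y\rangle/(x^{2},y^{2})\) from Section 8, used here purely as a toy illustration: there is a single vertex, which is \(2\)-regular, and \[\sigma(x)=y,\qquad\sigma(y)=x,\qquad c_{x}=yx,\qquad c_{y}=xy,\qquad\ell_{x}=\ell_{y}=2\,.\]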
We may view the ring \(A\) as a Gorenstein \(R\)-algebra via the structure map \[R=k\llbracket c\rrbracket\longrightarrow A,\quad c\longmapsto\sum_{\alpha\in Q_{1}}c_{\alpha} \tag{9.2}\] For any integer \(1\leq n<\ell_{\alpha}\) we denote by \(\alpha_{n}\) the unique path of length \(n\) which begins with the arrow \(\alpha\) and is not contained in the ideal \(I\), and by \(c_{\alpha}\partial^{n}\) the unique path such that \((c_{\alpha}\partial^{n})\alpha_{n}=c_{\alpha}\). The path \(c_{\alpha}\partial^{n}\) can also be obtained by deleting the first \(n\) arrows in the path \(c_{\alpha}\). When using this notation, we assume implicitly that \(\alpha\neq c_{\alpha}\). Let \(Q_{0}^{r}\) and \(Q_{0}^{t}\) denote the set of 2-regular vertices and the set of transition vertices, respectively. Let \(Q_{1}^{t}\) be the set of all arrows starting in transition vertices. To fix an \(R\)-linear basis of \(A\), we choose a map \(\varepsilon\colon Q_{1}\to\{-1,1\}\), \(\alpha\mapsto\varepsilon_{\alpha}\coloneqq\varepsilon(\alpha)\), such that \(\varepsilon_{\alpha}\neq\varepsilon_{\beta}\) for any distinct arrows \(\alpha,\beta\) with \(s(\alpha)=s(\beta)\), and \(\varepsilon_{\gamma}=-1\) for any arrow \(\gamma\in Q_{1}^{t}\). Finally, let \(Q_{1}^{+}\) denote the set of all arrows \(\alpha\) with \(\varepsilon_{\alpha}=1\). **9.3 Example**.: Consider a quiver glued from cycles of lengths two and nine. For this quiver, we may choose \(\varepsilon_{\alpha(i)}\coloneqq(-1)^{i}\) for \(i\neq 8\) and \(\varepsilon_{\alpha(8)}\coloneqq-1\). It is not hard to verify the following. **9.4 Lemma**.: _As an \(R\)-module \(A\) has an \(R\)-linear basis_ \[\mathcal{B}\coloneqq\{e_{i}\mid i\in Q_{0}\}\cup\{\alpha_{n}\mid\alpha\in Q_{1},\ 1\leq n<\ell_{\alpha}\}\cup\{c_{\beta}\mid\beta\in Q_{1}^{+}\}\,.\qed\] For any path \(p\in\mathcal{B}\) let \(p^{*}\colon A\to R\) denote the unique \(R\)-linear map such that for any \(q\in\mathcal{B}\) it holds that \(p^{*}(q)=\delta_{pq}\), which denotes the Kronecker delta. Each vertex \(i\in Q_{0}\) induces an indecomposable projective \(A\)-module \(P_{i}\coloneqq Ae_{i}\). The intersection of its maximal left submodules is denoted \(\operatorname{rad}(P_{i})\). The left \(A\)-module structure of the dualising bimodule is almost regular in the following sense. **9.5 Proposition**.: _There is an isomorphism of left \(A\)-modules_ \[\vartheta\colon A^{\circ}\coloneqq\big{(}\bigoplus_{i\in Q_{0}^{r}}P_{i}\big{)}\oplus\big{(}\bigoplus_{j\in Q_{0}^{t}}\operatorname{rad}(P_{j})\big{)}\stackrel{{\sim}}{{\longrightarrow}}\omega_{A/R}=\operatorname{Hom}_{R}(A,R)\,.\] _More precisely, \(\vartheta\) is the unique left \(A\)-linear map such that_ \[e_{i}\longmapsto c_{\beta(i)}^{*}\text{ for any }i\in Q_{0}^{r}\,,\quad\text{and}\quad\gamma(j)\longmapsto-(c_{\gamma(j)}\partial)^{*}\text{ for any }j\in Q_{0}^{t}\] _where \(\beta(i)\) denotes the unique arrow in \(Q_{1}^{+}\) starting in \(i\) and \(\gamma(j)\) the unique arrow starting in \(j\)._ Proof.: First, we compute \(\vartheta\) on an \(R\)-linear basis of \(P_{i}\) for \(i\in Q_{0}^{r}\). Set \(c_{i}\coloneqq c_{\beta(i)}\). For any \(p\in\mathcal{B}\) it holds that \(\vartheta(pe_{i})=p\cdot\vartheta(e_{i})=p\cdot c_{i}^{*}=c_{i}^{*}(-p)=\sum_{q\in\mathcal{B}}c_{i}^{*}(qp)\,q^{*}\). It can be shown that \(c_{i}^{*}(qp)\neq 0\) if and only if \(qp=c_{i}^{2}\) or \(c_{\alpha}\) for an arrow \(\alpha\) starting at \(i\).
In this case, it holds that \(c_{i}^{*}(qp)=c\) or \(\varepsilon_{\alpha}\), respectively. It follows that \(\vartheta(e_{i})=e_{i}\cdot\vartheta(e_{i})\), \(\vartheta(c_{i})=e_{i}^{*}+c\,c_{i}^{*}\) for \(i\in Q_{0}^{r}\), and \(\vartheta(\alpha_{n})=\varepsilon_{\alpha}(c_{\alpha}\partial^{n})^{*}\) for any \(\alpha\in Q_{1}\backslash Q_{1}^{t}\) and \(1\leq n<\ell_{\alpha}\). Next, we describe \(\vartheta\) on an \(R\)-linear basis of \(\operatorname{rad}(P_{j})\) for \(j\in Q_{0}^{t}\). Set \(\gamma\coloneqq\gamma(j)\). For any \(p,q\in\mathcal{B}\) it holds that \((c_{\gamma}\partial)^{*}(qp)\neq 0\) if and only if \(qp=c_{\gamma}\partial\), which may occur only if \(p=e_{t(\gamma)}\) or \(\sigma(\gamma)_{n}\) with \(1\leq n<\ell_{\gamma}\). By similar arguments as in the previous case, it follows that \(e_{t(\gamma)}\cdot\vartheta(\gamma)=\vartheta(\gamma)\), \(\vartheta(\gamma_{n})=\sigma(\gamma)_{n-1}\cdot\vartheta(\gamma)=-(c_{\gamma}\partial^{n})^{*}\) for \(1<n<\ell_{\gamma}\) and \(\vartheta(c_{\gamma})=c_{\gamma}\partial\cdot\vartheta(\gamma)=-e_{j}^{*}\). This shows also that \(\vartheta\) is well-defined. Let \(\psi\colon\omega_{A/R}\to A^{\circ}\) be the unique \(R\)-linear map such that \(e_{i}^{*}\mapsto c_{\beta(i)}-ce_{i}\) for \(i\in Q_{0}^{r}\), \(e_{j}^{*}\mapsto-c_{\gamma(j)}\) for \(j\in Q_{0}^{t}\), \(c_{\beta}^{*}\mapsto e_{s(\beta)}\) for \(\beta\in Q_{1}^{+}\) and \(\alpha_{n}^{*}\mapsto\varepsilon_{\sigma^{n}(\alpha)}c_{\alpha}\partial^{n}\) for \(\alpha\in Q_{1}\) and \(1\leq n<\ell_{\alpha}\). It can be checked that \(\psi\) is the inverse of the map \(\vartheta\). **9.6**.: To describe the right \(A\)-module structure of \(\omega_{A/R}\) we need a few more notions. Let \(\nu\coloneqq\widehat{\nu}_{\varepsilon}\colon A\to A\) be the completion of the unique \(k\)-algebra morphism \(\nu_{\varepsilon}\colon kQ/I\to kQ/I\) such that \(e_{i}\mapsto e_{i}\) for any \(i\in Q_{0}\) and \(\alpha\mapsto\varepsilon_{\sigma(\alpha)}\varepsilon_{\alpha}\alpha\) for any \(\alpha\in Q_{1}\). In particular, each path \(p\) in \((Q,I)\) coincides with \(\nu(p)\) up to sign. Let \(A_{\nu}\) be the \(A\)-bimodule with regular left \(A\)-module structure and right \(A\)-module structure twisted by \(\nu\), that is, we set \(x\cdot_{\nu}a\coloneqq x\,\nu(a)\) for any \(x\in A_{\nu}\) and \(a\in A\). Let \(A_{\nu}^{\circ}\) be the subbimodule of \(A_{\nu}\) generated by the idempotents \(e_{i}\) with \(i\in Q_{0}^{r}\) and the arrows \(\gamma\in Q_{1}^{t}\). This twisted bimodule is the dualising bimodule. **9.7 Theorem**.: _The left \(A\)-linear isomorphism \(\vartheta\colon A_{\nu}^{\circ}\stackrel{{\sim}}{{\longrightarrow}}\omega_{A/R}\) is \(A\)-bilinear._ Proof.: For any vertex \(i\in Q_{0}^{r}\) we set \(c_{i}\coloneqq c_{\beta(i)}\) where \(\beta(i)\) is the arrow in \(Q_{1}^{+}\) starting at \(i\). Fix \(i\in Q_{0}^{r}\) and \(p\in\mathcal{B}\). Set \(h\coloneqq s(p)\). * If \(t(p)\neq i\), it holds that \(\vartheta(e_{i})\,p=0=\vartheta(e_{i}\cdot_{\nu}p)\). * Assume that \(t(p)=i\). The type of the vertex \(h\) leads to the following cases. * Assume that \(h\in Q_{0}^{r}\). For any \(q\in\mathcal{B}\) it can be verified that \(c_{h}^{*}(q\,\nu(p))=c_{i}^{*}(pq)\). Thus, \(\vartheta(e_{i}\cdot_{\nu}p)=\vartheta(\nu(p)\,e_{h})=c_{h}^{*}(-\nu(p))=\sum_{q\in\mathcal{B}}c_{h}^{*}(q\,\nu(p))\,q^{*}=c_{i}^{*}(p_{-})=\vartheta(e_{i})\,p\). * Otherwise, \(h\notin Q_{0}^{r}\). Then \(p=\alpha_{n}\) for an arrow \(\alpha\in Q_{1}^{t}\) and \(1\leq n<\ell_{\alpha}\).
The equality \(c_{i}^{*}(p\,(c_{\alpha}\partial^{n}))=\varepsilon_{\sigma^{n}(\alpha)}\) implies \(\vartheta(e_{i}\cdot_{\nu}p)=-\varepsilon_{\sigma^{n}(\alpha)}\,\vartheta(\alpha_{n})=\varepsilon_{\sigma^{n}(\alpha)}\,(c_{\alpha}\partial^{n})^{*}=\sum_{q\in\mathcal{B}}c_{i}^{*}(p\,q)\,q^{*}=c_{i}^{*}(p_{-})=\vartheta(e_{i})p\). Let \(\gamma\in Q_{1}^{t}\). Set \(j\coloneqq s(\gamma)\) and \(\varepsilon\coloneqq\varepsilon_{\sigma(\gamma)}\). It holds that \(\vartheta(\gamma\cdot_{\nu}p)=0=\vartheta(\gamma)\cdot p\) for any \(p\in\mathcal{B}\) with \(t(p)\neq j\) and \(\vartheta(\gamma\cdot_{\nu}e_{j})=\vartheta(\gamma)=\vartheta(\gamma)\,e_{j}\). Furthermore, it can be computed that \(\vartheta(\gamma\cdot_{\nu}c_{\gamma}\partial^{n})=-\sigma(\gamma)_{n-1}=\vartheta(\gamma)\cdot c_{\gamma}\partial^{n}\) for any \(1<n<\ell_{\gamma}\) and \(\vartheta(\gamma\cdot_{\nu}c_{\gamma}\partial)=-e_{t(\gamma)}^{*}-\delta_{\varepsilon,+}\,c\,c_{\sigma(\gamma)}^{*}=\vartheta(\gamma)\cdot c_{\gamma}\partial\). As \(\vartheta\) is \(R\)-linear the claim follows. This theorem yields an explicit description of the relative Serre functor of the category \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\) mentioned in 7.9. We will not need the precise formulation of Theorem 9.7 in the following, but only an approximate version. **9.8 Corollary**.: _For any complex \(P\) of projective \(A\)-modules the complex \(\omega_{A/R}\otimes_{A}P\) is given by replacing each indecomposable module \(P_{j}\) with \(j\in Q_{0}^{t}\) in \(P\) with its radical and applying a certain sign change to each path in each differential. _ In the applications below, the sign change will not affect the isomorphism class. ## 10. Singularity categories of gentle Gorenstein algebras As in the previous section, we study the arrow ideal completion \(A\) of the path algebra of a gentle quiver such that each of its arrows lies on an admissible cycle, and view \(A\) as an \(R\)-algebra over the ring \(R=k\llbracket c\rrbracket\) via the structure map (9.2). In this section, we give a description of the singularity category \(\mathbf{D}_{\mathrm{sg}}(A)\) in Theorem 10.13, which is an analogue of a result by Kalck for finite dimensional gentle algebras [47]. The resulting description of \(\mathbf{D}_{\mathrm{sg}}(A)\) also follows from a classification of indecomposable objects in the equivalent homotopy category of acyclic projective complexes by work of Bennett-Tennenhaus [10] or of those in the bigger category \(\mathbf{D}^{\mathrm{b}}(\mathrm{mod}\,A)\) using techniques by Burban and Drozd developed in [18] and [19]. Our approach is based on the representation theory of lattices over orders. A key point is that Corollary 9.8 allows one to compute the Auslander-Reiten translations of non-projective \(A\)-lattices. After determining the whole Auslander-Reiten quiver of \(A\)-lattices we will translate classification results along a chain of categories relating \(A\)-lattices, maximal Cohen-Macaulay \(A\)-modules and the singularity category \(\mathbf{D}_{\mathrm{sg}}(A)\). This method might be useful for other Gorenstein algebras with a manageable, for instance, representation-tame, category of lattices. ### The Auslander-Reiten theory of lattices Let \(\mathrm{lat}(A)\) denote the full subcategory of finitely generated \(A\)-modules which are free over \(R\), or, equivalently, _maximal Cohen-Macaulay over \(R\)_ in the sense of 7.2. In different terms, these are precisely the _lattices over the \(R\)-algebra \(A\)_. We will use the latter terminology in order to avoid confusion with the later notion of _maximal Cohen-Macaulay \(A\)-modules_.
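To make the definition concrete, here is a toy illustration in the completed two-loop algebra \(A\) of Section 8: the left ideal \(Ax\) is an \(A\)-lattice, being free over \(R=k\llbracket c\rrbracket\) with basis \(\{x,yx\}\), since \[c\cdot x=(xy+yx)x=xyx\qquad\text{and}\qquad c\cdot yx=(xy+yx)yx=yxyx\,;\] by contrast, the simple module \(A/\operatorname{rad}A\cong k\) is annihilated by \(c\), hence torsion over \(R\) and not a lattice.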
**10.1**.: According to [4, Proposition I.7.3], there is an exact duality \[(-)^{*}\coloneqq\mathrm{Hom}_{R}(-,R)\colon\mathrm{lat}(A^{\mathrm{op}})\stackrel{{\sim}}{{\longrightarrow}}\mathrm{lat}(A)\,.\] Moreover, there is an isomorphism of functors \[\operatorname{Hom}_{A}(-,A)^{*}\cong\omega_{A/R}\otimes_{A}-\colon\operatorname{proj}(A)\xrightarrow{\sim}\operatorname{inj}\operatorname{lat}(A) \tag{10.2}\] which yields an equivalence between the category of finitely generated projective \(A\)-modules and the full subcategory of injective objects in the exact category of \(A\)-lattices. Lemma 8.3 (3) states that \(A\) is a finite free \(R\)-algebra such that the \(Q(R)\)-algebra \(A\otimes_{R}Q(R)\) is semisimple. This has the following consequences. **10.3 Lemma**.: _In the setup above, the following statements hold._ 1. _An_ \(A\)_-module_ \(L\) _is an_ \(A\)_-lattice if and only if there is an_ \(A\)_-linear embedding of_ \(L\) _into a finitely generated projective_ \(A\)_-module._ 2. _For any finitely generated_ \(A\)_-modules_ \(M\) _and_ \(N\) _and any integer_ \(i>0\) _the_ \(R\)_-module_ \(\operatorname{Ext}^{i}_{A}(M,N)\) _is a torsion module._ The first statement is known by [21, Exercise 23.1], the second follows from the proof of [21, Proposition 25.5]. We give the arguments for the sake of completeness. Proof.: In the following, \(Q(R)\) is abbreviated to \(Q\). 1. Assume that \(L\) is an \(A\)-lattice. Since \(A\otimes_{R}Q\) is self-injective, \(L\otimes_{R}Q\) embeds into \((A\otimes_{R}Q)^{n}\) for certain \(n\geq 0\). Let \(\iota\) denote the \(A\)-linear composition \[L\xhookrightarrow{\eta_{L}}L\otimes_{R}Q\hookrightarrow(A\otimes_{R}Q)^{n}\xrightarrow{\sim}A^{n}\otimes_{R}Q\,.\] Let \(x_{1},x_{2},\ldots,x_{m}\) be generators of \(L\). For any \(1\leq i\leq m\) we may write \(\iota(x_{i})\) as \(\sum_{j=1}^{n}\sum_{k=1}^{\ell_{j}}a_{ijk}\otimes_{R}\frac{r_{ijk}}{s_{ijk}}\). Set \(s\) to be the product of all denominators \(s_{ijk}\). Since \(\iota(Ls)\subseteq\operatorname{Im}\eta_{A^{n}}\), there is an \(A\)-linear embedding of \(L\cong Ls\) into \(A^{n}\). Vice versa, the category \(\operatorname{lat}(A)\) contains \(\operatorname{proj}(A)\) and is closed under submodules, because \(A\) is finite and projective over the Dedekind domain \(R\). 2. In the notation above, semisimplicity of \(A\otimes_{R}Q\) implies that \[\operatorname{Ext}^{i}_{A}(M,N)\otimes_{R}Q\cong\operatorname{Ext}^{i}_{A\otimes_{R}Q}(M\otimes_{R}Q,N\otimes_{R}Q)=0\,.\qed\] More importantly, the category \(\operatorname{lat}(A)\) admits almost-split sequences by results of Auslander [4]. The next statement allows one to compute these using the dualising bimodule \(\omega_{A/R}\) of the algebra \(A\). **10.4 Proposition**.: _Let \(L\) be a non-projective \(A\)-lattice. Choose a projective cover \(\pi\colon P_{1}\twoheadrightarrow L\) and an \(A\)-linear monomorphism \(\iota\colon L\hookrightarrow P_{0}\) into a finitely generated projective \(A\)-module.
Then there is an isomorphism \(\tau(L)\cong\operatorname{Ker}(\omega_{A/R}\otimes_{A}\partial_{1})\) of \(A\)-lattices, where \(\partial_{1}\) denotes the composition \(\iota\pi\colon P_{1}\longrightarrow P_{0}\)._ Proof.: Applying \((-,A)\coloneqq\operatorname{Hom}_{A}(-,A)\) to a minimal projective presentation of \(L\) yields an exact sequence of \(A^{\operatorname{op}}\)-modules with \(\operatorname{Tr}(L)\coloneqq\operatorname{Coker}\left(\partial_{2},A\right)\) and \(\Omega(\operatorname{Tr}(L))\cong\operatorname{Coker}\left(\pi,A\right)\). According to [4, Proposition I.8.8] the Auslander-Reiten translation of \(L\) is given by \[\tau(L)\coloneqq(\Omega(\operatorname{Tr}(L)))^{*}=\operatorname{Ker}((\pi,A )^{*})\,,\qquad\text{where }(-)^{*}\coloneqq\operatorname{Hom}_{R}(-,R)\,.\] Since \(\Omega(\operatorname{Tr}(L))\) embeds into an \(A^{\operatorname{op}}\)-lattice, it is an \(A^{\operatorname{op}}\)-lattice as well, and thus \((\pi,A)^{*}\) is surjective. Lemma 10.3 ensures the existence of an embedding \(\iota\colon L\hookrightarrow P_{0}\) and implies that \(\operatorname{Coker}(\iota,A)=\operatorname{Ext}^{1}_{A}(\operatorname{Coker} \iota,A)\) is torsion over \(R\). Therefore \((\operatorname{Coker}(\iota,A))^{*}\) is zero, and thus \((\iota,A)^{*}\) is injective. It follows that there is a commutative diagram with exact rows where \(\eta_{P_{1}}\) and \(\eta_{P_{0}}\) are specialisations of the isomorphism of functors (10.2). The Snake Lemma implies bijectivity of the induced \(A\)-linear map \(\phi\). ### From vertices and arrows to lattices While the statements of the previous subsection extend to a more general context, the next results concern the specific combinatorics of lattices over the gentle Gorenstein \(R\)-algebra \(A\). By (10.2), each vertex \(j\in Q_{0}\) gives rise to an indecomposable projective \(A\)-module \(P_{j}=Ae_{j}\) as well as an indecomposable \(A\)-lattice \[I_{j}\coloneqq\omega_{A/R}\otimes_{A}P_{j}\cong\begin{cases}\operatorname{rad} (P_{j})&\text{ if $j$ is a transition vertex},\\ P_{j}&\text{ if $j$ is $2$-regular},\end{cases}\] where the last isomorphism is a special case of Corollary 9.8. The arrows of \(Q\) give rise to lattices with modest properties as well. According to previous notation, for an \(A\)-lattice \(L\) we denote by \(\underline{\operatorname{End}}_{A}(L)\) the quotient of \(A\)-linear endomorphisms of \(L\) modulo the ideal of endomorphisms which factor through a projective \(A\)-module. **10.5 Lemma**.: _For any arrow \(\alpha\in Q_{1}\) the following statements hold._ 1. _The left ideal_ \(L_{\alpha}\coloneqq A\alpha\) _is an indecomposable_ \(A\)_-lattice._ 2. _For any arrow_ \(\beta\in Q_{1}\) _any non-isomorphism_ \(f\colon L_{\beta}\to L_{\alpha}\) _factors through the projective cover_ \(\pi\colon P_{t(\alpha)}\to L_{\alpha}\)_,_ \(p\mapsto p\alpha\)_._ 3. _It holds that_ \(\operatorname{End}_{A}(L_{\alpha})\cong R\) _and_ \(\underline{\operatorname{End}}_{A}(L_{\alpha})\cong k\)_._ 4. _The lattice_ \(L_{\alpha}\) _is projective if and only if_ \(t(\alpha)\) _is a transition vertex._ Proof.: The claims are consequences of the following observations. (1) The category \(\operatorname{lat}(A)\) is closed under submodules. (2) The map \(f\) is given by the right multiplication with \(f(\beta)\in e_{t(\beta)}A\alpha\). (3) In case \(\alpha=\beta\), it holds that \(f(\alpha)\in k[\![c_{\alpha}]\!]\). (4) The epimorphism \(\pi\) splits if and only if \(\operatorname{Ker}\pi=0\). 
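Continuing the toy illustration of the completed two-loop algebra: the lattice \(L_{x}=Ax\) is indecomposable and non-projective, since the unique vertex is \(2\)-regular rather than a transition vertex, and its endomorphisms are given by right multiplication with power series in \(c_{x}=yx\), so that \[\operatorname{End}_{A}(L_{x})\cong k\llbracket yx\rrbracket\cong R\qquad\text{and}\qquad\underline{\operatorname{End}}_{A}(L_{x})\cong k\,,\] in accordance with parts (1), (3) and (4) of the lemma.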
Next, we compute the Auslander-Reiten translation \(\tau\) on all non-projective arrow ideals. **10.6 Lemma**.: _Let \(\alpha\in Q_{1}\) such that \(t(\alpha)\) is not a transition vertex. Let \(\varrho(\alpha)\) denote the unique arrow in \((Q,I)\) with \(s(\varrho(\alpha))=t(\alpha)\) and \(\varrho(\alpha)\alpha\in I\). Then the sequence_ \[0\longrightarrow L_{\varrho(\alpha)}\longrightarrow P_{t(\alpha)}\xrightarrow{\ \cdot\alpha\ }L_{\alpha}\longrightarrow 0 \tag{10.7}\] _is an almost-split sequence in the category of \(A\)-lattices. So \(\tau(L_{\alpha})\cong L_{\varrho(\alpha)}\cong\Omega(L_{\alpha})\)._ Proof.: Since \(t(\alpha)\) is \(2\)-regular, the arrow \(\beta\coloneqq\varrho(\alpha)\) is well-defined and \(L_{\alpha}\) is a non-projective \(A\)-lattice by Lemma 10.5. The relations in \((Q,I)\) yield that \(\operatorname{Ker}\pi=L_{\beta}\). Since the composition \(\partial_{1}\) of the projective cover \(\pi\colon P_{t(\alpha)}\twoheadrightarrow L_{\alpha}\) with the embedding \(L_{\alpha}\hookrightarrow P_{s(\alpha)}\) is given by the right multiplication with \(\alpha\), Proposition 10.4 and Corollary 9.8 imply that \[\tau(L_{\alpha})\cong\operatorname{Ker}(\omega_{A/R}\otimes_{A}\partial_{1})\cong\operatorname{Ker}(P_{t(\alpha)}\xrightarrow{\cdot(\pm\alpha)}I_{s(\alpha)})\cong L_{\beta}\cong\Omega(L_{\alpha})\,.\] The Auslander-Reiten formula yields that \(\operatorname{Ext}^{1}_{A}(L_{\alpha},\tau(L_{\alpha}))\cong\underline{\operatorname{End}}_{A}(L_{\alpha})^{\vee}\cong k\). This implies that any non-split exact sequence \(0\to\tau(L_{\alpha})\to E\to L_{\alpha}\to 0\) of \(A\)-lattices, and thus (10.7), is almost-split. The last lemma suggests computing the projective resolutions of arrow ideals. **10.8**.: Any arrow \(\alpha\in Q_{1}\) is the beginning of a unique path \(d_{\alpha}\coloneqq\alpha_{n}\dots\alpha_{2}\alpha_{1}\), introduced in 8.5, which determines the projective resolution of \(L_{\alpha}\) as follows. 1. If \(d_{\alpha}\) is a differential cycle, the projective resolution of \(L_{\alpha}\) is periodic: \[\cdots\longrightarrow P_{t(\alpha_{1})}\xrightarrow{\cdot\alpha_{1}}P_{t(\alpha_{n})}\xrightarrow{\cdot\alpha_{n}}\cdots\xrightarrow{\cdot\alpha_{3}}P_{t(\alpha_{2})}\xrightarrow{\cdot\alpha_{2}}P_{t(\alpha_{1})}\] 2. Otherwise, \(d_{\alpha}\) is a differential walk and the projective resolution of \(L_{\alpha}\) is finite: \[0\longrightarrow P_{t(\alpha_{n})}\xrightarrow{\cdot\alpha_{n}}\cdots\xrightarrow{\cdot\alpha_{3}}P_{t(\alpha_{2})}\xrightarrow{\cdot\alpha_{2}}P_{t(\alpha_{1})}\] Adding the syzygies to projective resolutions of arrow ideals and certain radical embeddings yields the complete picture of the category \(\operatorname{lat}(A)\). In particular, any indecomposable \(A\)-lattice is projective or an arrow ideal. **10.9 Proposition**.: _The Auslander-Reiten quiver of \(A\)-lattices is given by the syzygy sequences of the form (10.7) together with the inclusion \(\operatorname{rad}(P_{j})\hookrightarrow P_{j}\) for each transition vertex \(j\) of \(Q\)._ Proof.: For simplicity of the presentation, assume that the quiver \(Q\) is connected. Each differential cycle in \((Q,I)\) induces a periodic \(\tau\)-orbit in the Auslander-Reiten quiver \(\Gamma_{\operatorname{lat}(A)}\). Each differential walk between transition vertices gives rise to a full finite \(\tau\)-orbit. Completing these orbits by radical embeddings of indecomposable projectives and projective covers yields a full connected component \(\mathcal{C}\) of the quiver \(\Gamma_{\operatorname{lat}(A)}\).
A result of Wiedemann [72] implies that \(\mathcal{C}=\Gamma_{\operatorname{lat}(A)}\). **Example**.: The Auslander-Reiten quiver of \(A\)-lattices in Example 9.3 has three finite and two periodic \(\tau\)-orbits, indicated by dashed arrows. **Remark**.: It is also possible to apply the Drozd-Kirichenko Rejection Lemma [23] (see also [22]) to derive Proposition 10.9. Alternatively, one may determine the indecomposable \(A\)-lattices using the works [38] or [65] on _Bäckström orders_, and then compute their Auslander-Reiten quiver via a method by Roggenkamp [68]. **Maximal Cohen-Macaulay modules and the singularity category.** According to Lemma 7.4, we may define _maximal Cohen-Macaulay \(A\)-modules_ as \(A\)-lattices \(M\) satisfying \(\operatorname{Ext}_{A}^{i}(M,A)=0\) for \(i\geq 1\). As in Section 7, we denote by \(\underline{\operatorname{mCM}}(A)\) the stable category of such modules. The projective resolutions computed in 10.8 can be used to show the following. **10.12 Lemma**.: _An arrow \(\alpha\in Q_{1}\) appears in a differential cycle if and only if its left ideal \(L_{\alpha}\) is a non-projective maximal Cohen-Macaulay \(A\)-module. _ According to Propositions 8.4 and 8.6, \(A\) is a Gorenstein \(R\)-algebra. Moreover, Lemma 8.3 (3) implies that \(A\) has an isolated singularity. In particular, the dualising bimodule of \(A\) gives rise to a Serre functor on \(\mathbf{D}_{\operatorname{sg}}(A)\) by Theorem 7.8. **10.13 Theorem**.: _The singularity category \(\mathbf{D}_{\operatorname{sg}}(A)\) of the gentle Gorenstein \(R\)-algebra \(A\) has the following description._ 1. _There is a bijection between the set_ \(Q_{1}^{dc}\) _of all arrows in_ \(Q\) _which appear in differential cycles and the set of isomorphism classes of indecomposable objects in the category_ \(\mathbf{D}_{\operatorname{sg}}(A)\) _given by_ \(\alpha\longmapsto L_{\alpha}\)_._ Koszul duality of two gentle quivers translates into a natural duality of their associated graphs, see [57] and [59]. Finite dimensional gentle algebras have been classified up to derived equivalence by Amiot, Plamondon and Schroll [2] and Opper [56] using a geometric model of the corresponding surface elaborated by Opper, Plamondon and Schroll [57]. We refer to the introductions therein for a broader overview on gentle algebras. _Acknowledgements_. We would like to acknowledge our intellectual debt to Ragnar-Olaf Buchweitz, whose pioneering monograph [17] served as a major inspiration for this project. The motto of that work: _Guided, but not led, by commutative algebra_, may well be a subtitle to [46] and to this manuscript.
We thank our collaborators Luchezar Avramov, Dave Benson, John Greenlees, and Julia Pevtsova, for freely sharing their insights on Gorenstein phenomena and duality. Our thanks also to Janina Letz for comments and corrections on an earlier version of this manuscript. The authors are very grateful to an anonymous referee for important corrections and useful suggestions on the exposition of this paper. The first author would like to thank the Algebra group of the Ruhr University Bochum for a stimulating research environment. The second author thanks the organisers of the ICRA workshop for the opportunity to present this work in that forum, and the editors of this volume for their patience. _Funding_. WG was fully supported by the Ruhr University Bochum during his work on the present article. SBI was partly supported by NSF grant DMS-2001368.
2308.07030
Expanding bipartite Bell inequalities for maximum multi-partite randomness
Nonlocal tests on multipartite quantum correlations form the basis of protocols that certify randomness in a device-independent (DI) way. Such correlations admit a rich structure, making the task of choosing an appropriate test difficult. For example, extremal Bell inequalities are tight witnesses of nonlocality, however achieving their maximum violation places constraints on the underlying quantum system, which can reduce the rate of randomness generation. As a result there is often a trade-off between maximum randomness and the amount of violation of a given Bell inequality. Here, we explore this trade-off for more than two parties. More precisely, we study the maximum amount of randomness that can be certified by correlations exhibiting a violation of the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality. We find that maximum quantum violation and maximum randomness are incompatible for any even number of parties, with incompatibility diminishing as the number of parties grow, and conjecture the precise trade-off. We also show that maximum MABK violation is not necessary for maximum randomness for odd numbers of parties. To obtain our results, we derive new families of Bell inequalities certifying maximum randomness from a technique for randomness certification, which we call "expanding Bell inequalities". Our technique allows one to take a bipartite Bell expression, known as the seed, and transform it into a multipartite Bell inequality tailored for randomness certification, showing how intuition learned in the bipartite case can find use in more complex scenarios.
Lewis Wooltorton, Peter Brown, Roger Colbeck
2023-08-14T09:41:04Z
http://arxiv.org/abs/2308.07030v1
# Expanding bipartite Bell inequalities for maximum multi-partite randomness ###### Abstract Nonlocal tests on multipartite quantum correlations form the basis of protocols that certify randomness in a device-independent (DI) way. Such correlations admit a rich structure, making the task of choosing an appropriate test difficult. For example, extremal Bell inequalities are tight witnesses of nonlocality; however, achieving their maximum violation places constraints on the underlying quantum system, which can reduce the rate of randomness generation. As a result there is often a trade-off between maximum randomness and the amount of violation of a given Bell inequality. Here, we explore this trade-off for more than two parties. More precisely, we study the maximum amount of randomness that can be certified by correlations exhibiting a violation of the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality. We find that maximum quantum violation and maximum randomness are incompatible for any even number of parties, with incompatibility diminishing as the number of parties grows, and conjecture the precise trade-off. We also show that maximum MABK violation is not necessary for maximum randomness for odd numbers of parties. To obtain our results, we derive new families of Bell inequalities certifying maximum randomness from a technique for randomness certification, which we call "expanding Bell inequalities". Our technique allows one to take a bipartite Bell expression, known as the seed, and transform it into a multipartite Bell inequality tailored for randomness certification, showing how intuition learned in the bipartite case can find use in more complex scenarios. ## I Introduction Distant measurements on a shared quantum system can display correlations inaccessible to any locally realistic model [1; 2]. Such correlations are termed nonlocal and provide a device-independent (DI) witness of useful quantum characteristics, opening up a new paradigm of information processing [3]. Tasks such as randomness expansion (DIRE) [4; 5; 6; 7; 8], amplification [9] and key distribution [10; 11; 12; 13; 14; 15] can all be achieved in the DI regime, where security is proven based on the observed correlations and without trusting the inner workings of the devices. Moreover, nonlocal correlations can be used to prove the presence of a particular state or sets of measurements, known as self-testing [16; 17; 18; 19; 20; 21]. Certification of randomness typically uses a Bell inequality and different Bell inequalities give different amounts of certified randomness. Extremal Bell inequalities are a natural choice, as they constitute the minimal set separating local and non-local correlations. However, the set of correlations compatible with high violation is tightly constrained and it is natural to ask how these constraints affect the certification of randomness. For example, in the simplest bipartite scenario, correlations with higher violation of the only extremal Bell inequality, the Clauser-Horne-Shimony-Holt (CHSH) inequality, cannot certify maximal DI randomness, while those with a lower violation can [22; 23]. Further complications arise when scenarios with more inputs, outputs or parties are considered because the number of different classes of extremal Bell inequalities grows and it becomes more difficult to navigate the trade-offs between extremal Bell violation and maximum certifiable randomness. 
Alongside foundational interest, this question is also motivated practically, in finding more robust DIRE protocols [23]. In this work we study the family of multipartite Bell inequalities due to Mermin, Ardehali, Belinskii, and Klyshko (MABK) [24; 25; 26] and their usefulness for DIRE. These all have two inputs and two outputs per party. The one- and two-sided randomness given a tripartite MABK violation was bounded in [27; 28], and compared to other tripartite inequalities in [29]. Considering global randomness, reference [30] gave a bipartite construction for certification of \(N\) bits of randomness using \(N\) copies of almost unentangled states. Reference [31] showed how, for an odd number of parties \(N\), maximum MABK violation can certify \(N\) bits of DI randomness, whilst reference [32] provided an alternative family of Bell inequalities certifying \(N\) bits in the even case. This leaves open the following questions: is MABK suitable for maximum randomness generation when \(N\) is even, and if not, what is the compatibility trade-off between MABK violation and maximum randomness for arbitrary \(N\)? Are there correlations with lower MABK that achieve maximal randomness when \(N\) is odd? To analyse multipartite nonlocality, we require tools to construct multipartite Bell tests. To do so, one approach is to generalize intuition from the well-understood CHSH scenario; this has been done to extend the number of measurements per party [33; 34], or the number of parties [35], where it was shown how to construct multipartite Bell inequalities from a bipartite seed. Self-testing results using projections onto bipartite subsystems [36; 37; 38] suggest stronger capabilities of the technique presented in reference [35], prompting the question: can it be leveraged for multipartite randomness certification? Our work seeks to answer these questions. We show maximum MABK violation certifies \(N+1/2-\log_{2}(1+\sqrt{2})/\sqrt{2}\approx N-0.4\) bits of global randomness when \(N\) is even, and find a new family of constructions, for arbitrary \(N\), that certify \(N\) bits. For the even case, our constructions simultaneously achieve any MABK violation up to a non-maximal value, beyond which, using a second construction, randomness necessarily trades off for higher violation. Our results lower bound the randomness versus MABK trade-off, and we provide a numerical upper bound which suggests tightness. Additionally, we show that in the asymptotic limit our constructions achieve arbitrarily close to the maximum quantum MABK value, whilst certifying maximal DI randomness. We also use our first construction to conjecture that maximum MABK violation is not necessary to achieve maximum randomness when \(N\) is odd, confirming this for specific values of \(N\). Our constructions are formed from concatenation, following intuition derived from the bipartite case [23]. Specifically, we extend the technique presented in reference [35], originally introduced to witness genuinely multipartite nonlocality. Our extension exploits self-testing properties of the seed to certify randomness. In particular, our main technical contribution is a decoupling lemma (see Lemma 2), which shows under certain circumstances that the maximum violation of an expanded Bell expression implies a decoupling between the honest parties and Eve. Such decoupling occurs for extremal quantum correlations, and guarantees security [39]. 
We envisage this technique to be a useful tool in multipartite DI cryptography and of independent interest. For the seed inequality, we introduce a new family of versatile bipartite Bell expressions (Lemma 8) and prove that they are self-tests, which may also be of independent interest. This family generalises the families found in [23]. The paper is structured as follows. In Section II we provide the necessary background and notation. In Section III we detail our enhancement of the technique from [35], along with the decoupling lemma. In Section IV we describe our constructions, and apply this technique to certify \(N\) bits of randomness across a broad range of MABK violations, which we conjecture to be optimal; this result is summarized in Lemma 6. We additionally show how the conjectured highest value of MABK at which maximum randomness remains possible tends to the maximum quantum value as the number of parties increases (see Proposition 4). In Section V we consider MABK values beyond the conjectured threshold for maximum randomness, up to the maximum quantum value. Here we give a candidate form for the exact trade-off, which is summarized in Lemma 9. We conclude with a discussion in Section VI. All technical proofs can be found in the appendices. ## II Background ### Multiparty DI-scenario We consider an \(N\) party, 2-input 2-output DI scenario, where \(N\) isolated devices are given a random input, labeled \(x_{k}\in\{0,1\}\), and produce an output \(a_{k}\in\{0,1\}\) stored in a classical register \(R_{k}\), where \(k\in\{1,...,N\}\) indexes the party. We use bold to denote tuples; for example, \(\mathbf{a}=(a_{1},...,a_{N})\) denotes an \(N\)-bit string of device outputs. The behaviour of the devices is then described by the joint conditional probability distribution \(p(\mathbf{a}|\mathbf{x})\), which must be no-signalling following the isolation of the devices. A behaviour \(p(\mathbf{a}|\mathbf{x})\) is quantum if there exists (following Naimark's dilation theorem [40]) a pure state and sets of orthonormal projectors that can reproduce the distribution via the Born rule. Specifically, we consider an adversary Eve, who wishes to guess the device outputs, and let \(|\Psi\rangle_{\tilde{\mathbf{Q}}E}\) denote the global state in the Hilbert space \(\mathcal{H}_{\tilde{\mathbf{Q}}}\otimes\mathcal{H}_{E}\), where \(\mathcal{H}_{\tilde{\mathbf{Q}}}=\bigotimes_{k=1}^{N}\mathcal{H}_{\tilde{Q}_{k}}\). Throughout the text, tildes will denote elements pertaining to physical Hilbert spaces \(\mathcal{H}_{\tilde{Q}_{k}}\) (whose dimension is unknown), whilst no tilde will describe qubit Hilbert spaces \(\mathcal{H}_{Q_{k}}\). We let \(\{\tilde{P}^{(k)}_{a_{k}|x_{k}}\}_{a_{k}}\) be binary-outcome projective measurements on \(\mathcal{H}_{\tilde{Q}_{k}}\), and we denote the corresponding observables of each party by \(\tilde{A}^{(k)}_{x_{k}}=\tilde{P}^{(k)}_{0|x_{k}}-\tilde{P}^{(k)}_{1|x_{k}}\); here bracket superscripts will typically keep track of the party to which the object belongs. 
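To make this bookkeeping concrete, the following is a minimal numerical sketch (ours, not part of the paper) that assembles a behaviour \(p(\mathbf{a}|\mathbf{x})\) from an explicit quantum strategy via the Born rule. The GHZ state and equatorial qubit measurements used here are only an illustrative choice (they reappear in the constructions later in the text), and all function names are ours.

```python
# A minimal sketch of the DI-scenario bookkeeping: qubit projectors P_{a|x} built
# from binary observables A_x, a GHZ state as an example joint state, and the
# behaviour p(a|x) assembled via the Born rule.
import itertools
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def projectors(theta):
    """Projectors {P_{0|x}, P_{1|x}} of the observable cos(theta) X + sin(theta) Y."""
    A = np.cos(theta) * X + np.sin(theta) * Y
    return [(I2 + (-1) ** a * A) / 2 for a in (0, 1)]

N = 3
angles = [0.0, 2 * np.pi / 3]           # example: inputs x = 0, 1 -> measurement angles
psi = np.zeros(2 ** N, dtype=complex)   # |GHZ> = (|0...0> + i|1...1>)/sqrt(2)
psi[0], psi[-1] = 1 / np.sqrt(2), 1j / np.sqrt(2)

def p(a, x):
    """Born rule: p(a|x) = <psi| P^{(1)}_{a1|x1} (x) ... (x) P^{(N)}_{aN|xN} |psi>."""
    P = np.array([[1.0]], dtype=complex)
    for ak, xk in zip(a, x):
        P = np.kron(P, projectors(angles[xk])[ak])
    return np.real(np.vdot(psi, P @ psi))

for x in itertools.product((0, 1), repeat=N):
    dist = [p(a, x) for a in itertools.product((0, 1), repeat=N)]
    assert abs(sum(dist) - 1) < 1e-12   # each p(.|x) is a normalised distribution
print("p(a|x=000):", [round(p(a, (0, 0, 0)), 3) for a in itertools.product((0, 1), repeat=N)])
```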
Following measurements \(\mathbf{x}\) by the honest parties, we obtain the classical quantum state \(\rho_{\mathbf{R}E|\mathbf{x}}=\sum_{\mathbf{a}}|\mathbf{a}\rangle\!\langle\mathbf{a}|\mathbf{R}\otimes \rho^{\mathbf{a}|\mathbf{x}}_{\mathbf{E}}\), where \(\rho^{\mathbf{a}|\mathbf{x}}_{E}=\mathrm{Tr}_{\tilde{\mathbf{Q}}}\big{[}(\tilde{P}_{\mathbf{a}| \mathbf{x}}\otimes\mathbb{I}_{E})|\Psi\rangle\!\langle\Psi|_{\tilde{\mathbf{Q}}E}\big{]}\) and \(\tilde{P}_{\mathbf{a}|\mathbf{x}}=\bigotimes_{k=1}^{N}\tilde{P}^{(k)}_{a_{k}|x_{k}}\). The behaviour is recovered by \(p(\mathbf{a}|\mathbf{x})=\mathrm{Tr}\big{[}\rho^{\mathbf{a}|\mathbf{x}}_{E}\big{]}\). ### Multiparty Nonlocality Given an observed distribution \(p(\mathbf{a}|\mathbf{x})\), what is a suitable way to quantify its distance from the local boundary? When \(N=2\), the CHSH inequality, as the only non-trivial facet, is a natural choice. The multiparty scenario is more complex however, and the number of classes of Bell inequality increases rapidly with \(N\)[41]. Instead a general way to quantify this distance for an arbitrary scenario can be used, which is related to how much the local polytope needs to be diluted to encompass a given non-local correlation. Computing this can be done using linear programming (see Appendix B.5). In this work, we choose to study one such CHSH generalization and its relationship to DI randomness; the MABK family [24; 25; 26], \[\langle M_{N}\rangle =\frac{1}{2}\Bigg{(}\frac{1-i}{2}\Bigg{)}^{N-1}\Big{\langle}\prod _{k=1}^{N}(\tilde{A}_{0}^{(k)}+i\tilde{A}_{1}^{(k)})\Big{\rangle}\] \[\quad+\frac{1}{2}\Bigg{(}\frac{1+i}{2}\Bigg{)}^{N-1}\Big{\langle} \prod_{k=1}^{N}(\tilde{A}_{0}^{(k)}-i\tilde{A}_{1}^{(k)})\Big{\rangle}\] \[=2^{\frac{1-N}{2}}\sum_{\mathbf{x}}\cos\Big{[}\frac{\pi}{2}\Big{(} \frac{N-1}{2}-\sum_{k=1}^{N}x_{k}\Big{)}\Big{]}\langle\tilde{A}_{\mathbf{x}}\rangle, \tag{1}\] where \(\langle\tilde{A}_{\mathbf{x}}\rangle=\sum_{\mathbf{a}}(-1)^{\sum_{k=1}^{N}a_{k}}p(\bm {a}|\mathbf{x})=\langle\Psi|\tilde{A}_{x_{1}}^{(1)}\otimes\cdots\otimes\tilde{A}_{ x_{N}}^{(N)}\otimes\mathbb{I}_{E}|\Psi\rangle\) when the behaviour is quantum. [In effect the prefactor \((-1)^{\sum_{k=1}^{N}a_{k}}\) corresponds to relabelling the outcomes \(0\mapsto 1\) and \(1\mapsto-1\) to match the usual formulation of observables with eigenvalues \(\pm 1\).] The local bound is given by \(\langle M_{N}\rangle\leq 1\), and the maximum quantum value is \(2^{(N-1)/2}\). ### Self-testing It is known that maximum quantum violation of the MABK family is uniquely achieved, up to local isometries, by the GHZ state and pairs of maximally anticommuting observables [42; 43]. This form of uniqueness arising from a Bell expression is known as self-testing. In this work, we take the choice \(|\psi_{\text{GHZ}}\rangle=(|0\rangle^{\otimes N}+i|1\rangle^{\otimes N})/\sqrt {2}\), \(A_{0}^{(k)}=\cos\theta_{N}^{+}\,\sigma_{X}+\sin\theta_{N}^{+}\,\sigma_{Y}\), and \(A_{1}^{(k)}=\cos\theta_{N}^{-}\,\sigma_{X}+\sin\theta_{N}^{-}\,\sigma_{Y}\), where \(\theta_{N}^{\pm}=(\pi/4)(1/N\pm 1)\). Whilst self-testing statements can be formally defined between \(N\) parties (see, e.g. [36; 20]), in our formulation it will only be necessary to use a bipartite definition of self-testing. **Definition 1** (Bipartite self-test).: Let \(k,l\in\{1,...,N\}\) index two distinct parties. Define the sets of target qubit projectors \(\{P_{a_{k}|x_{k}}^{(k)}\}_{a_{k}}\), \(\{P_{a_{l}|x_{l}}^{(l)}\}_{a_{l}}\), and a target two qubit state \(|\Phi\rangle_{Q_{k}Q_{l}}\). 
Let \(I^{(k,l)}\) be a Bell operator between parties \(k\) and \(l\), with maximum quantum value \(\eta^{\text{Q}}\). The inequality \(\langle I^{(k,l)}\rangle\leq\eta^{\text{Q}}\)_self-tests_ the target state and measurements if, for all physical quantum strategies \((\tilde{\rho}_{\tilde{Q}_{k}\tilde{Q}_{l}},\tilde{P}_{a_{k}|x_{k}}^{(k)},\tilde{P}_{a_{l}|x_{l}}^{(l)})\) that satisfy \(\langle I^{(k,l)}\rangle=\eta^{\text{Q}}\), there exists a local isometry \(V:\mathcal{H}_{\tilde{Q}_{k}}\otimes\mathcal{H}_{\tilde{Q}_{l}}\otimes \mathcal{H}_{E}\rightarrow\mathcal{H}_{Q_{k}}\otimes\mathcal{H}_{Q_{l}} \otimes\mathcal{H}_{\text{Junk}}\) and ancillary state \(|\xi\rangle_{\text{Junk}}\), such that \[V\Big{(}\tilde{P}_{a_{k}|x_{k}}^{(k)}\otimes\tilde{P}_{a_{l}|x_ {l}}^{(l)}\otimes\mathbb{I}_{E}\Big{)}|\Psi\rangle_{\tilde{Q}_{k}\tilde{Q}_{l }E}\\ =\Big{(}P_{a_{k}|x_{k}}^{(k)}\otimes P_{a_{l}|x_{l}}^{(l)}\Big{)}|\Phi\rangle_{Q_{k}Q_{l}}\otimes|\xi\rangle_{\text{Junk}}, \tag{2}\] for all \(a_{k}\), \(a_{l}\), \(x_{k}\), \(x_{l}\), where \(|\Psi\rangle\) is a purification of \(\tilde{\rho}\). We define the shifted Bell operator as \(\bar{I}^{(k,l)}=\eta^{\text{Q}}\mathbb{I}-I^{(k,l)}\), and we say \(\bar{I}^{(k,l)}\) admits a sum-of-squares (SOS) decomposition if there exists a set of polynomials, \(\{M_{i}^{(k,l)}\}_{i}\), of the operators \(\tilde{P}_{a_{k}|x_{k}}^{(k)}\), \(\tilde{P}_{a_{l}|x_{l}}^{(l)}\), satisfying \[\bar{I}^{(k,l)}=\sum_{i}M_{i}^{(k,l)\dagger}M_{i}^{(k,l)}. \tag{3}\] SOS decompositions can be used to enforce algebraic constraints on any quantum state \(\tilde{\rho}\), and measurements \(\tilde{P}_{a_{k}|x_{k}}^{(k)},\tilde{P}_{a_{l}|x_{l}}^{(l)}\), satisfying \(\langle I^{(k,l)}\rangle=\eta^{\text{Q}}\). For example, when \(\tilde{\rho}=|\psi\rangle\!\langle\psi|\) is pure, it must satisfy \[M_{i}^{(k,l)}|\psi\rangle=0,\ \forall i, \tag{4}\] (or \(\text{Tr}[(M_{i}^{(k,l)})^{\dagger}M_{i}^{(k,l)}\tilde{\rho}]=0\) more generally). Typically, we find that constraints of the form in Eq. (4) are only satisfied by a unique state (up to local isometries), and when that is the case we call Eq. (4) the self-testing criteria. ### DI randomness certification We use the conditional von Neumann entropy, \(H(\mathbf{R}|\mathbf{X}=\mathbf{x}^{*},E)_{\rho_{\mathbf{R}E|\mathbf{x}^{*}}}\) to measure the DI randomness generation rate. This is the correct quantity for spot-checking DI random number generation, where \(\mathbf{x}^{*}\) is a specified measurement choice used to generate randomness (see [44] for a discussion of this and other possibilities). The asymptotic rate of DI randomness generation is given by \[r=\inf_{\begin{subarray}{c}|\Psi\rangle_{\tilde{\mathbf{Q}}E},\big{\{}\{\tilde{P}_{a_{k}|x_{k}}^{(k)}\}_{a_{k}}\big{\}}_{k}\\ \text{compatible with }f_{i}(P_{\text{obs}})\end{subarray}}H(\mathbf{R}|\mathbf{X}=\mathbf{x}^{*},E)_{ \rho_{\mathbf{R}E|\mathbf{x}^{*}}} \tag{5}\] and we require lower bounds on this quantity over states and measurements compatible with the observed statistics, \(P_{\text{obs}}\), or linear functions \(f_{i}(P_{\text{obs}})\) of them, such as the value of some Bell expression. We will typically only consider one functional, \(f\), which will be a Bell inequality with observed value \(f(P_{\text{obs}})=\omega\); then we denote the infimum in Eq. (5) as the function \(R_{f}(\omega)\). The asymptotic rate can be used as a basis for rates with finite statistics using tools such as the entropy accumulation theorem [45; 46; 47]. 
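As a quick numerical sanity check of the quantities introduced in this section (a sketch of ours, not from the paper), the snippet below evaluates the MABK expression of Eq. (1) for the GHZ state and the observables with angles \(\theta_{N}^{\pm}\) quoted in the self-testing subsection, confirming for small \(N\) that this strategy attains the quantum bound \(2^{(N-1)/2}\).

```python
# Evaluate <M_N> of Eq. (1) for the GHZ state with observables at angles
# theta_N^{+/-} = (pi/4)(1/N +/- 1); the result should equal 2^{(N-1)/2}.
import itertools
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def mabk_value(N):
    angles = {0: (np.pi / 4) * (1 / N + 1), 1: (np.pi / 4) * (1 / N - 1)}
    obs = {x: np.cos(a) * X + np.sin(a) * Y for x, a in angles.items()}
    psi = np.zeros(2 ** N, dtype=complex)
    psi[0], psi[-1] = 1 / np.sqrt(2), 1j / np.sqrt(2)          # |psi_GHZ>
    val = 0.0
    for x in itertools.product((0, 1), repeat=N):
        A = np.array([[1.0]], dtype=complex)
        for xk in x:
            A = np.kron(A, obs[xk])
        corr = np.real(np.vdot(psi, A @ psi))                   # <A_x>
        val += np.cos(np.pi / 2 * ((N - 1) / 2 - sum(x))) * corr
    return 2 ** ((1 - N) / 2) * val

for N in (2, 3, 4, 5):
    print(N, round(mabk_value(N), 6), "bound:", round(2 ** ((N - 1) / 2), 6))
```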
Because we study fundamental limits on certifiable DI randomness we work in the noiseless scenario, using self-testing statements to certify DI randomness. Here, \(f(P_{\text{obs}})\) is a Bell expression, and we show all states and measurements that achieve its maximum violation correspond to a post-measurement state in tensor product with Eve, \(\rho_{\mathbf{R}|\mathbf{x}^{*}}=\rho_{\mathbf{R}|\mathbf{x}^{*}}\otimes\rho_{E}\). This allows us to directly evaluate the conditional entropy \(H(\mathbf{R}|\mathbf{X}=\mathbf{x}^{*},E)=H(\mathbf{R}|\mathbf{X}=\mathbf{x}^{*})\), and the infimum becomes trivial since all compatible strategies give rise to the same classical distribution (see Appendix A.3 for details). The Bell expressions we use are also applicable in the case of noise, following a method analogous to that used in [23]. ## III Expanding Bell inequalities In this section we discuss and enhance the technique presented in [35], which allows us to derive the main results of this work. Ref. [35] introduced a method for building multipartite Bell inequalities by expanding a bipartite inequality, called the "seed", which can be used to witness genuinely multipartite nonlocality. One constructs a new Bell expression by summing the seed over different subsets of parties, whilst the remaining parties perform some fixed measurement. The more multipartite nonlocal the correlations, the more bipartite terms are violated. This resembles other recent results that enable multipartite self-testing by projections onto bipartite subsystems [36; 37; 38]. In this work we extend this technique to make it suitable for DI cryptographic purposes. More precisely, we consider cases where the seed is a bipartite self-test, and use the maximum quantum violation of the expanded Bell expression, constructed in an equivalent manner to [35], to draw conclusions about the post-measurement state held between the honest parties and Eve. These conclusions allow us to derive rates for randomness certification. ### Definition We begin by introducing a generic formulation for expanding Bell expressions [35], shortly followed by an example. **Definition 2** (Expanded Bell expressions).: Let \(\mathbf{C}\) be an \(N\times N\) nonzero matrix with entries \(c_{k,l}\) satisfying \(c_{k,l}\in\{0,1\}\) if \(k<l\), and \(c_{k,l}=0\) otherwise. For each pair \(k,l\) for which \(c_{k,l}\neq 0\) let \(\{I^{(k,l)}_{\mathbf{\mu}}\}_{\mathbf{\mu}}\) denote a set of bipartite Bell expressions between parties \(k\) and \(l\) where \(\mathbf{\mu}\) indexes over the \(N-2\) measurement outcomes of all parties excluding \(k,l\) and each Bell-inequality is equivalent up to output relabellings, with a local bound \(\eta^{\text{L}}\) strictly less than the maximum quantum value \(\eta^{\text{Q}}\). Then the expanded Bell operator \(I\) from the seed \(I^{(k,l)}_{\mathbf{\mu}}\) is defined as \[I=\sum_{k,l}c_{k,l}\Bigg{(}\sum_{\mathbf{\mu}}\tilde{P}^{(\overline{k,l})}_{\mathbf{ \mu}|\mathbf{0}}I^{(k,l)}_{\mathbf{\mu}}\Bigg{)}, \tag{6}\] where \(\tilde{P}^{(\overline{k,l})}_{\mathbf{\mu}|\mathbf{0}}=\prod_{k^{\prime}=1,k^{\prime }\neq k,l}^{N}\tilde{P}^{(k^{\prime})}_{\mu_{k^{\prime}}|0}\) is the projector for all parties excluding \(k\) and \(l\), corresponding to the joint setting \(\mathbf{0}\) and joint outcome \(\mathbf{\mu}\)1. Footnote 1: We have written the Bell operator in Eq. (6) in terms of projectors, which is convenient since we consider quantum strategies. 
We can however easily rewrite it in a theory-independent way, by taking the expectation value and making the substitution \(p(\mathbf{a}|\mathbf{x})=\langle\tilde{P}_{\mathbf{a}|\mathbf{x}}\rangle\). Consider the following tripartite example. To ease notation we label the observables of the three parties \(A_{x}=P^{A}_{0|x}-P^{A}_{1|x}\) and similarly for \(B\) and \(C\). From reference [23], observing saturation of the following Bell expression certifies 2 bits of global randomness, \[-3\sqrt{3}\leq\langle I^{A,B}\rangle\leq 3\sqrt{3}, \tag{7}\] where \(I^{A,B}=A_{0}B_{0}+2\big{(}A_{0}B_{1}+A_{1}B_{0}-A_{1}B_{1}\big{)}\). The bound \(\langle I^{A,B}\rangle=\pm 3\sqrt{3}\) is uniquely achieved, up to local isometries, by the bipartite strategy \[\begin{split}&\rho_{Q_{A}Q_{B}}=|\Phi_{\pm}\rangle\langle\Phi_{ \pm}|,\ |\Phi_{\pm}\rangle=\frac{|00\rangle\pm i|11\rangle}{\sqrt{2}},\\ & A_{0}=B_{0}=\sigma_{X},\ A_{1}=B_{1}=\frac{-\sigma_{X}+\sqrt{3} \sigma_{Y}}{2}.\end{split} \tag{8}\] Now consider a tripartite extension of the above strategy, \[\rho_{Q_{A}Q_{B}Q_{C}}=|\psi_{\text{GHZ}}\rangle\langle\psi_{\text{GHZ}}|,\] \[A_{0}=B_{0}=C_{0}=\sigma_{X},\ A_{1}=B_{1}=C_{1}=\frac{-\sigma_{X}+\sqrt{3} \sigma_{Y}}{2}. \tag{9}\] One can verify that when all three parties measure \(x=y=z=0\), their outcomes are uniformly distributed, resulting in 3 bits of raw randomness; how can we certify this device-independently? Notice \(|\psi_{\text{GHZ}}\rangle\) has the following property: when a single party, say \(A\), measures \(\sigma_{X}\), the leftover state held by \(BC\) is \(|\Phi_{\pm}\rangle\) where the sign depends on the parity of \(A\)'s measurement outcome \(\mu\in\{0,1\}\). Parties \(B\) and \(C\) can now saturate one of the bounds \(\langle I^{B,C}\rangle=\pm 3\sqrt{3}\) conditioned on \(A\)'s measurement. By using \(I^{A,B}_{\mu}=(-1)^{\mu}I^{A,B}\) as the seed, and choosing the matrix \(\mathbf{C}\) with party indexes \(k,l\in\{A,B,C\}\), \[\mathbf{C}=\begin{bmatrix}0&0&1\\ 0&0&1\\ 0&0&0\end{bmatrix}, \tag{10}\] we can construct the expanded Bell expression according to Definition 2: \[\begin{split}I&=\underbrace{P^{A}_{\mu=0|x=0}I^{B,C}-P^{A}_{\mu=1|x=0}I^{B,C}}_{k=B,\,l=C}\\ &\quad+\underbrace{P^{B}_{\mu=0|y=0}I^{A,C}-P^{B}_{\mu=1|y=0}I^{A,C}}_{k=A,\,l=C}\\ &=A_{0}I^{B,C}+B_{0}I^{A,C}.\end{split} \tag{11}\] Due to the properties of \(|\psi_{\text{GHZ}}\rangle\) discussed above, we find \(\langle I\rangle=2\cdot 3\sqrt{3}\) is achieved by the strategy in Eq. (9), and that \(I\) is a nontrivial Bell inequality, i.e., one that can be violated by quantum theory. In Lemma 1 we prove this claim in full generality, and will later rigorously prove that this is a sufficient condition for certifying maximum randomness. We call \(I\) an expanded Bell expression, since it is built from combining a bipartite seed, \(I^{(k,l)}_{\mathbf{\mu}}\), conditioned on fixed measurement settings \(\mathbf{0}\) and outcomes \(\mathbf{\mu}\) for the remaining \(N-2\) parties. We then sum over all possible outcomes of this \(N-2\) party measurement, changing the Bell expression accordingly, and over different combinations of parties \((k,l)\) chosen according to \(\mathbf{C}\). There needs to be a gap between the local and quantum bounds of \(\langle I\rangle\) for \(I\) to define a nontrivial Bell inequality; typically, we find the following upper bound is achievable, which is strictly greater than the local bound: **Lemma 1**.: _Let \(I\) be an expanded Bell expression according to Definition 2. 
The maximum quantum value of \(\langle I\rangle\) is upper bounded by \(\eta^{\mathrm{Q}}_{N}:=\sum_{k,l}c_{k,l}\eta^{\mathrm{Q}}\)._ Naturally, the only strategy that can achieve \(\langle I\rangle=\eta^{\mathrm{Q}}_{N}\) is the one satisfying \(\big{\langle}\sum_{\mathbf{\mu}}\tilde{P}^{(k,l)}_{\mathbf{\mu}|\mathbf{0}}I^{(k,l)}_{\mathbf{ \mu}}\big{\rangle}=\eta^{\mathrm{Q}}\) for all pairing combinations \(k,l\) with \(c_{k,l}>0\), i.e., the reduced state held between parties \(k\) and \(l\), following the projection \(\tilde{P}^{(k,l)}_{\mathbf{\mu}|\mathbf{0}}\) on the global state, must achieve the maximum quantum value of \(\langle I^{(k,l)}_{\mathbf{\mu}}\rangle=\eta^{\mathrm{Q}}\). This explains why we must include a dependence on \(\mathbf{\mu}\) for the Bell expression, since it should be tailored to the bipartite state following the outcome \(\mathbf{\mu}\). For simplicity we assume each bipartite state is equivalent up to local unitaries, meaning we effectively use a single seed \(I^{(k,l)}_{\mathbf{\mu}}\). In principle however, one could use different seeds depending on the value of \(\mathbf{\mu}\), which tailors the construction to non-symmetric states [35]. For our work we only consider the symmetric case. Additionally, one could further generalize by choosing specific measurement choices \(\mathbf{y}\) dependent on the choice of non-projecting parties \(k,l\), instead of fixing \(\mathbf{y}=\mathbf{0}\) as we have done here. A proof of Lemma 1 is given in Appendix A.1. ### Expanding Bell expressions and entropy evaluation Next we present the characteristic of expanded Bell expressions that allows us to certify DI randomness from witnessing its maximum quantum value, forming the main technical contribution of our work. **Lemma 2** (Decoupling lemma).: _Let \(I\) be an expanded \(N\)-party Bell expression defined in Definition 2 with binary inputs and outputs, and \(c_{k,N}=1\) if \(k<N\) and zero otherwise. Suppose for every \(I^{(k,N)}_{\mathbf{\mu}}\), there exists an SOS decomposition that self-tests the same pure bipartite entangled state \(|\Phi\rangle\) between parties \(k\) and \(N\), along with some ideal measurements \(P^{(k)}_{a_{k}|x_{k}},P^{(N)}_{a_{N}|x_{N}}\), according to Definition 1, satisfying \(\langle\Phi|P^{(k)}_{a_{k}|0}\otimes P^{(N)}_{a_{N}|0}|\Phi\rangle>0\) for all \(a_{k},a_{N}\). Then for any strategy \(|\Psi\rangle_{\tilde{\mathbf{G}}\tilde{\mathbf{E}}},\big{\{}\{\tilde{P}^{(k)}_{a_{k}| x_{k}}\}_{x_{k}}\big{\}}_{k}\) that achieves \(\langle I\rangle=\eta^{\mathrm{Q}}_{N}\), the post-measurement state \(\rho_{\mathbf{R}E}\), for measurement settings \(\mathbf{x}=\mathbf{0}\), admits the tensor product decomposition,_ \[\rho_{\mathbf{R}E|\mathbf{0}}=\rho_{\mathbf{R}|\mathbf{0}}\otimes\rho_{E}. \tag{12}\] Having established that Eve is decoupled it is then straightforward to evaluate the rate in Eq. (5) conditioned on observing the maximum quantum value of \(I\). **Lemma 3**.: _Let \(I,\eta^{\mathrm{Q}}_{N}\) be defined as in Lemma 2. Then_ \[R_{I}(\eta^{\mathrm{Q}}_{N})=H(\{p(\mathbf{a}|\mathbf{0})\}), \tag{13}\] _where \(H(\{p_{i}\})\) is the Shannon entropy of a distribution \(\{p_{i}\}_{i}\)._ The proof can be found in Appendix A.3. Lemma 2 allows one to relate observing the maximum quantum value of the expanded Bell expression \(I\), to a condition on the post-measurement state held by the parties and Eve, namely that it must be in tensor product with the purifying system \(E\). 
This allows one to directly evaluate the conditional entropy according to Lemma 3. ## IV Certifying \(N\) bits of DI randomness We now consider constructions for generating \(N\) bits of DI randomness. As previously mentioned, when \(N\) is odd the MABK family can be used [31]. For an arbitrary number of parties, we apply the previously outlined techniques to derive a suitable Bell expression for this task. This will involve generalizing a one-parameter family of quantum strategies, symmetric under permutation of the parties. ### \(N\) odd Using symmetry arguments, the authors of Ref. [31] showed that maximum violation of the MABK inequality, for \(N\) odd, implies maximum randomness. **Proposition 1** (Maximum randomness for \(N\) odd [31]).: _When \(N\) is odd, maximum quantum violation of the MABK family of Bell inequalities, given by Eq. (1), certifies \(N\) bits of global DI randomness, i.e._ \[R_{M_{N}}(2^{(N-1)/2})=N. \tag{14}\] From a self-testing point of view, maximum violation of the MABK family can only be achieved with a GHZ state and maximally anticommuting observables [42; 43]. For the form given in Eq. (1), the following strategy achieves maximum violation for \(N\) odd: \[\rho_{\mathbf{Q}}=|\psi_{\rm GHZ}\rangle\!\langle\psi_{\rm GHZ}| \tag{15}\] \[A_{0}^{(k)}=\cos\theta_{N}^{+}\,\sigma_{X}+\sin\theta_{N}^{+}\, \sigma_{Y},\] \[A_{1}^{(k)}=\cos\theta_{N}^{-}\,\sigma_{X}+\sin\theta_{N}^{-}\, \sigma_{Y},\] \[\qquad\qquad\qquad k\in\{1,...,N\},\] where \(\theta_{N}^{\pm}=\pi/(4N)\pm\pi/4\), and the observables satisfy \(A_{0}^{(k)}A_{1}^{(k)}+A_{1}^{(k)}A_{0}^{(k)}=0\) for each \(k\). Following this self-testing property, the infimum in Eq. (14) reduces to a single point up to symmetries, and the pure state held by the parties implies Eve must be uncorrelated, i.e., \(\rho_{\mathbf{R}E|\mathbf{x}^{*}}=\rho_{\mathbf{R}|\mathbf{x}^{*}}\otimes\rho_{E}\). By direct calculation, one can verify for \(N=4n-1\), \(n=1,2,3,...\), \(p(\mathbf{a}|\mathbf{0})=1/2^{N},\forall\mathbf{a}\), hence \(H(\{p(\mathbf{a}|\mathbf{0})\})=N\). Similarly for \(N=4n+1\), \(n=1,2,3,...\), \(p(\mathbf{a}|\mathbf{1})=1/2^{N},\forall\mathbf{a}\), implying \(H(\{p(\mathbf{a}|\mathbf{1})\})=N\). Taking \(N=3\) in the above construction we have \(M_{3}=\big{(}\langle\tilde{A}_{1}^{(0)}\tilde{A}_{0}^{(1)}\tilde{A}_{0}^{(2) }\rangle+\langle\tilde{A}_{0}^{(0)}\tilde{A}_{1}^{(1)}\tilde{A}_{0}^{(2)} \rangle+\langle\tilde{A}_{0}^{(0)}\tilde{A}_{0}^{(1)}\tilde{A}_{1}^{(2)} \rangle-\langle\tilde{A}_{1}^{(0)}\tilde{A}_{1}^{(1)}\tilde{A}_{1}^{(2)} \rangle\big{)}/2\), and the strategy in Eq. (15) reaches the algebraic bound of 2. It turns out \(\langle M_{3}\rangle=2\) implies 3 bits of randomness when all three parties measure 0. On the other hand, the correlators in the inequality must take the values \(\pm 1\) to reach the algebraic bound, so none of the input settings used in the inequality can generate more than 2 bits of global randomness (in fact they give exactly 2 bits when \(\langle M_{3}\rangle=2\)). ### \(N\) even When \(N\) is even, all full-body correlators must have non-zero weight to achieve the largest possible quantum value of MABK. This is incompatible with maximum randomness from one input combination. Hence, achieving the quantum bound does not certify maximum randomness; we show in Section V that maximum MABK violation certifies \(N+1/2-\log_{2}(1+\sqrt{2})/\sqrt{2}\approx N-0.4\) bits of global DI randomness. To do so we find a new Bell expression which is tailored to a strategy of the form in Eq. (15). 
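Before continuing, the following sketch (ours, not from the paper) checks the tripartite example of Section III and the \(N=3\) discussion above with explicit matrices: the seed of Eq. (7) reaches \(3\sqrt{3}\) on the strategy of Eq. (8), the expanded expression of Eq. (11) reaches \(2\cdot 3\sqrt{3}\) on the GHZ strategy of Eq. (9), and the input \(x=y=z=0\) produces uniform outcomes, i.e. 3 bits of raw randomness.

```python
# Numerical check of the tripartite example: seed value, expanded value, and the
# uniform outcome distribution when all parties use their first setting.
import itertools
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
A0 = X                              # first setting of every party, Eqs. (8)/(9)
A1 = (-X + np.sqrt(3) * Y) / 2      # second setting of every party

def expval(psi, M):
    return np.real(np.vdot(psi, M @ psi))

def kron(*ops):
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Seed I^{A,B} on |Phi_+> = (|00> + i|11>)/sqrt(2).
phi_plus = np.array([1, 0, 0, 1j], dtype=complex) / np.sqrt(2)
seed_terms = [(1, A0, A0), (2, A0, A1), (2, A1, A0), (-2, A1, A1)]
seed_val = sum(c * expval(phi_plus, kron(P, Q)) for c, P, Q in seed_terms)
print("seed:", round(seed_val, 6), "target:", round(3 * np.sqrt(3), 6))

# Expanded expression I = A0 I^{B,C} + B0 I^{A,C} on the GHZ strategy of Eq. (9).
ghz = np.zeros(8, dtype=complex)
ghz[0], ghz[-1] = 1 / np.sqrt(2), 1j / np.sqrt(2)
I_op = sum(c * kron(A0, P, Q) for c, P, Q in seed_terms)          # A0 * I^{B,C}
I_op = I_op + sum(c * kron(P, A0, Q) for c, P, Q in seed_terms)   # B0 * I^{A,C}
print("expanded:", round(expval(ghz, I_op), 6), "target:", round(6 * np.sqrt(3), 6))

# Outcome distribution for x = y = z = 0 (all parties measure sigma_X).
P0 = [(I2 + (-1) ** a * A0) / 2 for a in (0, 1)]
probs = [expval(ghz, kron(P0[a], P0[b], P0[c]))
         for a, b, c in itertools.product((0, 1), repeat=3)]
print("p(abc|000):", [round(p, 4) for p in probs])   # 1/8 each
```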
We use the techniques introduced in Section III to generalize the \(N=2\) case, which was addressed in [23]. #### iv.2.1 Bipartite self-tests To begin with, we summarize the results obtained for the \(N=2\) case. **Lemma 4** (\(I_{\theta}\) family of self-tests).: _Define the set \(\mathcal{G}=(\pi/4,\pi/2)\cup(\pi/2,3\pi/4)\cup(5\pi/4,3\pi/2)\cup(3\pi/2,7\pi/ 4)^{2}\). Let \(\theta\in\mathcal{G}\), and define the family of Bell expressions parameterized by \(\theta\), for parties \(k,l\in\{1,...,N\}\),_ \[\langle I_{\theta}^{(k,l)}\rangle=\cos\theta\cos 2\theta\langle A_{0} ^{(k)}A_{0}^{(l)}\rangle-\\ \cos 2\theta\big{(}\langle A_{0}^{(k)}A_{1}^{(l)}\rangle+\langle A _{1}^{(k)}A_{0}^{(l)}\rangle\big{)}+\cos\theta\langle A_{1}^{(k)}A_{1}^{(l)} \rangle. \tag{16}\] _Then we have the following:_ _(i) The local bounds are given by \(\pm\eta_{\theta}^{\rm L}\), where_ \[\eta_{\theta}^{\rm L}=\max\Bigl{(}|\cos\theta(1-\cos 2 \theta)|,\\ |\cos\theta\big{(}1+\cos 2\theta\big{)}|+|2\cos 2\theta|\Bigr{)}. \tag{17}\] _(ii) The quantum bounds are given by \(\pm\eta_{\theta}^{\rm Q}\), where \(\eta_{\theta}^{\rm Q}=2\sin^{3}\theta\)._ _(iii) Up to local isometries, there exists a unique strategy that achieves \(\langle I_{\theta}\rangle=\eta_{\theta}^{\rm Q}\):_ \[\rho_{Q_{k}Q_{l}} =|\psi\rangle\!\langle\psi|,\ {\rm where}\ |\psi\rangle=\frac{1}{\sqrt{2}} \big{(}|00\rangle+i|11\rangle\big{)} \tag{18}\] \[A_{0}^{(k)} =A_{0}^{(l)}=\sigma_{X}\] \[A_{1}^{(k)} =A_{1}^{(l)}=\cos\theta\,\sigma_{X}+\sin\theta\,\sigma_{Y}.\] The above lemma is a rewriting of Proposition 1 in [23], and its proof is deferred to Lemma 7 for which the above claim can be recovered as a special case. Importantly, the self-testing property implies both parties measure \(\sigma_{X}\) on \(|\psi\rangle\), which results in 2 bits of randomness, \(H(\{p(ab|00)\})=2\). We also remark that the state \(|\psi^{\prime}\rangle=(|00\rangle-|11\rangle)/\sqrt{2}\) with the same measurements achieves \(\langle I_{\theta}\rangle=-\eta_{\theta}^{\rm Q}\). This is a symmetry of the case above, since relabeling the measurement outcomes of party \(k\) yields the new Bell expression \(I_{\theta}^{\prime}=-I_{\theta}\). Therefore \(\langle I_{\theta}\rangle=-\eta_{\theta}^{\rm Q}\) implies \(\langle I_{\theta}^{\prime}\rangle=\eta_{\theta}^{\rm Q}\), which self-tests \(|\psi\rangle\) and the negated measurements, or equivalently \(|\psi^{\prime}\rangle\) and the original measurements. #### iv.2.2 Target strategy For the \(N\) partite case, we generalize the above bipartite strategy. The target \(N\)-partite strategy takes a form similar to that in Eq. (15), but with an added parameter \(\theta\) to choose the second measurement: \[\rho_{\mathbf{Q}}=|\psi_{\rm GHZ}\rangle\!\langle\psi_{\rm GHZ}| \tag{19}\] \[A_{0}^{(k)}=\sigma_{X},\] \[A_{1}^{(k)}=\cos\theta\,\sigma_{X}+\sin\theta\,\sigma_{Y},\ k\in\{1,...,N\}.\] As before, we find \(p(\mathbf{a}|\mathbf{0})=1/2^{N},\forall\mathbf{a}\), hence this is a target strategy for certifying \(N\) bits of global randomness. Our objective now will be to design a tailored Bell inequality that can certify \(N\) bits by witnessing its maximum value, attained by the strategy in Eq. (19). Constructing an \(N\)-partite Bell inequality We use the techniques introduced in Section III to construct the desired Bell expression by expanding the seed \(I_{\theta}\). Studying the strategy in Eq. 
(19), we notice when \(N-2\) parties perform their first measurement, \(\sigma_{X}\), depending on the outcome parity the post-measurement state for the remaining parties is \((|00\rangle\pm i|11\rangle)/\sqrt{2}\). Now, performing the measurements in Eq. (19) on that post-measurement state achieves \(\langle I_{\theta}\rangle=\pm\eta_{\theta}^{\mathrm{Q}}\), which, according to the self-testing properties of \(I_{\theta}\), will certify 2 bits of randomness. Since the strategy is symmetric under party permutations, we can repeat this argument over different subsets of \(N-2\) parties, and conclude that, when every bipartite self-test is satisfied, the output setting \(\mathbf{X}=\mathbf{0}\) must generate uniform DI randomness. The corresponding Bell inequality is constructed according to the following lemma. **Lemma 5** (Bell inequality for maximum randomness).: _Let \(\mathbf{\mu}\) be a tuple of \(N-2\) measurement outcomes for all parties excluding \(k,l\), and \(n_{\mathbf{\mu}}\in\{0,1\}\) be the parity of \(\mathbf{\mu}\). Let \(\theta\in\mathcal{G}\) and \(\left\{I_{\mathbf{\mu}}^{(k,l)}\right\}_{\mathbf{\mu}}\) be a set of bipartite Bell expressions between parties \(k,l\), where_ \[I_{\mathbf{\mu}}^{(k,l)}=(-1)^{n_{\mathbf{\mu}}}I_{\theta}^{(k,l)}. \tag{20}\] _Then the expanded Bell expression given by_ \[I_{\theta}=\sum_{k=1}^{N-1}\Bigg{(}\sum_{\mathbf{\mu}}\widetilde{P}_{\mathbf{\mu}|\bm {0}}^{\overline{(k,N)}}I_{\mathbf{\mu}}^{(k,N)}\Bigg{)} \tag{21}\] _has quantum bounds \(\pm\eta_{N,\theta}^{\mathrm{Q}}\) where \(\eta_{N,\theta}^{\mathrm{Q}}=2(N-1)\sin^{3}\theta\), achieved up to relabelings by the strategy in Eq. (19), and defines a nontrivial Bell inequality._ The proof can be found in Appendix B.1. We can now state our first main result: **Proposition 2** (Maximum randomness certification).: _Achieving the maximum quantum value of the Bell inequality in Lemma 5 certifies \(N\) bits of global randomness, i.e._ \[R_{I_{\theta}}(\eta_{N,\theta}^{\mathrm{Q}})=N. \tag{22}\] The proof is obtained directly by relating maximal Bell violation to decoupling Eve from the post-measurement state, proven in Lemmas 2 and 3. ### MABK value and maximum randomness Above we gave a new one parameter construction for certifying \(N\) bits of device-independent randomness in any \(N\)-party 2-input 2-output scenario. Now we will study some properties of the correlations used to certify maximum randomness. In particular we're interested in how maximal randomness and nonlocality trade-off against each other. For instance, how nonlocal can correlations be whilst certifying maximal randomness? #### iii.3.1 Achievable MABK values with maximum randomness When \(N\) is odd, we have discussed that one can always certify \(N\) bits of DI randomness from maximum MABK violation; for the even case, it is unclear how large the MABK value can be whilst certifying maximum randomness. Using the fact that, for the strategy in Eq. (19), \(\langle A_{\mathbf{x}}\rangle=\sin n\theta\), where \(n=\sum_{k=1}^{N}x_{k}\), its MABK value is given by \[\langle M_{N}(\theta)\rangle=2^{\frac{N-1}{2}}\Big{(}\cos^{N} \big{(}\theta/2+\pi/4\big{)}\sin\big{(}N\theta/2+\pi/4\big{)}\\ +\cos^{N}\big{(}\theta/2-\pi/4\big{)}\sin\big{(}N\theta/2-\pi/4 \big{)}\Big{)}. 
\tag{23}\] We begin with the following proposition: **Proposition 3**.: _For \(N\) even, let_ \[\theta_{N}^{*}=\frac{2\pi t_{N}}{N+1}, \tag{24}\] _where \(t_{N}\) is the \((N/2)^{\mathrm{th}}\) element of the sequence \(1,1,5,7,3,3,11,13,5,5,...\) given by_ \[t_{N} = \begin{cases}N/4+1/2,\ \mathrm{if}\ N=8n+2,\\ N/4,\ \mathrm{if}\ N=8n+4,\\ 3N/4+1/2,\ \mathrm{if}\ N=8n+6,\\ 3N/4+1,\ \mathrm{if}\ N=8n+8,\ n\in\mathbb{N}_{0}.\end{cases} \tag{25}\] _Then \(\theta_{N}^{*}\in\mathcal{G}\)._ The proof can be found in Appendix B.2. This allows us to state the following technical conjecture, for which we have convincing evidence: **Conjecture 1**.: _Let \(\theta_{N}^{*}\) be defined in Proposition 3, and define the corresponding MABK violation_ \[m_{N}^{*}=\langle M_{N}(\theta_{N}^{*})\rangle. \tag{26}\] _Then we have the following for \(N\) even._ 1. _For every MABK value_ \(s\) _in the range_ \((1,m_{N}^{*}]\)_, there exists a_ \(\theta_{s}\in\mathcal{G}\) _that satisfies_ \(s=\langle M_{N}(\theta_{s})\rangle\)_._ 2. \(m_{N}^{*}\) _is the maximum MABK value achievable by quantum strategies which generate maximum randomness, i.e., strategies with_ \(p(\mathbf{a}|\mathbf{0})=1/2^{N},\ \forall\mathbf{a}\)_._ _For \(N\) odd we have:_ 3. _For every MABK value_ \(s\) _in the range_ \((1,2^{(N-1)/2})\)_, there exists a_ \(\theta_{s}\in\mathcal{G}\) _that satisfies_ \(s=\langle M_{N}(\theta_{s})\rangle\)_._ Following Conjecture 1, we can state the range of MABK violations for which maximum DI randomness can be certified. **Lemma 6** (MABK violations achievable with maximum DI randomness).: _Suppose Conjecture 1 holds. Then_ 1. _for_ \(N\) _even the maximum amount of device-independent randomness that can be certified for the range of MABK values_ \((1,m_{N}^{*}]\) _is_ \(N\) _bits;_ 2. _for_ \(N\) _odd the maximum amount of device-independent randomness that can be certified for the range of MABK values_ \((1,2^{(N-1)/2}]\) _is_ \(N\) _bits._ Should part \((i)\) of Conjecture 1 hold, by varying \(\theta\in\mathcal{G}\), the family of strategies in Eq. (19) achieves every MABK value in the interval \((1,m_{N}^{*}]\). Since \(\theta\in\mathcal{G}\) for all these values, the corresponding bipartite strategy is self-testable according to Lemma 4; we can hence apply Lemma 5 to expand the Bell expression to \(N\) parties, and it follows from Lemma 3 that this Bell expression certifies \(N\) bits of DI randomness. We provide evidence for part \((i)\) in Fig. 1, by evaluating the Mermin value for a given \(N\) as a function of \(\theta\in\mathcal{G}\), and showing the full range of violations \((1,m_{N}^{*}]\) is accessible. Due to Proposition 3, the MABK value \(m_{N}^{*}\) becomes an achievable lower bound on the maximum MABK value for quantum correlations certifying maximum randomness. Part \((ii)\) of Conjecture 1 says this is optimal, in the sense that one cannot achieve a larger MABK value whilst simultaneously certifying maximum randomness. As evidence we derive a numerical technique that can generate upper bounds on this quantity, and show these upper bounds match the lower bounds for some specific \(N\). See Appendix B.3 for the details. Finally, part \((iii)\) of Conjecture 1 implies that when \(N\) is odd, one can always achieve maximum randomness and any MABK value simultaneously. For the case of \(\langle M_{N}\rangle=2^{(N-1)/2}\), i.e., the maximum quantum value, randomness can be certified according to Proposition 1. 
For all other MABK values, we conjecture maximum randomness can be certified by \(I_{\theta}\) for some \(\theta\in\mathcal{G}\). Evidence for this can be found in Fig. 2, where we plot the MABK value as a function of \(\theta\) for different \(N\), and see that maximal randomness is compatible with all nonlocal values of MABK that are achievable. The discussion above leads to the further conjecture that, as was proven for the \(N=2\) case [22; 23], one can always achieve maximum randomness device-independently with correlations that lie arbitrarily close to the MABK facet of the local polytope. There is also a nontrivial maximal MABK violation for which this randomness is achievable when \(N\) is even, unlike the odd case; as discussed in Section IV.1, this can be seen as a consequence of the \(N\) even MABK expressions containing every \(N\) party correlator. When maximum randomness is being certified, one of these correlators must be zero, which restricts the maximum MABK violation to be strictly less than the optimal quantum value. Figure 1: Using the family of strategies in (19) for even \(N\) we plot the MABK value renormalized by the maximum quantum value over all strategies (i.e., \(2^{(N-1)/2}\)) in terms of \(\theta\in[\pi/4,3\pi/4]\), and \(\theta\in[5\pi/4,7\pi/4]\) respectively. The dashed lines indicate where the strategy becomes local. All points in this interval, excluding the boundaries and center point, correspond to strategies that certify \(N\) bits of randomness device-independently using our expanded Bell expressions. #### iv.2.2 Asymptotic behaviour We now consider the behaviour of the conjectured maximal MABK value achievable with maximum randomness, \(m_{N}^{*}\), for increasingly large even \(N\). We show that \(m_{N}^{*}\) converges to the largest possible quantum value in this limit. Note this is not based on a conjecture; \(m_{N}^{*}\) is an achievable lower bound, and in the following proposition we show this lower bound tends towards the global upper bound, namely the maximal quantum MABK value. **Proposition 4** (Maximum randomness in the asymptotic limit).: _In the limit of large even \(N\), one can achieve arbitrarily close to the maximum quantum violation of the \(N\) party MABK inequality, \(2^{(N-1)/2}\), whilst certifying maximum device-independent randomness._ This is proven in Appendix B.4. #### iv.2.3 Nonlocality and maximum randomness Whilst we have studied the MABK values achieved by the strategies in Eq. (19), we also consider how nonlocal they are, quantified by how much the local set needs to be "diluted" to contain them. We refer to this measure as the local dilution, which can be computed via a linear program (LP) for small \(N\), and we give details in Appendix B.5. Our findings are presented in Fig. 3. Interestingly, for \(N>2\) the correlations of the strategy in Eq. (19) are not close to the local boundary for any \(\theta\in\mathcal{G}\), even for small MABK violation; whilst the violation of a given MABK inequality can be made arbitrarily small, the correlations are still distant from the local boundary. In fact, by comparing Fig. 3 with Fig. 1 and Fig. 2, one can see how correlations achieving at or below the maximum local MABK value, \(\langle M_{N}\rangle\leq 1\), can still be nonlocal and certify maximum randomness. This is because the correlations violate some other Bell inequality, highlighting the complexity of randomness versus nonlocality in the multipartite scenario. 
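To make the numbers in this section concrete, the following short sketch (ours, not from the paper) reproduces the quantities behind Fig. 1 and Propositions 3 and 4: it evaluates \(\langle M_{N}(\theta)\rangle\) for the strategy of Eq. (19) via Eq. (1), using the correlators \(\langle A_{\mathbf{x}}\rangle=\sin n\theta\) quoted above, computes \(\theta_{N}^{*}\) and \(m_{N}^{*}\) from Eqs. (24)-(26), and prints the ratio \(m_{N}^{*}/2^{(N-1)/2}\), which should approach 1 for large even \(N\) in line with Proposition 4.

```python
# <M_N(theta)> for the strategy of Eq. (19), the angle theta_N^* of Eqs. (24)-(25),
# and the ratio m_N^* / 2^{(N-1)/2} for a few even N.
import numpy as np
from math import comb

def mabk_theta(N, theta):
    """<M_N(theta)> via Eq. (1), with <A_x> = sin(n*theta) and n = number of 1-inputs."""
    total = 0.0
    for n in range(N + 1):
        coeff = np.cos(np.pi / 2 * ((N - 1) / 2 - n))
        total += comb(N, n) * coeff * np.sin(n * theta)
    return 2 ** ((1 - N) / 2) * total

def t_N(N):
    """Eq. (25) for even N (the key is N mod 8, with 0 standing for N = 8n+8)."""
    return {2: N / 4 + 0.5, 4: N / 4, 6: 3 * N / 4 + 0.5, 0: 3 * N / 4 + 1}[N % 8]

for N in (2, 4, 6, 8, 10, 12):
    theta_star = 2 * np.pi * t_N(N) / (N + 1)      # Eq. (24)
    m_star = mabk_theta(N, theta_star)             # Eq. (26)
    print(N, "theta* =", round(theta_star, 4), "m* =", round(m_star, 4),
          "ratio:", round(m_star / 2 ** ((N - 1) / 2), 4))
```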
Figure 2: A similar plot to that of Fig. 1 for odd \(N\) and second measurement angle of all parties \(\theta\in[\pi/4,3\pi/4]\), and \(\theta\in[5\pi/4,7\pi/4]\) respectively. All points in this interval, excluding the boundaries, correspond to strategies that can certify \(N\) bits of randomness device-independently, using our expanded Bell expressions, or using the MABK inequality for the center points. Figure 3: Nonlocality, as measured using local dilution, of the strategies in Eq. (19), for second measurement angle of all parties \(\theta\in[0,\pi]\). All values of \(\theta\in(\pi/4,\pi/2)\cup(\pi/2,3\pi/4)\) correspond to strategies which can certify \(N\) bits of maximum randomness using our technique for expanding Bell expressions. \(\theta=\pi/2\) can also correspond to maximum randomness when \(N\) is odd by testing the MABK inequality. ## V Trade-off between DI randomness and MABK violation In this section, we present an achievable lower bound on the maximum device-independent randomness that can be generated by correlations achieving any given MABK violation. Moreover, we conjecture this lower bound to be tight based on numerical evidence. This is achieved by introducing a new family of two-parameter quantum strategies along with their self-testing Bell expressions. Using this as the seed, we construct a multipartite Bell expression which certifies their randomness. ### Constructing the Bell expression #### v.1.1 Bipartite self-tests We begin by introducing a versatile family of bipartite self-tests which generalize the results in [23]. **Lemma 7** (\(J_{\phi,\theta}\) family of self-tests).: _Let \((\phi,\theta)\in\mathbb{R}^{2}\) such 
For future convenience, we define the set \(\mathcal{F}=\left[\left(-\pi/4,\pi/4\right)\times\mathcal{G}\right]\cup \left[\left(-\pi/4,\pi/4\right)\setminus\{0\}\times\{\pi/2,3\pi/2\}\right] \subset\mathbb{R}^{2}\). One can verify that points \((\theta,\phi)\in\mathcal{F}\) satisfy \(\sin^{2}(\theta+\phi)\geq\cos^{2}(\theta-\phi)\), and \(\eta^{\rm L}_{\phi,\theta}<|\eta^{\rm Q}_{\phi,\theta}|\); i.e. they define a valid self-test according to Lemma 7. #### iii.1.2 Target strategy For the \(N\)-partite case, we will consider the following strategy: \[\rho_{\mathcal{Q}}=|\psi_{\rm GHZ}\rangle\!\langle\psi_{\rm GHZ}| \tag{30}\] \[A^{(k)}_{0}=\cos\phi\,\sigma_{X}-\sin\phi\,\sigma_{Y},\] \[A^{(k)}_{1}=\cos\theta\,\sigma_{X}+\sin\theta\,\sigma_{Y},\ k\in \{1,...,N\}.\] Using the fact that \(\langle A_{\mathbf{\pi}}\rangle=\sin(n\theta-(N-n)\phi)\), where \(n=\sum_{k}x_{k}\), the MABK value, defined in Eq. (1), of the above strategy is given by \[\langle M_{N}(\phi,\theta)\rangle=2^{\frac{N-1}{2}}\Big{(}\cos^{N }\big{[}(\theta+\phi)/2+\pi/4\big{]}\\ \cdot\sin\big{[}N(\theta-\phi)/2+\pi/4\big{]}\\ +\cos^{N}\big{[}(\theta+\phi)/2-\pi/4\big{]}\sin\big{[}N(\theta -\phi)/2-\pi/4\big{]}\Big{)}. \tag{31}\] #### iii.1.3 Constructing an \(N\)-partite Bell inequality Using the previous two building blocks, we construct the following Bell inequality using the \(J_{\phi,\theta}\) expressions as the seed. **Lemma 8**.: _Let \(\mathbf{\mu}\) be a tuple of \(N-2\) measurement outcomes for all parties excluding \(k,l\), and \(n_{\mathbf{\mu}}\in\{0,1\}\) be the parity of \(\mathbf{\mu}\). Let \((\phi^{\prime},\theta^{\prime})\in\mathcal{F}\) and \(\{I^{(k,l)}_{\mathbf{\mu}}\}_{\mathbf{\mu}}\) be a set of bipartite Bell expressions between parties \(k,l\), where_ \[I^{(k,l)}_{\mathbf{\mu}}=(-1)^{n_{\mathbf{\mu}}}J^{(k,l)}_{\theta^{\prime},\phi^{ \prime}}. \tag{32}\] _Then the expanded Bell expression given by_ \[I_{\theta^{\prime},\phi^{\prime}}=\sum_{k=1}^{N-1}\Bigg{(}\sum_{\mathbf{\mu}}\tilde {P}^{(\overline{k,N})}_{\mathbf{\mu}|\mathbf{0}}I^{(k,N)}_{\mathbf{\mu}}\Bigg{)} \tag{33}\] _has a quantum bound \(\pm\eta^{\rm Q}_{N,\theta^{\prime},\phi^{\prime}}\) where \(\eta^{\rm Q}_{N,\theta^{\prime},\phi^{\prime}}=2(N-1)\sin^{2}(\theta^{\prime}+ \phi^{\prime})\sin(\theta^{\prime}-\phi^{\prime})\), achieved up to relabelings by the strategy in Eq. (30) by setting \(\phi=-2\phi^{\prime}/(N-2)\) and \(\theta=\theta^{\prime}+\phi^{\prime}\), and defines a nontrivial Bell inequality._ The proof is given in Appendix B.7. Note that the Bell expressions (33) are written in terms of the shifted parameters \((\phi^{\prime},\theta^{\prime})\), instead of the measurement angles \((\phi,\theta)\). This is because the \(N-2\) parties performing the projection no longer use \(\sigma_{X}\), but use \(\cos(\phi)\,\sigma_{X}-\sin(\phi)\,\sigma_{Y}\). This accumulates a phase factor on the state of the remaining parties, \((|00\rangle+i(-1)^{n_{\mathbf{\mu}}}\exp\{i(N-2)\phi)|11\rangle)/\sqrt{2}\), which is equivalent to the action of some local unitary \(U\) on \((|00\rangle+i(-1)^{n_{\mathbf{\mu}}}|11\rangle)/\sqrt{2}\). To correct for this, we use the Bell expression \(U^{\dagger}J^{(k,l)}_{\phi,\theta}U=J^{(k,l)}_{\phi^{\prime},\phi^{\prime}}\) as the seed. See Appendix B.7 for the exact calculation. ### Randomness versus MABK value Using the previously derived Bell inequality, we consider the trade-off between maximum device-independent randomness and MABK violation. 
Let us define \[\theta(\phi)=\frac{N-1}{N+1}\phi+\theta_{N}^{*}, \tag{34}\] where \(\theta_{N}^{*}\) is defined in Eq. (24) and also \[\phi_{N}^{*}=\text{sgn}[\sin(2\theta_{N}^{*})]\frac{\pi}{4N}. \tag{35}\] For the strategy in Eq. (30), direct calculation yields \[\langle M_{N}(\phi_{N}^{*},\theta(\phi_{N}^{*}))\rangle=2^{(N-1)/2}, \tag{36}\] and \(\langle M_{N}(0,\theta(0))\rangle=m_{N}^{*}\). Hence by varying \(\phi\in[0,\phi_{N}^{*}]\) and choosing \(\theta=\theta(\phi)\) one can obtain the desired range of MABK values \([m_{N}^{*},2^{(N-1)/2}]\), from the conjectured maximum MABK value with maximum randomness to the maximum quantum value. Choosing this parameterization, the raw randomness, \(H(\mathbf{R}|\mathbf{X}=\mathbf{x}^{*})=H(\{p_{\phi}(\mathbf{a}|\mathbf{0})\})\), of the strategy in Eq. (30), as a function of \(\phi\), is given by \[H(\{p_{\phi}(\mathbf{a}|\mathbf{0})\})\equiv r(\phi)=N-1+H_{\rm bin}\Big{[}\frac{1-\sin N \phi}{2}\Big{]}, \tag{37}\] where \(H_{\rm bin}\) is the binary entropy function and \(r(\phi)\) is a smooth, monotonically decreasing function of \(\phi\) in the range \([0,\phi_{N}^{*}]\). As shown in Section IV, when \(\phi=0\) we can certify maximum DI randomness. It also follows from the self-testing properties of the MABK family that when \(\phi=\phi_{N}^{*}\) we obtain \(N-1+3/2-\log_{2}(1+\sqrt{2})/\sqrt{2}\approx N-0.4\) bits of global DI randomness. What remains, then, is to apply the new Bell expression constructed in Lemma 8 to certify \(r(\phi)\) bits of DI randomness for \(\phi\in(0,\phi_{N}^{*})\). To do so, the following proposition shows that for every \(\phi\in[0,\phi_{N}^{*}]\), there exists a valid Bell expression given by Lemma 8. **Proposition 5**.: _Let \(\phi\in[0,\phi_{N}^{*}]\), and define \(\phi^{\prime}=-(N-2)\phi/2\), \(\theta^{\prime}=\theta(\phi)-\phi^{\prime}\). Then \((\phi^{\prime},\theta^{\prime})\in\mathcal{F}\), where \(\mathcal{F}\) is defined in Lemma 7._ This is proven in Appendix B.8. Proposition 5 implies that, for every \(\phi\) in the range we are interested in, the bipartite expression \((-1)^{n_{\mathbf{\mu}}}J_{\phi^{\prime},\theta^{\prime}}\) is a valid self-test of the strategy \[\rho_{Q_{k}Q_{l}} =|\Phi_{\mathbf{\mu}}\rangle\!\langle\Phi_{\mathbf{\mu}}|, \tag{38}\] \[A_{0}^{(k)} =A_{0}^{(l)}=\cos\phi^{\prime}\,\sigma_{X}-\sin\phi^{\prime}\, \sigma_{Y}\] \[A_{1}^{(k)} =A_{1}^{(l)}=\cos\theta^{\prime}\,\sigma_{X}+\sin\theta^{\prime}\,\sigma _{Y}.\] After applying the local unitary \(U_{\phi}\otimes U_{\phi}\), where \(U_{\phi}=|0\rangle\!\langle 0|+e^{-i(N-2)\phi/2}|1\rangle\!\langle 1|\), this is equivalent to \[\rho_{Q_{k}Q_{l}} =|\Phi_{\mathbf{\mu},\phi}\rangle\!\langle\Phi_{\mathbf{\mu},\phi}|, \tag{39}\] \[A_{0}^{(k)} =A_{0}^{(l)}=\cos\phi\,\sigma_{X}-\sin\phi\,\sigma_{Y}\] \[A_{1}^{(k)} =A_{1}^{(l)}=\cos\theta\,\sigma_{X}+\sin\theta\,\sigma_{Y},\] which is exactly the bipartite strategy we wanted to self-test, since it is the one held by parties \(k\) and \(N\) after the projector \(\tilde{P}_{\mathbf{\mu}|\mathbf{0}}^{\overline{(k,N)}}\) is applied to the global state \(|\psi_{\rm GHZ}\rangle\). As a result, the correlations generated by the strategy in Eq. (30), by setting \(\theta=\theta(\phi)\) and varying \(\phi\in[0,\phi_{N}^{*}]\), maximally violate the Bell inequality \(I_{\theta^{\prime},\phi^{\prime}}\) with \(\phi^{\prime}=-(N-2)\phi/2\) and \(\theta^{\prime}=\theta(\phi)-\phi^{\prime}\). 
We can therefore employ the decoupling lemma to make the rate unconditioned on Eve, \(r(\phi)\), device-independent. **Proposition 6**.: _Achieving the maximum quantum value of the Bell inequality in Lemma 8 certifies \(r(\phi)\) bits of randomness, i.e._ \[R_{I_{\theta^{\prime},\phi^{\prime}}}(\eta_{N,\theta^{\prime},\phi^{\prime}}^ {\rm Q})=r(\phi). \tag{40}\] We have established, for every MABK value \(s\in[m_{N}^{*},2^{(N-1)/2}]\), that there exists a \(\phi_{s}\in[0,\phi_{N}^{*}]\) which defines a quantum strategy achieving \(s\), for which its generated randomness \(r(\phi_{s})\) is device-independent. We have therefore related every MABK value with a DI rate. We conjecture the curve \((s,r(\phi_{s}))\) is optimal in terms of raw randomness, which can then be made device-independent following the above discussion -- see Lemma 9. **Conjecture 2**.: _For \(N\) even, the maximum randomness unconditioned on Eve, \(r\), that can be generated by quantum strategies achieving an MABK value, \(s\), is given by_ \[r(s)=\begin{cases}N,\ s\in(1,m_{N}^{*}],\\ r(\phi_{s}),\ s\in(m_{N}^{*},2^{(N-1)/2}],\end{cases} \tag{41}\] _where \(r(\phi)\) is defined in Eq. (37), and_ \[\phi_{s}=\arg\min\bigl{\{}|\phi|\ :\ \phi\in[0,\phi_{N}^{*}],\langle M_{N}(\phi, \theta(\phi))\rangle=s\bigr{\}}. \tag{42}\] _For the range \(s\in(1,m_{N}^{*}]\), and \(s\in(m_{N}^{*},2^{(N-1)/2}]\), the rate \(r(s)\) and MABK value \(s\) is achieved by the family of quantum strategies in Eq. (19) and Eq. (30), and certified by the Bell expressions in Lemma 5 and Lemma 8, respectively._ The minimization in the definition of \(\phi_{s}\) is included since we have not shown the set \(\{\,\phi\in[0,\phi_{N}^{*}]\,:\,\langle M_{N}(\phi,\theta(\phi))\rangle=s\}\) is unique. Intuitively, the closer \(\phi_{s}\) is to zero the closer we are to maximum randomness; hence we expect \(r(\phi)\) to be monotonically decreasing with \(|\phi|\) for \(\phi\in[0,\phi_{N}^{*}]\), and minimisation guarantees the best rate at a given \(s\). For the examples we have computed, the minimization turns out to be trivial, i.e. over a single point. **Lemma 9**.: _Suppose Conjecture 1 and Conjecture 2 hold. Then the maximum amount of device-independent randomness that can be certified for the range of Mermin values \((1,2^{(N-1)/2}]\) is given by Eq. (41)._ The above lemma tells us that if the rate in Eq. (41) is optimal without conditioning on Eve, it will also be optimal conditioned on Eve, i.e., the rate can be made device-independent. This is because Propositions 3 and 5 ensure that, for every MABK value, the corresponding quantum strategy achieving it, given in Eq. (19) or Eq. (30), can generate certifiable randomness using the corresponding Bell expression \(I_{\theta}\) or \(I_{\theta^{\prime},\phi^{\prime}}\). A rewriting of Lemma 9 without Conjectures 1 and 2, would say that (41) corresponds to a guaranteed lower bound on the maximum DI randomness as a function of MABK value. Our results are summarized in Fig. 4. We provide evidence for Conjectures 1 and 2 in Appendix B.9, where we derive a numerical technique for upper bounding the maximum randomness that can be generated by quantum strategies achieving a given MABK value \(s\). By studying the numerical results, we find our analytical lower bound closely agrees with the numerical upper bound. 
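The conjectured lower bound can be traced numerically; the sketch below (ours, not from the paper) does so for \(N=4\) as an illustration, evaluating the MABK value of the strategy in Eq. (30) through Eq. (1) with the correlators \(\langle A_{\mathbf{x}}\rangle=\sin(n\theta-(N-n)\phi)\), together with the raw randomness \(r(\phi)\) of Eq. (37), for \(\phi\) between \(0\) and \(\phi_{N}^{*}\). The endpoints reproduce \((m_{N}^{*},N)\) and \((2^{(N-1)/2},\,N-1+3/2-\log_{2}(1+\sqrt{2})/\sqrt{2})\), consistent with Lemma 6 and Eq. (36).

```python
# Trace (MABK value, raw randomness) along phi in [0, phi_N^*] for N = 4, using
# theta(phi) from Eq. (34), Eq. (1) for the MABK value and Eq. (37) for r(phi).
import numpy as np
from math import comb

N = 4
theta_star = 2 * np.pi * (N / 4) / (N + 1)                     # Eqs. (24)-(25), N = 8n+4
phi_star = np.sign(np.sin(2 * theta_star)) * np.pi / (4 * N)   # Eq. (35)

def mabk(phi, theta):
    """<M_N(phi, theta)> of the strategy in Eq. (30), summed as in Eq. (1)."""
    total = 0.0
    for n in range(N + 1):
        coeff = np.cos(np.pi / 2 * ((N - 1) / 2 - n))
        total += comb(N, n) * coeff * np.sin(n * theta - (N - n) * phi)
    return 2 ** ((1 - N) / 2) * total

def h_bin(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

for phi in np.linspace(0, phi_star, 5):
    theta = (N - 1) / (N + 1) * phi + theta_star               # Eq. (34)
    r = N - 1 + h_bin((1 - np.sin(N * phi)) / 2)               # Eq. (37)
    print(f"MABK = {mabk(phi, theta):.4f}   r(phi) = {r:.4f} bits")
```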
### Other extremal Bell inequalities In the \(N\)-partite 2-input 2-output scenario, the MABK family is just one class of extremal Bell inequalities, and the techniques developed in this section can be readily applied to others. For example, when \(N=3\), there are a total of 3 non-trivial, new classes, one being the MABK inequality \(M_{3}\), and two being of the form [41] \[S_{1} =\frac{1}{4}\sum_{x,y,z}A_{x}B_{y}C_{z}-A_{1}B_{1}C_{1}, \tag{43}\] \[S_{2} =A_{0}B_{0}(C_{0}+C_{1})-A_{1}B_{1}(C_{0}-C_{1}), \tag{44}\] where we have swapped \(A_{x_{1}}^{(1)}\) with \(A_{x}\) etc. for legibility. \(S_{1}\) and \(S_{2}\) have local bounds of 1 and 2, and maximum quantum values \(5/3\) and \(2\sqrt{2}\), respectively. We applied our numerical technique (see Appendix B.9) to find upper bounds on the maximum amount of randomness certifiable whilst achieving a given violation of \(S_{1}\) and \(S_{2}\). A comparison can be found in Fig. 5, and we provide additional figures and details in Appendix B.10. We find both \(S_{1}\) and \(S_{2}\) exhibit a trade-off for violations close to the quantum maximum. Due to its structure, \(S_{2}\) exhibits identical trade-off characteristics to the CHSH inequality found in [23], with the maximum CHSH value achievable with maximum randomness being numerically close to \(3\sqrt{3}/2\). On the other hand, we find more randomness can be generated from maximum violation of \(S_{2}\), which is still strictly less than the maximum certified by MABK. Exact values are given in Appendix B.10. ## VI Discussion In this work, we studied how MABK violation constrains the generation of global DI randomness for an arbitrary number of parties. Whilst there is no trade-off in the odd case, we conjectured the precise trade-off for the even case. This conjecture is supported by an analytical lower bound and a numerical upper bound that agree up to several decimal places and have been checked for up to \(N=12\). We additionally showed that, in the asymptotic limit, this trade-off vanishes. Our first technical contribution was the advancement of a recent tool, originally introduced to witness genuinely multipartite nonlocality, to randomness certification. Our second was the introduction of a new, versatile family of bipartite Bell expressions which generalizes some that were previously known. Figure 4: The conjectured curves of maximum device-independent randomness versus MABK value, where the MABK value is normalized by its maximum quantum value \(2^{(N-1)/2}\). When \(N\) is odd, maximum global randomness can be achieved for any MABK value between the local and quantum bound, indicated by solid lines. The blue crosses indicate the conjectured maximum MABK value for which maximum randomness can be achieved when \(N\) is even, \(m_{N}^{*}\), which tends towards the maximum quantum value as \(N\to\infty\). To the left of the blue crosses, \(N\) bits of randomness can be achieved for MABK values between the local bound (not shown on this axis) and \(m_{N}^{*}\); since this is the global maximum, it is optimal, indicated by solid lines. To the right of the blue crosses, dashed lines indicate lower bounds on the trade-off between maximum randomness and MABK value from \(m_{N}^{*}\) to the maximum quantum value, which are conjectured to be tight. The case of \(N=2\) was proven in Ref [23], which we reproduce with our results. 
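As a small, self-contained check (added here for illustration; it is not part of the original derivation), the local bounds of 1 and 2 quoted for Eqs. (43) and (44) can be verified by brute force over all deterministic strategies:

```python
from itertools import product

def local_bounds():
    """Brute-force the local (deterministic) bounds of S1 and S2, Eqs. (43)-(44)."""
    best_s1 = best_s2 = float("-inf")
    # A deterministic local strategy assigns an outcome of +/-1 to each observable.
    for A0, A1, B0, B1, C0, C1 in product([-1, 1], repeat=6):
        s1 = 0.25 * sum(a * b * c for a in (A0, A1) for b in (B0, B1) for c in (C0, C1)) \
             - A1 * B1 * C1
        s2 = A0 * B0 * (C0 + C1) - A1 * B1 * (C0 - C1)
        best_s1, best_s2 = max(best_s1, s1), max(best_s2, s2)
    return best_s1, best_s2

print(local_bounds())  # expected output: (1.0, 2), matching the local bounds quoted above
```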
Figure 5: Upper bounds on the trade-off between asymptotic global DI randomness and violation of non-trivial extremal Bell inequalities, for the tripartite scenario with two binary measurements per party. The violation has been normalized by the maximum quantum value. \(S_{1}\) and \(S_{2}\) are given by Eq. (43) and Eq. (44) respectively, and \(M_{3}\) is the MABK expression. Note the above graph does not necessarily imply a weaker trade-off for \(S_{1}\), since the local bounds differ for all expressions once normalized by the maximum quantum value. More precisely, we discussed that odd MABK expressions contain half the full party correlators; considering input combinations not included in the Bell expression, we find some of these can be uniform. For even \(N\), all correlators are included and must be non-zero to ascertain maximum violation, resulting in a trade-off. The number of correlators is \(2^{N}\), whereas the quantum bound is \(2^{(N-1)/2}\). Maximum violation is achieved when the product of all correlators and their MABK coefficients have the same value, which in this case will be \(2^{-(N+1)/2}\). Hence we find the penalty for fixing one correlator to zero (which is necessary for maximum randomness) diminishes as \(N\) grows, and vanishes in the asymptotic limit. We also quantified the nonlocality of our constructions for low values of \(N\), calculated via a linear program. Interestingly we found our correlations are always separated from the local boundary. It would be interesting to find correlations that lie asymptotically close to the local boundary whilst still certifying maximum randomness, as was found in the \(N=2\) case [22; 23]. This could entail symmetry breaking between the second measurements of each party, and the versatility of the techniques presented here can be easily applied. Moreover, one could hope to go further and bound the trade-off between randomness and nonlocality in this setting; however, the challenge becomes finding a suitable measure of nonlocality that is efficient to work with. We hope that the results found here will inform future experiments in DIRE; initial works have already made progress in this direction [48]. In such experiments, one should consider the optimal protocol for DIRE. In this work, we have proposed protocols that rely on spot-checking [7; 8; 44], where randomness is generated by a single input combination. An alternative approach has been explored in reference [44], where it was found that generating randomness from all inputs and averaging the result boosts the rate when the CHSH inequality is used. For the noiseless scenario, spot-checking will always be optimal when a specific input combination leads to maximum randomness, as is the case in this work; having one maximally random setting necessarily implies that at least one other setting combination will not be maximally random for the correlations to be nonlocal, hence averaging will only decrease the overall rate. The noisy case remains an open question however, and the trade-off between using spot-checking and weighted averaging for our constructions deserves future investigation, with the aim of further improving practical rates. Another interesting direction for future research is to connect expanded Bell inequalities to multipartite self-testing. 
Our decoupling result allows the structure of the post-measurement state to be certified following the maximum violation of an expanded Bell expression; the open question we pose is if an expanded Bell inequality can constitute a full self-test, opening up a new range of useful applications beyond randomness certification. Intuitively, if each pair of parties can self-test all of their measurements along with a projected state, one might hope to use the marginal information to make a statement about the global state. Such a strategy has already been adopted for self-testing multipartite correlations [36]. However, there are subtleties. For example, for three parties with observables \(A_{x},B_{y},C_{z}\), our first Bell expression in Eq. (21) when \(\theta=\pi/2\), reads \[A_{0}\big{(}B_{0}C_{1}+B_{1}C_{0}\big{)}+B_{0}\big{(}A_{0}C_{1}+A_{1}C_{0} \big{)}\leq_{\mathcal{Q}}4\mathbb{I}, \tag{45}\] which is achievable classically. This originates from the classical achievability of the seed \(I_{\theta=\pi/2}^{(k,l)}=A_{0}^{(k)}A_{1}^{(l)}+A_{1}^{(k)}A_{0}^{(l)}\) in Eq. (16). Yet, the Bell expression was designed for the strategy \[\begin{split}&\rho_{Q_{A}Q_{B}Q_{C}}=|\psi_{\text{GHZ}}\rangle \langle\psi_{\text{GHZ}}|,\\ & A_{0/1}=B_{0/1}=C_{0/1}=\sigma_{X/Y},\end{split} \tag{46}\] which maximally violates the \(N=3\) MABK inequality, and is known to be self-testing. In other words, when any party measures \(\sigma_{X}\) or \(\sigma_{Y}\) on \(|\psi_{\text{GHZ}}\rangle\), the resulting bipartite correlations are classical, even though the global strategy is nonlocal. It is then necessary to add extra measurements to enable bipartite self-testing. On the other hand, the \(N=3\) MABK expression reads \[A_{0}\big{(}B_{0}C_{1}+B_{1}C_{0}\big{)}+A_{1}\big{(}B_{0}C_{0}-B_{1}C_{1} \big{)}\leq_{\mathcal{Q}}4\mathbb{I}, \tag{47}\] which holds similarities to Eq. (45). Both terms in the sum are achievable classically, given by a relabeling of the seed \(I_{\theta=\pi/2}^{(k,l)}\); the key difference is the lack of _simultaneous_ classical achievability in Eq. (47). Clearly this property is sufficient. It would be interesting to understand if the \(N=3\) MABK expression can still be viewed as an expansion of the seed \(I_{\theta=\pi/2}^{(k,l)}\) following an appropriate relabelling, using only the strategy in Eq. (46) as the starting point. Note however, for \(N>3\), we can use the \(N=3\) MABK expression as the self-testing seed, and we do not run into this situation. Thus, correlations that do not exhibit post-measurement nonlocality, yet are still nonlocal, could be viewed as a resource to construct richer correlations. Finally, whilst we studied the MABK family of inequalities, one could consider a similar analysis for other families of extremal multipartite Bell expressions [41]. For example, we explored upper bounds on the trade-offs for all extremal inequalities when \(N=3\), and found the MABK inequality to be optimal for global randomness; it would be interesting to find out if the upper bounds presented for the other inequalities are achievable. One might try to find explicit constructions that match these bounds, and use our techniques to make them device-independent. Analysing other extremal inequalities will build a more complete picture of how DI randomness can be generated in the multipartite scenario, better informing the way forward for future multipartite experiments. ###### Acknowledgements. 
This work was supported by the UK's Engineering and Physical Sciences Research Council (EPSRC) via the Quantum Communications Hub (Grant No. EP/T001011/1) and Grant No. EP/SO23607/1.
2303.05555
Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous Navigation in Restricted Waters
This paper presents a multimodal maritime dataset and the data collection procedure used to gather it, which aims to facilitate autonomous navigation in restricted water environments. The dataset comprises measurements obtained using various perception and navigation sensors, including a stereo camera, an infrared camera, an omnidirectional camera, three LiDARs, a marine radar, a global positioning system, and an attitude heading reference system. The data were collected along a 7.5-km-long route that includes a narrow canal, inner and outer ports, and near-coastal areas in Pohang, South Korea. The collection was conducted under diverse weather and visual conditions. The dataset and its detailed description are available for free download at https://sites.google.com/view/pohang-canal-dataset.
Dongha Chung, Jonghwi Kim, Changyu Lee, Jinwhan Kim
2023-03-09T19:30:21Z
http://arxiv.org/abs/2303.05555v1
# Pohang Canal Dataset: A Multimodal ###### Abstract This paper presents a multimodal maritime dataset and the data collection procedure used to gather it, which aims to facilitate autonomous navigation in restricted water environments. The dataset comprises measurements obtained using various perception and navigation sensors, including a stereo camera, an infrared camera, an omnidirectional camera, three LiDARs, a marine radar, a global positioning system, and an attitude heading reference system. The data were collected along a 7.5-km-long route that includes a narrow canal, inner and outer ports, and near-coastal areas in Pohang, South Korea. The collection was conducted under diverse weather and visual conditions. The dataset and its detailed description are available for free download at [https://sites.google.com/view/pohang-canal-dataset](https://sites.google.com/view/pohang-canal-dataset). Autonomous surface vehicle, camera, LiDAR, marine radar, GPS, AHRS, maritime dataset + Footnote †: journal: Journal Title ## 1 Introduction Recent advances in autonomous vehicle technology have led to a growing interest in autonomous navigation in maritime environments. The development of advanced perception sensors with rapid progress of computer vision and deep learning have enabled researchers to explore the potential of autonomous surface vehicles (ASVs) in the maritime domain. However, unlike on-land applications such as self-driving cars, acquiring real-world datasets with sufficient quantity and variety for maritime environments is challenging. As a result, there is a shortage of such datasets available to researchers and developers in this field. Previous studies on maritime datasets have primarily focused on computer vision applications, specifically vessel detection and classification. While these visual datasets are useful for developing maritime computer vision algorithms, other types of perceptual and navigation data that are synchronously acquired onboard are necessary to perform multi-sensor fusion and achieve reliable and robust operation. For example, LiDARs and cameras may play a crucial role in tasks requiring precision, such as docking (Pereira et al., 2021), object detection, and collision avoidance (Han et al., 2020), as well as autonomous navigation in narrow waterways (Wang et al., 2019). Additionally, radar is capable of detecting objects in the distance (Kim et al., 2021) and can be used for vehicle localization through coastline detection (Han et al., 2019). Therefore, it is essential to collect and analyze a diverse range of perceptual and navigation data to develop effective autonomous navigation systems for the maritime domain. This paper presents a multimodal maritime dataset that includes diverse types of perceptual and navigation data to aid in developing autonomous navigation capabilities in restricted water environments such as narrow canals and port areas. The dataset was collected in Pohang, South Korea, following a 7.5-km-long route that comprised a narrow canal, port, and near-coastal areas. Figure 1 shows the vehicle and sensor suite used for data acquisition, and Fig. 2 displays data from all of the perceptual sensors used in this study. The vehicle was equipped with two GPS antennas, one in the front and one in the back, as well as an attitude heading reference system (AHRS) mounted at the back of the vehicle. Two visual cameras that served as a stereo camera were mounted in the front of the vehicle, along with an infrared camera. 
Additionally, three LiDARs were mounted on the vehicle's front, port, and starboard sides. An omnidirectional camera and a marine radar were mounted on a vertical slide at the back of the vehicle, as shown in Figure 1. Figure 1: Vehicle used for data collection The remainder of this paper is organized as follows. In Section 2, we summarize the existing open datasets related to maritime environments. Section 3 describes the system configuration, including the hardware sensor system and recording system. We present the sensor calibration methods used in this study in Section 4. In Section 5, we describe the configuration and characteristics of the dataset, including the environmental conditions and the structure of the dataset. Finally, the concluding remarks are provided in Section 6. ## 2 Related work Real-world datasets are crucial for validating algorithms and learning materials in deep learning-based mobile robotics applications. Many on-land datasets have been published and used for evaluating performance, particularly in simultaneous localization and mapping (SLAM), as well as for data-intensive deep learning studies. These datasets include perceptual and navigation data collected in various environments, such as parks and campuses using unmanned ground vehicles (Smith et al., 2009), roads using cars (Geiger et al., 2012), and indoors using unmanned aerial vehicles (UAVs) (Burri et al., 2016). To enhance these datasets, researchers have explored different scenes (Jeong et al., 2019) and employed additional sensors like radars (Barnes et al., 2020; Kim et al., 2020; Burnett et al., 2022). Previous research on maritime datasets has primarily focused on object detection and classification by providing image data with ground-truth annotations. For instance, in Zhang et al. (2015), cropped visible and infrared images of vessels captured during both the day and night were provided for classification tasks. In Bloisi et al. (2015), images obtained from surveillance cameras situated in buildings along the Grand Canal of Venice, Italy, were supplied with bounding box annotations. Furthermore, in Prasad et al. (2017), visible and near-infrared videos captured from onshore and onboard locations were provided with ground-truth annotations of horizon detection, object detection, and tracking data. In Ribeiro et al. (2019), airborne surveillance images captured using a UAV were supplied with bounding box annotations for object detection and tracking. In Bovcon et al. (2019), onboard camera images with semantic segmentation labels of the sky, obstacles, and water were provided. In Taijalmaa et al. (2019), the authors focused on water segmentation through the visual images captured using an onboard camera. Finally, in Lin et al. (2022), the authors presented simulated and real-world data of 3D LiDAR point clouds and ground-truth 3D bounding boxes, focusing on deep-learning-based object detection. There is a scarcity of publicly available maritime datasets that contain both onboard perceptual and navigation data. Only a few datasets such as Bovcon et al. (2018) and Cheng et al. (2021) have been made available. The former provides stereo camera images, GPS, and inertial measurement unit (IMU) data for semantic segmentation of waterways, while the latter presents onboard perceptual and navigation data collected in different weather conditions in an inland waterway using a small unmanned surface vehicle. 
This dataset includes a stereo camera, a 16-channel 3D LiDAR, three millimeter-wave radars, a GPS, and an onboard IMU. It also provides ground-truth annotations of water segmentation and the ground-truth trajectory using GPS and IMU data. ## 3 System configuration ### Sensor configuration The sensor configuration of the vehicle is shown in Figure 3. The vehicle used in the study was a 7.9 m long and 2.6 m wide cruise boat, with a weight of 1.7 tons and a capacity of 12 persons. Two GPS antennas were mounted on the front and back of the vehicle, respectively, with a baseline of 6.8 m. They were connected to a global navigation satellite system (GNSS) receiver with real-time kinematic (RTK) capability, providing location with an RTK accuracy of 0.01 m + 1 ppm and heading measurement with an error smaller than 0.28\({}^{\circ}\). The GPS data was recorded at a frequency of 5 Hz. An attitude and heading reference system (AHRS) was placed on the floor at the back of the vehicle. The AHRS provided acceleration measurements with a resolution of 0.02 mg and bias of 0.04 mg, gyroscopic measurements with a resolution of 0.003\({}^{\circ}/s\) and bias of 8\({}^{\circ}/h\), and attitude measurements with a root mean square error of 0.5\({}^{\circ}\) along the pitch and roll directions. The magnetometer of the AHRS was calibrated using the provided software. The AHRS data was recorded at a frequency of 100 Hz. To enhance the detection of nearby objects and safety hazards, three LiDARs were installed on the front and sides of the vehicle. A 64-channel LiDAR was mounted horizontally at the front to detect obstacles or vehicles along the path. Two 32-channel LiDARs were positioned on the port and starboard sides of the vehicle to extend the measurement coverage. These side LiDARs were installed at a downward tilt of 30\({}^{\circ}\) relative to the horizontal plane to detect small objects on the surface that may pose a threat to vehicle operation. Each LiDAR had a range of 120 m, with a resolution of 0.3 cm and a precision of 1.0 to 5.0 cm. The horizontal field of view (FOV) was 360\({}^{\circ}\) with a resolution of 2048, and the vertical FOV of all LiDARs was 33.2\({}^{\circ}\). The vertical resolution varied depending on the number of channels (64 for the front LiDAR and 32 for the port and starboard LiDARs). The point cloud data collected from each LiDAR were gathered asynchronously at 10 Hz. A frequency-modulated continuous-wave (FMCW) radar with an X-band frequency (9.3 GHz to 9.4 GHz) was mounted at the back of the vehicle. The radar had a range of 50 to 1654.8 m and rotated 1.0 to 1.3 times per second. Figure 2: Example of the dataset The camera module, as depicted in Fig. 3(c), was mounted at the front of the vehicle and equipped with two visual cameras and an infrared camera, all with 3D-printed lens hoods. The two visual cameras were configured as a stereo camera and synchronized at the hardware level using a trigger cable. Images were acquired simultaneously from both cameras at a frequency of 10 Hz. The infrared camera had a temperature measurement range of -25 to 135\({}^{\circ}\)C, and automatic thermal calibration at the hardware level was performed during data collection to compensate for device temperature variations. Thermal images were acquired at a rate of 10 Hz. An omnidirectional camera and a marine radar were installed on a vertical slide at the back of the vehicle.
The slide was kept at its lowest position while the vehicle passed under low bridges and was manually raised to obtain better views. The omnidirectional camera captured six images (five horizontal views and one top view) simultaneously at a frequency of 10 Hz. The models and detailed specifications of all sensors are available on our dataset website. ### Data collection system Figure 4 shows a schematic diagram of the data collection process, which involved three computers, referred to as modules A, B, and C. Module A recorded the images captured using the omnidirectional camera, module B recorded left and right stereo images, infrared images, and point clouds of the front, port, and starboard LiDARs, while module C recorded radar images, GPS signals, and IMU data. The system times of the three computers were synchronized tightly using the Chrony (Curnow and Lichvar, 2014) library, and the system times of modules A and B were periodically synchronized with module C. The data recording initiation and termination were carried out by sending commands from module C to modules A and B. In order to avoid delays and loss of data during the simultaneous streaming of data from multiple sensors, the recording programs for each module were designed to be more efficient. To address the issue in module A, where converting the omnidirectional camera image stream to image files was slowing down the recording process, the image streams were saved as binary files and converted during post-processing. In module B, three individual processes were run simultaneously to record stereo image pairs, infrared images, and data from the three LiDARs. Multiple threads within each process were activated to prevent data loss. On the other hand, module C required only a single-threaded process for each sensor measurement, as the data size for each sensor was relatively small. Figure 4: Data collection system Figure 3: Hardware sensor configuration: a diagram of the vehicle viewed from the starboard side (a) and front side (b), a picture of the camera module (c), and the omnidirectional camera and radar mounted on the vertical slide (d). The coordinates are depicted using red (x), green (y), and blue (z) arrows. ## 4 Sensor calibration ### Intrinsic calibration of cameras For accurate estimation of the intrinsic parameters of the cameras, two structured calibration boards were used for the visual and infrared cameras. The intrinsic calibration of each camera was performed using the method described in Zhang (2000), which involves using a checkerboard pattern calibration board for the visual cameras in the stereo camera and omnidirectional camera, and a custom-made calibration board that can reflect infrared lights for the infrared camera. ### Extrinsic calibration Extrinsic calibration between the perceptual sensors was performed using either the calibration board or unstructured measurement data, taking into account the overlap between their measurement ranges and the limitations of the experimental environment. In the case of the stereo camera, the images had sufficient overlapping regions, allowing for most of the co-visible area to be covered using the calibration board. However, since the omnidirectional camera was located at the back of the vehicle, several portions of the images were occluded by the vehicle, resulting in an inadequate co-visible area for reliable calibration. 
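As an illustrative aside (a sketch of how the board-based calibration in Sections 4.1 and 4.2 is commonly implemented, not the authors' actual pipeline), the checkerboard method of Zhang (2000) is available through OpenCV; the board geometry and file paths below are placeholders rather than the values used for this dataset.

```python
import glob
import cv2
import numpy as np

# Placeholder board geometry and image paths (not the actual values used for the dataset).
pattern_size = (9, 6)    # inner corners per checkerboard row and column
square_size = 0.05       # checkerboard square edge length in meters

# 3D corner coordinates in the board frame (the board lies in the z = 0 plane).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_shape = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_shape = gray.shape[::-1]

# Zhang's method: jointly estimate focal lengths, principal point, and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_shape, None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```

The same detected corners from a synchronized image pair can then be passed to a routine such as cv2.stereoCalibrate to estimate the stereo extrinsics, which is one common way to realize the board-based extrinsic step described above.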
The extrinsic parameters between the front LiDAR and other perceptual sensors were estimated using an in-motion calibration method, due to the vehicle's movement and the asynchronous sensor measurements. This method involved constructing a point cloud map using a LiDAR SLAM approach and estimating the LiDAR poses from the SLAM result. Next, the sensor pose in the global coordinate system was estimated by matching the sensor data to the point cloud map (refer to sections 4.2.2 to 4.2.3). The LiDAR pose when the sensor data was collected was determined by interpolating the given LiDAR poses using the measurement timestamps. Finally, the extrinsic parameters were calculated using the estimated LiDAR pose and the sensor pose in the global coordinate system. #### 4.2.1 Omnidirectional camera calibration The extrinsic calibration for the cameras in the omnidirectional camera was achieved by utilizing ArUco markers (Romero-Ramirez et al., 2018) to improve the correspondence detection between images, as the overlapping regions between them were narrow. A factor graph framework was used with camera pose nodes and 3D point nodes of the calibration board in GTSAM (Dellaert, 2012) to achieve accurate calibration. #### 4.2.2 LiDAR to LiDAR calibration The point clouds obtained from each LiDAR are shown in Figure 5(a). The initial calibration was performed by manually adjusting the rotation and translation of the port and starboard LiDARs. Then, to achieve fine calibration between the front/port LiDARs and front/starboard LiDARs, the Generalized Iterative Closest Points (GICP) method (Segal et al., 2009) was applied to each port and starboard point cloud to align them with the point cloud map generated by the front LiDAR. #### 4.2.3 LiDAR to camera calibration The extrinsic calibration between the front LiDAR and cameras was performed by solving linear PnP (perspective-n-points) (Quan and Lan, 1999) problems using carefully selected points in the images and corresponding points in the point cloud map generated with the front LiDAR. Figure 5(b) shows the calibration results for the LiDARs to the left stereo camera, while Figure 5(c) shows the results for the omnidirectional camera. ### Hand-eye calibration As the GPS and AHRS have no perceptual correlation with the LiDAR, we utilized the hand-eye calibration method (Tsai and Lenz, 1989) for the extrinsic calibration between the AHRS and the LiDAR and GPS. #### 4.3.1 AHRS to LiDAR calibration The positional drift due to accelerometer error integration can be severe. However, the orientation can be precisely measured, as the AHRS provides attitude measurements from the direction of gravity and geomagnetic force. Due to these limitations, only the rotation component of the AHRS to LiDAR calibration is obtained using the hand-eye calibration method, and the relative translation is determined based on the ASV blueprint. The dataset contains large rotational movements near a feature-rich pier area, so orientation measurements from the AHRS and LiDAR odometry were sampled to estimate the relative rotation by solving the hand-eye problem. Figure 5: Example of LiDAR data and their projection onto the cameras #### 4.3.2 AHRS to GPS calibration The extrinsic calibration between the AHRS and GPS was performed using both the LiDAR-inertial odometry and GPS 2D pose measurements. As the GNSS-RTK provided accurate location and heading information, both translation and rotation were considered for the relative pose estimation.
Near the pier, where roll and pitch motions were negligible, we sampled data and used the 2D poses from the LiDAR-inertial odometry and GPS measurements to calculate translation matrix from AHRS to GPS, \(T_{G}^{A}\). This equation relates the transformation matrix between the AHRS coordinate systems at time \(t_{i}\) and \(t_{j}\), \(T_{A_{j}}^{A_{i}}\), and the transformation matrix between GPS coordinate systems at time \(t_{i}\) and \(t_{j}\), \(T_{G_{i}}^{G_{i}}\), as shown in Fig. 6. Figure 7 shows the extrinsic calibration result, where the LiDAR point clouds were projected onto satellite maps based on GPS measurements. ### Baseline trajectory While the GNSS-RTK delivers highly accurate pose information in most areas, it is constrained in regions where the vehicle passes under bridges. Therefore, a straightforward GNSS-INS navigation system may not be dependable in these areas. To address this issue, we utilized the front LiDAR measurements in conjunction with the GNSS-RTK and AHRS measurements in an incremental smoothing and mapping (iSAM) (Kaess et al., 2008) graph SLAM framework that was implemented in GTSAM. The graph structure for the SLAM framework is depicted in Figure 8. The LiDAR-inertial odometry framework utilized in this study is based on LIO-SAM (Shan et al., 2020), with modifications made to the point cloud feature extraction method. Unlike urban environments where many objects can be detected as dense point clouds, the objects in the canal area are mainly detected as sparse point clouds. Additionally, the water surface in maritime environments does not provide sufficient information for LiDAR odometry performance, as the water absorbs the light. Therefore, a reliable feature extraction method is crucial for accurate LiDAR odometry in maritime environments. The feature extraction method in LIO-SAM, which is based on LOAM (Zhang and Singh, 2014), determines whether a point is a corner point or a surface point based on the curvature of the point and its neighboring points. However, this method often rejects points that have few neighboring points or are occluded, which can result in the rejection of points at a distance and thin pole-like objects. To improve the performance of LiDAR odometry in the canal area, we modified the point rejection algorithm by skipping occluded neighboring points and defining pole-like objects as corner features. Figure 9 illustrates the point cloud and feature extraction results of LIO-SAM and the modified algorithm. Figure 10 shows the point cloud mapping results from the canal area to the inner-port area using LIO-SAM and the modified method, overlaid on the satellite map. The GPS factor is created by inserting UTM coordinates and heading measurements obtained from the GPS, while the AHRS factor is created by inserting the orientation measurements obtained from the AHRS. However, since the direction of the true north measured by the GPS is different from the magnetic north measured by the AHRS, we only used the gravity direction estimated from the AHRS Figure 8: Factor graph model for baseline trajectory estimation Figure 6: Top view of the vehicle at time \(t_{i}\) and \(t_{j}\). The AHRS and GPS coordinate systems are shown as {A} and {G}. Figure 7: Point clouds of LiDARs and their projections on the satellite map based on GPS data Figure 9: Comparison of feature extraction results in different places. 
The images on the left display the original point cloud, the middle images depict the feature extraction outcomes using LIO-SAM, and the right images represent the modified feature extraction results. The green points indicate surface points, and the red points indicate corner points. orientation measurement to correct the roll and pitch of the vehicle. ## 5 The canal dataset The data used in the study was gathered from Pohang Canal, located in South Korea. The canal area consists of several regions, including a narrow canal area, an inner port area, an outer port area, and a near-coastal area, with a total length of 7.5 km. Figure 11 shows the map of the experimental location, the trajectory of the vehicle, and examples of the collected data. The images in columns (b), (c), and (d) correspond to locations 1, 2, 3, and 4, respectively, as depicted in Fig. 11. ### Experimental environment The main canal area is approximately 15 to 30 m wide and surrounded by a park and small buildings. In this region, LiDAR data produced a dense point cloud with rich geometrical features, while radar data was mostly obstructed by buildings and contained noise. The inner port area was split into two sides; on the west, there were fish markets and shops, while on the east, there were shipyards. Consequently, numerous fishing boats docked on the west side, while ships of various sizes were located on the east side. The water width in this area ranged from 50 to 100 m, and LiDARs acquired a sparse point cloud in the distance. Seawalls, passenger terminals, and storage buildings were present in the outer port area, alongside large cruise ships, cargo ships, and coast guard ships. LiDAR data hardly captured point clouds of on-land objects in this region, while radar data clearly revealed the coastline in the distance. A vast steelworks plant was located southeast of the experimental site, and it was mostly visible from the near-coastal area. The vehicle experienced relatively significant roll and pitch movements in the near-coastal area due to a sailing speed of around 10 knots, while the vehicle speed was 5 to 7 knots in the canal and port areas. The omnidirectional camera was removed during nighttime data collection due to safety concerns. Detailed explanations and characteristics of each dataset are available on our dataset website. ### Dataset structure The structure of each data sequence is depicted in Fig. 12. All sensor data files were recorded along with the time they were received. The timestamps of each sensor data were saved either in the form of text files associated with the data file and its corresponding time or as data file names. The timestamps of the stereo camera, infrared camera, omnidirectional camera, and radar data were recorded as timestamp.txt for each sensor. These timestamp files contained the sequence of each datum (which was set as the datum name) and Unix time in the tab-delimited format. The name of each LiDAR data file was set as the timestamp in nanoseconds. #### 5.2.1 GPS data The three global navigation satellite system messages received from the GNSS-RTK receiver were GNGGA, GNHDT, and GNRMC. To record the GPS data, these three messages were combined, and eleven sequential values were saved for each timestamp in the navigation/gps.txt file. 
The values recorded in this file included Unix time, GPS time, latitude, the hemisphere of latitude (N/S), longitude, the hemisphere of longitude (E/W), heading (in degrees), GPS quality indicator, the number of satellites used, horizontal dilution of precision, and geoid height. All values were saved in the tab-delimited format. #### 5.2.2 AHRS data The AHRS recorded orientation, angular rate, and linear acceleration data, which were saved as eleven sequential values for each timestamp in the tab-delimited format in the navigation/ahrs.txt file. The values included Unix time, orientation represented as quaternion values (qx, qy, qz, qw), angular rate in the x, y, and z directions, and linear acceleration in the x, y, and z directions. #### 5.2.3 3D LiDAR data The LiDAR data from the front, port, and starboard sensors were stored in separate folders within the lidar directory. The front LiDAR data was stored in the lidar_front folder, while the port and starboard LiDAR data was stored in the lidar_port and lidar_starboard folders, respectively. In each LiDAR folder, the imu.txt file contained inertial measurement data from the embedded IMU. The imu.txt file contained seven values for each timestamp, including Unix time, angular rates in the x, y, and z directions, and linear acceleration in the x, y, and z directions. The LiDAR point cloud data was provided in binary files and included the 3D coordinates, intensity, sensor time, reflectivity, ambient, and range value of each point. The number of points in each point cloud was equal to the product of the number of channels (64 for the front LiDAR and 32 for the port and starboard LiDARs) and the axial resolution (2048 for all LiDAR data in our dataset). The data was recorded in a sequential ring-wise manner, with the ring value of each point calculated as the quotient of the point sequence number and the number of points in a ring. The LiDAR's internal clock recorded the sensor time of each point, which can be used to de-skew the point cloud. Each point cloud was saved in a separate file in the points directory with the filename format <time>.bin. Detailed methods for reading these binary files in both C++ and Matlab are explained on our data website. #### 5.2.4 Radar data When the radar completed one cycle, the radar images were saved in the radar/images folder. As the rotation rate of the radar was low and inconsistent over time, the images were likely to be skewed whenever the platform was moved. Two timestamp files, namely timestamp.txt and timestamp_deg.txt, were created. Figure 10: LiDAR-inertial SLAM result using LIO-SAM (red point cloud) and modified method (yellow) in the canal area to inner-port area. The timestamp.txt file provided only the image sequence and time pairs per cycle, while the timestamp_deg.txt file additionally provided the updated angle and time pairs within each image. #### 5.2.5 Stereo camera data The left and right images from the stereo camera are stored in the left_images and right_images folders respectively, within the stereo folder. Since each stereo image pair was captured synchronously, the sequence value in timestamp.txt represents both the left and right image files for the same timestamp value. The images were saved in png format with a resolution of 2048 \(\times\) 1080 pixels. Figure 11: Data acquisition map, trajectory, and examples of data Figure 12: Dataset structure #### 5.2.6 Infrared camera data The infrared images were stored in the infrared/images folder.
The images were recorded in 16-bit png format with a resolution of 640 \(\times\) 512 pixels, encoding 14-bit thermal data. The temperature of each pixel can be calculated using the equation: \[t=0.04p-273.15 \tag{1}\] where \(p\) is the pixel value and \(t\) is the corresponding temperature in degrees Celsius. The data collection frequency was set to 10 Hz, but some intervals between images were longer than others due to thermal calibration. #### 5.2.7 Omnidirectional camera data The six omnidirectional camera images were stored in the cam_0, cam_1, cam_2, cam_3, cam_4, and cam_5 folders within the omni folder. Similar to the stereo camera data, the sequence value in timestamp.txt indicates the corresponding images from all six cameras collected synchronously for the same time value. Each image was stored in jpg format with a resolution of 2464 \(\times\) 2048 pixels. #### 5.2.8 Calibration The calibration folder contains two files, extrinsics.json and intrinsics.json, that provide the extrinsic and intrinsic calibration parameters, respectively. The extrinsics.json file contains the parameters that relate the AHRS sensor to each of the other sensors. On the other hand, the intrinsics.json file contains the intrinsic calibration parameters for each camera, such as focal length, principal point, and distortion coefficients. #### 5.2.9 Baseline trajectory The baseline trajectory is saved as eight sequential values for each timestamp in tab-delimited format. The values include Unix time, orientation as a quaternion (qx, qy, qz, qw), and translation (x, y, z). The trajectory data is stored in the navigation/baseline.txt file. ### Data sequence and known issues The dataset comprises six data sequences, consisting of four daytime sequences and two nighttime sequences, as shown in Table 1. Due to technical issues, the GPS data for sequences pohang00 and pohang02 was collected without RTK measurements, and sequence pohang01 has missing GPS data during operation. ### Data player Inspired by Jeong et al. (2019), we provide the dataset player for the ROS environment. The source code and the description can be found at [https://github.com/dhchung/rosmsg_player](https://github.com/dhchung/rosmsg_player). ## 6 Conclusion In this paper, we present a multimodal maritime dataset that contains continuous onboard navigation and perceptual data acquired in various maritime environments. The dataset includes GPS data obtained using a GNSS RTK receiver with two GPS antennas and AHRS data. Additionally, to address the need for various types of perceptual sensors with different characteristics and measurement ranges in the maritime environment, the dataset includes three LiDARs, a marine radar, a stereo camera, an infrared camera, and an omnidirectional camera. The dataset has been designed to facilitate full-scale autonomous navigation research and can be utilized in various domains, including sensor fusion, structure reconstruction and assessment, and other robotic applications. ## Acknowledgements This work was supported by AVIKUS corp and the city of Pohang. We greatly appreciate their support and active participation in this study.
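As an illustrative addendum to the file formats in Section 5.2 (a minimal sketch assuming only the column layouts described above; it is not an official loader), the tab-delimited navigation files and the infrared temperature conversion of Eq. (1) can be read as follows.

```python
import numpy as np

def read_gps(path="navigation/gps.txt"):
    """Parse the tab-delimited GPS log of Sec. 5.2.1. Each row holds: Unix time, GPS time,
    latitude, N/S, longitude, E/W, heading [deg], quality indicator, number of satellites,
    horizontal dilution of precision, and geoid height."""
    with open(path) as f:
        return [line.rstrip("\n").split("\t") for line in f if line.strip()]

def read_ahrs(path="navigation/ahrs.txt"):
    """Parse the tab-delimited AHRS log of Sec. 5.2.2: Unix time, quaternion (qx, qy, qz, qw),
    angular rate (x, y, z), and linear acceleration (x, y, z)."""
    return np.loadtxt(path, delimiter="\t")

def infrared_to_celsius(image_16bit):
    """Convert a 16-bit infrared image (14-bit thermal data) to degrees Celsius via Eq. (1)."""
    return 0.04 * image_16bit.astype(np.float64) - 273.15
```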
2304.01794
Partial measurements of the total field gradient and the field gradient tensor using an atomic magnetic gradiometer
Magnetic gradiometers have wide practical and academic applications, and two important types of field gradient observables are the total field gradient and field gradient tensor. However, measurements of the field gradient tensor have not been the focus of previous researches on atomic magnetic gradiometers. In this work, we develop an atomic magnetic gradiometer based on two separately optically pumped atomic ensembles in a Herriott-cavity-assisted atomic cell. This gradiometer shows versatile operation modes and functions, and we demonstrate them in measurements of both types of field gradient observables.
Qianqian Yu, Siqi Liu, Xueke Wang, Dong Sheng
2023-04-04T13:53:24Z
http://arxiv.org/abs/2304.01794v1
Partial measurements of the total field gradient and the field gradient tensor using an atomic magnetic gradiometer ###### Abstract Magnetic gradiometers have wide practical and academic applications, and two important types of field gradient observables are the total field gradient and field gradient tensor. However, measurements of the field gradient tensor have not been the focus of previous researches on atomic magnetic gradiometers. In this work, we develop an atomic magnetic gradiometer based on two separately optically pumped atomic ensembles in a Herriott-cavity-assisted atomic cell. This gradiometer shows versatile operation modes and functions, and we demonstrate them in measurements of both types of field gradient observables. ## I Introduction In the outdoor searches of objectives that are magnetic or magnetizable, magnetic field gradients usually originate from the targets while the background field gradient is normally negligible [1]. Therefore, the field gradients contain valuable information about the targets. Moreover, with the help of differential detections, the measured field gradient is largely free of the noises and drifts in the background field. In addition, the field gradient is relatively less sensitive to the sensor orientation compared with direct vector-field measurements [2]. For these reasons, systems measuring the field gradients, magnetic gradiometers, have been widely used in underwater explorations [3], geological surveys [4, 2, 5, 6], space science [7], magnetic anomaly navigations [8], and bio-imaging [9, 10] applications. There are two types of field gradients that can be measured by magnetic gradiometers. One is the total field gradient, \(\nabla B\), where \(B\) is the total magnitude of the field. The other one is the field gradient tensor, \(\nabla\mathbf{B}\), with \(\mathbf{B}=\sum_{i}B_{i}\hat{r}_{i}\). According to basic electromagnetic relations, there are five independent elements in the field gradient tensor in free space. Though both types of field gradients share the same aforementioned advantages in different applications, the field gradient tensor contains richer information, and the measurement results of the field gradient tensor are more convenient for geophysical interpretations [11, 1]. Currently, practical magnetic gradiometers are mainly based on superconducting quantum interference devices (SQUIDs) and fluxgates, both of which can measure vector components of the bias field. People started to pay attention to the field gradient tensor since 1970s [12, 13], and there have been great developments in this field over the past two decades [14, 15, 16, 17, 18, 19, 2]. However, each type of sensors has its own problems. While the SQUID-based gradiometers are relatively bulky and heavy due to the additional dewar to maintain the low temperature environment, the fluxgate-based gradiometers are not sensitive and stable enough. Atomic magnetometers are characterized by the high field sensitivity [20] and potential for miniaturization [21], which make them promising for magnetic gradiometer applications. When the bias field is low, atomic magnetometers can work in the spin-exchange-relaxation-free (SERF) regime. Since the SERF magnetometers have abilities to detect vector components of the field, gradiometers based on this technology can measure elements of the gradient tensor [22, 23, 24, 25]. 
In the more common cases where the bias field is on the order of 10 \(\mu\)T, atomic magnetometers usually work in the scalar mode, and the corresponding gradiometers have only been demonstrated to detect elements of the total field gradient [26, 27, 9, 10]. In this work, we develop an atomic magnetic gradiometer based on two separated \({}^{85}\)Rb Bell-Bloom magnetometers in a single multipass cell, and demonstrate that this sensor can be used in multiple ways to measure both the total field gradient and gradient filed tensor elements in the presence of a bias field. Following this introduction, Sec. II introduces theoretical background of this work, Sec. III describes the Bell-Bloom scalar magnetometry and two modes of the gradiometer to partially measure the total field gradient, Sec. IV describes two modes of the gradiometer measuring multiple elements of the field gradient tensor, and Sec. IV concludes the paper. ## II Theoretical background ### Notations of the field gradients Considering that the background field is \(\mathbf{B}_{0}\) and the field from the target is \(\mathbf{B}_{a}\), the total field difference between the cases with and without the target is: \[\Delta B_{s}=|\mathbf{B}_{0}+\mathbf{B}_{a}|-B_{0}, \tag{1}\] and the vector field difference is \(\Delta{\bf B}_{v}={\bf B}_{a}\). A key difference between these two parameters is that \(\Delta B_{v}\) satisfies the Laplace equation, while it is not always true for \(\Delta B_{s}\)[1]. In cases that \(B_{a}\ll B_{0}\), \(\Delta B_{s}\approx\hat{\bf B}_{0}\cdot{\bf B}_{a}\), with \(\hat{\bf B}_{0}\) as the unit vector of \({\bf B}_{0}\). Then the total field gradient is \({\mathbf{G}}=\nabla(\Delta B_{s})\), with the component of \({\mathbf{G}}\) along the \(r_{i}\) axis as \[G_{i}=\frac{\partial\Delta B_{s}}{\partial r_{i}}\approx\frac{\partial(\hat{ \bf B}_{0}\cdot{\bf B}_{a})}{\partial r_{i}}=\sum_{j}\hat{\bf B}_{0}\cdot\hat{r }_{j}B_{a,ji}, \tag{2}\] where \(B_{a,ji}=\partial B_{a,j}/\partial r_{i}\) is an element of the gradient tensor \({\bf B}_{a}\). According to Maxwell equations, in free space there are only five independent gradient tensor elements of \({\bf B}_{a}\): \(B_{a,xx}\), \(B_{a,yy}\), \(B_{a,xy}\), \(B_{a,xz}\), and \(B_{a,zy}\). The analytic functions formed by \(G_{i}\) are useful to quantitatively interpret the potential-field data in two-dimensional [28] and three-dimensional spaces [29; 30]. More sophisticated relations between \(B_{a,ij}\) and \(G_{i}\) can be found using Fourier and Hilbert transformations [31]. ### Bell-Bloom optical pumping Bell-Bloom optical pumping [32] is an efficient method to directly generate transverse atomic polarization. The dynamics of the electron spin polarization \({\mathbf{P}}\) is described by the Bloch equation [33; 34]: \[\frac{d{\mathbf{P}}}{dt}=\gamma{\mathbf{P}}\times{\mathbf{B}}+\frac{1}{Q(P)}[R_{OP}({\mathbf{ s}}-{\mathbf{P}})-R_{d}{\mathbf{P}}], \tag{3}\] where \(\gamma\) is the atomic gyromagnetic ratio, \(Q(P)\) is the nuclear spin slowing down factor, \(R_{OP}\) is the optical pumping rate, \({\mathbf{s}}\) is the photon spin of the light beam, \(R_{d}\) is the atom depolarization rate in the absence of light. For a pump beam which is amplitude modulated at a frequency \(\omega\), the optical pumping rate can be expressed as: \[R_{OP}=a_{0}+\sum_{n=1}^{\infty}a_{n}\cos(n\omega t-\alpha_{n}), \tag{4}\] where \(a_{i}\) is the corresponding coefficient of the Fourier expansion series of \(R_{OP}\). 
When the pump beam is on resonance with the Rb D1 transition and \(\omega\) is close to the atomic Larmor precession frequency \(\omega_{L}\), a substantial transverse atomic polarization can be built. For a special configuration that the bias field is along the \(z\) axis, and the pump beam propagates along the \(x\) direction, the component of the atomic polarization modulated at \(\omega\) can be expressed as [35; 36] \[P_{x}+iP_{y}=\frac{sa_{1}}{2[R+iQ(P)(\omega_{L}-\omega)]}e^{-i(\omega t-\alpha _{1})}, \tag{5}\] where \(R=R_{d}+a_{0}\). If the a linearly polarized off-resonant probe beam propagates along the \(y\) axis, the rotation of the probe beam polarization due to the photon-atom interaction is proportional to the real part of \(P_{y}\) in Eq. (5) [37], \[Re(P_{y})=\frac{sa_{1}}{2}\left[\frac{-R\sin(\omega t-\alpha_{1} )}{R^{2}+Q^{2}(P)(\omega_{L}-\omega)^{2}}\right.-\] \[\left.\frac{Q(P)(\omega_{L}-\omega)\cos(\omega t-\alpha_{1})}{R^{2 }+Q^{2}(P)(\omega_{L}-\omega)^{2}}\right.. \tag{6}\] For a more general case, the bias field is along an arbitrary direction, so that the angle between the bias field and the pump beam direction (\(x\) axis) is \(\psi_{x}\), and the angle between the projection of the bias field on \(y\)-\(z\) plane and the \(y\) axis is \(\theta_{yz}\). The component of the atomic polarization modulated at \(\omega\) along the \(y\) axis is [35]: \[P_{y}= \frac{sa_{1}d\sin\psi_{x}}{2}\left[\frac{-R\cos(\omega t-\alpha_{ 1}-\beta)}{R^{2}+Q^{2}(P)(\omega_{L}-\omega)^{2}}\right.\] \[+\left.\frac{Q(P)(\omega_{L}-\omega)\sin(\omega t-\alpha_{1}- \beta)}{R^{2}+Q^{2}(P)(\omega_{L}-\omega)^{2}}\right], \tag{7}\] where \(d=\sqrt{\cos^{2}\psi_{x}\cos^{2}\theta_{yz}+\sin^{2}\theta_{yz}}\), and \(\beta=\sin^{-1}(\sin\theta_{yz}/d)\). Therefore, if the driving signal for modulating the pump beam amplitude is used to demodulate the signal due to the probe beam polarization rotation, we will get Lorentzian and dispersion line shapes in the demodulation outputs with suitable phase choices. ### Effect of a half-wave plate on the probe beam For a linearly polarized probe beam with the polarization tilted from the \(x\) axis by an angle of \(\theta\), its polarization can be expressed by the Jones vector as \(P_{i}=(\cos\theta,\sin\theta)\)[38]. Suppose that it passes two polarized atomic ensembles successively, and the polarization of the probe beam is rotated by \(\alpha\) and \(\beta\) due to the interaction between photon and atoms in each ensemble, respectively. Then, the polarization of the transmitted probe beam is \[P_{o}=R(\beta)R(\alpha)P_{i}=(\cos(\theta+\alpha+\beta),\sin(\theta+\alpha+ \beta)), \tag{8}\] where \(R\) is the two-dimensional rotation matrix. If we put a half-wave plate (fast axis along the \(x\) axis) in between the two atomic ensembles, then the polarization of the transmitted probe beam is modified to \[P_{o}=R(\beta)HR(\alpha)P_{i}=(\cos(\theta+\alpha-\beta),\sin(\theta+\alpha- \beta)), \tag{9}\] where \(H\) is the Jones matrix for the half-wave plate. It can be concluded from the equation above that the probe beam differentially detects the two polarized atomic ensembles separated by this half-wave plate. ## III Partial measurement of the total field gradient The sensor used in this work is shown in Fig. 1(a). A key element of this sensor is a Herriott-cavity-assisted vapor cell [35; 36], which is made by the anodic bonding technique, and filled with enriched \({}^{85}\)Rb atoms and 150 Torr N\({}_{2}\) gases. 
The cavity consists of two cylindrical mirrors with a curvature of 100 mm, a diameter of 12.7 mm, and a thickness of 2.5 mm. The distance between the two mirrors is 19.3 mm, and the relative angle between their symmetrical axes is 52.2\({}^{\circ}\). This cell is placed on a three-dimensionally-printed optical platform and heated to a temperature around 85\({}^{\circ}\)C by running an ac current through ceramic heaters. This magnetometer sensor sits in the middle of five-layer mu-metal shields. Solenoid coils and two sets of orthogonal cosine-theta coils [39] inside the innermost shield are used to control the bias field, and extra coils are added to control the field gradient. Both the pump and probe beams for the sensor are generated from distributed-Bragg-reflector diode lasers, and fiber coupled to the sensor platform. The pump laser, resonant with Rb D1 transition, is power modulated by an acoustic-optical modulator (AOM) with a modulation frequency of \(\omega\) and a duty cycle of 20%, and then is split into two separated beams before being independently sent to the sensor. Both pump beams are circularly polarized and enter the cell from the \(x\) direction, with the same beam diameter of 5 mm, and a separation between the beam centers of 1.2 cm. There is a hole with a diameter of 2.5 mm in the center of the front mirror of the multipass cell, and a linearly polarized probe beam, with a beam power of 1.6 mW and a blue detuning of 50 GHz from the D1 line, enters and exits the cavity from the same hole with 21 reflections inside the cavity. The polarization of the transmitted probe beam is analyzed by a polarization beam splitter (PBS), and a pair of differential photodiode detectors (PDs). All electrical signals pass through a printed circuit board (PCB), which is attached with the sensor. A shielded cable is used to transfer most of the electrical signals to the control electronics. When one of the pump beams is blocked, the sensor works as a scalar magnetometer. The current signals from PDs are converted to voltage signals by transimpedance amplifiers (TIA), and are demodulated by a lock-in amplifier (LIA) as shown in Fig. 1 (b). As the pump beam amplitude modulation frequency \(\omega\) is scanned across the Larmor precession frequency \(\omega_{L}\), the in-phase and out-of-phase outputs from LIA show a Lorentzian and a dispersion line shape, respectively (see Fig. 2 (a)). Due to the fact that the probe beam converges and diverges many times inside the Herriott cavity, the line shapes from the probe signal normally can not be described by a function with a single line width [40]. From the experience, we find that the data can be well fitted by a sum of two Lorentzian or dispersion functions [36; 40], \[f_{1}(\omega) = \sum_{m=1}^{2}a_{m}\frac{(\frac{\Gamma_{m}}{2})^{2}}{(\omega- \omega_{0})^{2}+(\frac{\Gamma_{m}}{2})^{2}}+b_{1},\] \[f_{2}(\omega) = \sum_{m=1}^{2}c_{m}\frac{(\frac{\Gamma_{m}}{2})^{2}(\omega- \omega_{0})}{(\omega-\omega_{0})^{2}+(\frac{\Gamma_{m}}{2})^{2}}+b_{2}. \tag{10}\] Here, we can qualitatively understand that the two fitting functions correspond to dividing the beam patterns inside the cavities to two groups according to beam sizes. The fitting results of a group of in-phase and out-of-phase signals are consistent with each other. 
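To make the fitting model of Eq. (10) concrete, the following sketch fits the in-phase resonance to the double-Lorentzian form \(f_{1}(\omega)\) with SciPy; the synthetic data and initial guesses are placeholders, not the measured signals.

```python
import numpy as np
from scipy.optimize import curve_fit

def f1(omega, a1, a2, gamma1, gamma2, omega0, b1):
    """Sum of two Lorentzians sharing the same center frequency, as in Eq. (10)."""
    l1 = a1 * (gamma1 / 2) ** 2 / ((omega - omega0) ** 2 + (gamma1 / 2) ** 2)
    l2 = a2 * (gamma2 / 2) ** 2 / ((omega - omega0) ** 2 + (gamma2 / 2) ** 2)
    return l1 + l2 + b1

# Placeholder data: in practice omega is the scanned pump modulation frequency and
# signal is the in-phase lock-in output (arbitrary units here).
omega = np.linspace(-5.0, 5.0, 400)
signal = f1(omega, 1.0, 0.4, 0.8, 2.5, 0.0, 0.02) + 0.01 * np.random.randn(omega.size)

p0 = [1.0, 0.5, 1.0, 2.0, 0.0, 0.0]   # rough initial guesses for the six parameters
popt, _ = curve_fit(f1, omega, signal, p0=p0)
a1, a2 = popt[:2]
print("in-phase resonant amplitude a1 + a2 =", a1 + a2)
```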
Both the amplitude of the in-phase resonant response (\(f_{1}(\omega_{0})=a_{1}+a_{2}\)) and the slope of the out-of-phase resonant response (\(|\partial f_{2}(\omega)/\partial\omega|=|c_{1}+c_{2}|\)) can be used to characterize the scalar magnetometer signal. When scanning the pump beam detuning while keeping the other parameters the same, we find that the largest magnetometer signal appears at a pump beam detuning around 2 GHz, as shown in Fig. 2 (b). Here, the atomic transition resonance frequency refers to the pressure-shifted value relative to the case without buffer gases [41], and this convention is used in the rest of the paper. Since the full width of the Rb D1 transition is pressure broadened to be around 4 GHz [41] due to the buffer gas, we can neglect the hyperfine splittings of the Rb \(5P_{1/2}\) state (\(\sim 0.4\) GHz). The ground hyperfine splitting is 3.0 GHz, and the transition resonance point (zero detuning point in Fig. 2(b)) corresponds to the laser frequency in between the transitions \(|5S_{1/2},F=2\rangle\rightarrow|5P_{1/2}\rangle\) and \(|5S_{1/2},F=3\rangle\rightarrow|5P_{1/2}\rangle\). Figure 1: Plot (a) illustrates the gradient magnetometer. \(\lambda/2\): half-wave plate, \(\lambda/4\): quarter-wave plate. Plot (b) shows the electrical signal processing of a Bell-Bloom scalar magnetometer. DAQ: data acquisition. In cases that the pump beam is blue detuned, the laser is prone to drive the former transition; while the pump beam is red detuned, the laser is prone to drive the latter transition. Considering the redistribution of atoms due to spin-exchange collisions and quenching effects, there are more atoms populated in the ground dark state (a stretched state of \(|5S_{1/2},F=3\rangle\)) when the lower hyperfine states are resonantly driven [42]. However, due to the light shift effect induced by the off-resonant pump beams, we still choose to keep the pump beam on resonance with the Rb D1 transition in normal operations. When the other pump beam is unblocked, there are two isolated Bell-Bloom scalar magnetometers inside the same cell, considering the fact that the diffusion distance of atoms within the atomic depolarization time is much less than the separation between the two magnetometers. Since both magnetometers share the same probe beam, the signals of the magnetometers add up if the pump beam polarizations are the same, and the signals are subtracted if the pump polarizations are opposite. The latter case, which is shown in the left part of Fig. 3(a), corresponds to a magnetic gradiometer. Since this gradiometer is built on the scalar magnetometers, it can only detect the total field gradients. Moreover, the separation between the pump beams is along the \(y\) direction, and the sensor using the demodulation processes in Fig. 1(b) can only measure \(G_{y}\). This conclusion is true for all bias field directions, and in this section, we demonstrate the measurements of \(G_{y}\) by setting the bias field along the \(z\) axis for convenience. As indicated by Eqs. (6) and (7), we can directly read the gradiometer signal from the out-of-phase output, which shows a linear dependence on \(G_{y}\) (see Fig. 3(b)). While the powers of the two beams should be identical in the ideal case, they differ by up to 10% in practice due to slight differences in the experimental conditions for the two beams, such as the difference in beam sizes and the transmission conditions of the cell window.
In the experiment, we fixed the power of one pump beam, tuned the power of the other beam, and optimized the subtraction result from the gradiometer [26]. The gradient field sensitivity on \(G_{y}\) is measured to be 42 fT/cm/Hz\({}^{1/2}\) over the frequency range of 15 Hz to 40 Hz, as shown in Fig. 3(c). In this work, the sensitivity results are calculated from the power spectral density of the data using the Hanning window. From the discussions in Sec. II.3, we can realize the magnetic gradiometer with a different configuration. As shown in the right part of Fig. 3(a), both pump beams have the same polarization, while a true zero-order half-wave plate is added in the center of the multipass cell [36], which helps to realize a differential detection of the two polarized atomic ensembles. This configuration has an advantage over the previous one in that it can eliminate common-mode noise from the light shift if the pump beam frequency is detuned from resonance. In practice, the gradient field sensitivity of \(G_{y}\) in this configuration is measured to be 90 fT/cm/Hz\({}^{1/2}\) over the frequency range of 15 Hz to 40 Hz. The worse sensitivity of the latter configuration is probably due to two factors: one is that the working temperature of the half-wave plate limits the maximum cell temperature [36], and the other is that the quality of the half-wave plate degrades in the heated environment, and in turn the quality of the beam polarization degrades as the number of passes of the beam through the plate (22 at maximum) increases.

## IV Partial measurement of the field gradient tensor

To measure the field gradient tensor, the sensor needs to have the ability to distinguish the different vector components that contribute to the total field magnitude. One convenient way to reach this goal is to add modulation fields along different directions [43; 44; 45], and use the frequencies or phases of the modulation fields to selectively measure and control the field gradient tensor elements.

Figure 2: Plot (a) shows the demodulation outputs as a function of \(\omega\), and the lines are fitting functions using Eq. (10). Plot (b) shows the normalized amplitude (\(|a_{1}+a_{2}|/|a_{1}+a_{2}|_{max}\)) and slope (\(|c_{1}+c_{2}|/|c_{1}+c_{2}|_{max}\)) of the magnetometer signal as a function of pump beam detuning, where the resonance point is found by fitting an absorption profile of a linearly polarized beam. In both plots, the cell temperature is 80\({}^{\circ}\)C, the pump beam power is 2.25 mW, and the bias field is 5.0 \(\mu\)T along the \(z\) direction.

In this way, while keeping the components in Fig. 1(b) as the hardware for the first part of the data processing, we need extra LIAs for successive demodulations, as shown in Fig. 4(a). We choose the pump beam configuration on the left of Fig. 3(a) for this application. While the pump beam powers are modulated at the Larmor frequencies, according to Eq. (7), adding a field gradient modifies \(\omega_{L}\), and in turn affects the \(\omega-\omega_{L}\) term in the denominator of the equation. However, the value of the in-phase output is independent of the sign of the additional field gradient. Therefore, as shown in Fig. 4(b), the in-phase output of the first LIA (LIA1 in Fig. 4(a)) is a symmetric function of \(B_{zy}\) with a bias field of \(\mathbf{B}=(\hat{x}+\hat{y}+\hat{z})5.8~{}\mu\)T.
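To make the last point concrete, the following small NumPy sketch models the first lock-in's in-phase output as an even function of the total \(B_{zy}\) seen by the atoms, applies a modulation at \(\omega_{d}\), and demodulates at \(\omega_{d}\); the result is odd (and approximately linear near zero) in the bias value of \(B_{zy}\), which is what makes the feedback scheme described next possible. The response shape, modulation depth, and frequencies are illustrative assumptions, not experimental parameters.

```python
# Sketch of the error-signal generation behind Fig. 4: the in-phase LIA1
# output is modeled as an even function of the total B_zy at the sensor;
# modulating B_zy at w_d and demodulating at w_d gives a signal that is odd
# (near zero, linear) in the bias B_zy, suitable for feedback.
# The response width, modulation depth and frequencies are assumptions.
import numpy as np

def in_phase_response(bzy, width=10.0):
    """Even (symmetric) response of the first lock-in to B_zy, arb. units."""
    return 1.0 / (1.0 + (bzy / width) ** 2)

fs, w_d = 5000.0, 230.0                       # sample rate and modulation freq (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
mod = 2.0 * np.sin(2 * np.pi * w_d * t)       # B_zy modulation in nT/cm (placeholder)

def demodulated_output(bias):
    signal = in_phase_response(bias + mod)    # what LIA1 would output vs. time
    ref = np.sin(2 * np.pi * w_d * t)
    return 2.0 * np.mean(signal * ref)        # second lock-in (ideal low-pass)

for bias in (-4.0, -2.0, 0.0, 2.0, 4.0):
    print(f"bias B_zy = {bias:+.1f} nT/cm -> error signal = {demodulated_output(bias):+.4f}")
```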
This result is similar to the case of a magnetometer based on a single circularly polarized beam [22], where the transmission power of the beam is a symmetric function of the transverse field. By analogy, we can adopt the closed-loop operation scheme in Ref. [22] to measure the field gradient tensor elements. For example, we apply a modulation field with a frequency of \(\omega_{d}\) to the \(B_{zy}\) coils. The signal that is simultaneously modulated at \(\omega\) and \(\omega_{d}\) is extracted by successive demodulations, whose final output is proportional to the bias value of \(B_{zy}\). By passing this output to a proportional-integral (PI) controller with a set point of zero and feeding back on the \(B_{zy}\) coils, we can constantly null the bias value of \(B_{zy}\) at the position of the sensor. The feedback value from the PI output is converted to the value of \(B_{zy}\) using the current-to-field calibration factor of the gradient coil, which is measured by a fluxgate, and recorded as the gradiometer output.

Figure 3: Plot (a) shows two configurations for the atomic gradiometer. Plots (b) and (c) show the gradiometer response to \(G_{y}\) and the gradiometer sensitivity, respectively. The data for the normal cavity are taken with a bias field of 5.0 \(\mu\)T along the \(z\) axis, each pump beam power around 5.5 mW, and a cell temperature of 85 \({}^{\circ}\)C. The data for the cavity with a half-wave plate are taken with the same parameters except that the cell temperature is 80\({}^{\circ}\)C. The dashed lines in plot (b) are linear fitting results, and the two horizontal lines in plot (c) denote the average noise level in the range of 15 Hz to 40 Hz.

Figure 4: Plot (a) shows the signal processing for measuring elements of the field gradient tensor, where the sensor in the center of the coil system corresponds to the atomic gradiometer with the same configuration as the left one in Fig. 3(a). Plot (b) shows the out-of-phase output of LIA1 as a function of \(B_{zy}\), with a bias field of \(\mathbf{B}=(\hat{x}+\hat{y}+\hat{z})5.8~{}\mu\)T, a cell temperature of 85 \({}^{\circ}\)C, and each pump beam power around 5.5 mW.

Using the scheme described above, we apply modulation fields on the \(B_{xy}\), \(B_{yy}\) and \(B_{zy}\) coils, all of which share similar amplitudes around 10 nT/cm, but have different modulation frequencies, ranging from 175 Hz to 297 Hz. To calibrate the cross-talk between any two of the above three channels that measure the field gradient tensor elements, we operate both channels in the closed-loop mode, scan the current in the gradient field coil for one channel, and record the response of the other channel. In this way, the cross-talk between channels is calibrated to be less than 0.3%. As shown in Fig. 5 (a), the closed-loop gradient field sensitivities for these three elements are 0.67 pT/cm/Hz\({}^{1/2}\), 0.89 pT/cm/Hz\({}^{1/2}\), and 0.87 pT/cm/Hz\({}^{1/2}\) over the frequency range of 5 Hz to 10 Hz. Another way to realize the same functions is to apply the modulation fields along the bias field coils, instead of the gradient field coils, while keeping the other parts unchanged. Compared with the former method, this configuration avoids gradient field modulations, which reduce the atomic depolarization time. The experimental parameters are the same as in the aforementioned method, except that the amplitudes of the modulation fields applied on the three-axis bias-field coils are around 50 nT.
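Continuing in the same spirit, the closed-loop operation can be mimicked with a toy discrete-time PI loop; the sensor response, gains, and drift below are assumptions chosen only to make the loop behavior visible, not a model of the actual electronics.

```python
# Toy discrete-time sketch of the closed-loop nulling described above.
# The demodulated output is modeled as simply proportional to the residual
# B_zy at the sensor; the gain, PI coefficients and drift are assumptions.
import numpy as np

dt = 1e-2            # controller update period (s), placeholder
k_sensor = 1.0       # demodulated signal per (nT/cm) of residual B_zy, placeholder
kp, ki = 0.5, 5.0    # PI gains, placeholder

ambient = 2.0        # slowly drifting ambient B_zy in nT/cm (to be nulled)
coil = 0.0           # gradient produced by the feedback coil
integral = 0.0
log = []

for step in range(2000):
    ambient += 0.001 * np.sin(2 * np.pi * 0.2 * step * dt)   # slow drift
    residual = ambient + coil                                 # what the atoms see
    error = k_sensor * residual                               # demodulated signal, set point 0
    integral += error * dt
    coil = -(kp * error + ki * integral)                      # feedback on the B_zy coil
    log.append(-coil)                                         # recorded gradiometer output

print(f"final residual B_zy at sensor: {ambient + coil:.4f} nT/cm")
print(f"recorded (fed-back) value:     {log[-1]:.4f} nT/cm")
```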
The cross-talk between channels in this case is calibrated to be less than 0.2%, and the closed-loop gradient field sensitivities for \(B_{xy}\), \(B_{yy}\) and \(B_{zy}\) are improved to 0.35 pT/cm/Hz\({}^{1/2}\), 0.49 pT/cm/Hz\({}^{1/2}\), and 0.34 pT/cm/Hz\({}^{1/2}\) over the frequency range of 5 Hz to 10 Hz, as shown in Fig. 5(b).

## V Conclusion

In summary, we have developed a versatile Herriott-cavity-assisted atomic gradiometer, using two Bell-Bloom optical pumping beams and a single probe beam. We demonstrated its applications in measuring some elements of the total field gradient and the field gradient tensor. Currently, the sensor sensitivities on the field gradient components are mainly limited by the noise from the pump beams, which can be eliminated using a pulsed pump-probe scheme [10; 26; 46]. Because the two pump beams for the sensor are placed in parallel along the \(y\) axis in this work, the sensor can only measure the field gradient elements related to the \(y\) axis. For a full measurement of the total field gradient and the gradient tensor, we can add more pairs of pump beams in parallel along the \(x\) and \(z\) directions for differential detection of atomic ensembles along these two axes, or add more gradiometers with different orientations so that the sensitive directions of such gradiometers cover all three axes.

## Acknowledgements

This work was partially carried out at the USTC Center for Micro and Nanoscale Research and Fabrication. This work was supported by the National Natural Science Foundation of China (Grant No. 11974329), and Scientific Instrument and Equipment Development Projects, CAS (No. YJKYYQ20200043).
2308.08303
Leveraging Next-Active Objects for Context-Aware Anticipation in Egocentric Videos
Objects are crucial for understanding human-object interactions. By identifying the relevant objects, one can also predict potential future interactions or actions that may occur with these objects. In this paper, we study the problem of Short-Term Object interaction anticipation (STA) and propose NAOGAT (Next-Active-Object Guided Anticipation Transformer), a multi-modal end-to-end transformer network, that attends to objects in observed frames in order to anticipate the next-active-object (NAO) and, eventually, to guide the model to predict context-aware future actions. The task is challenging since it requires anticipating future action along with the object with which the action occurs and the time after which the interaction will begin, a.k.a. the time to contact (TTC). Compared to existing video modeling architectures for action anticipation, NAOGAT captures the relationship between objects and the global scene context in order to predict detections for the next active object and anticipate relevant future actions given these detections, leveraging the objects' dynamics to improve accuracy. One of the key strengths of our approach, in fact, is its ability to exploit the motion dynamics of objects within a given clip, which is often ignored by other models, and separately decoding the object-centric and motion-centric information. Through our experiments, we show that our model outperforms existing methods on two separate datasets, Ego4D and EpicKitchens-100 ("Unseen Set"), as measured by several additional metrics, such as time to contact, and next-active-object localization. The code will be available upon acceptance.
Sanket Thakur, Cigdem Beyan, Pietro Morerio, Vittorio Murino, Alessio Del Bue
2023-08-16T12:07:02Z
http://arxiv.org/abs/2308.08303v3
# Leveraging Next-Active Objects for Context-Aware Anticipation in Egocentric Videos

###### Abstract

Objects are crucial for understanding human-object interactions. By identifying the relevant objects, one can also predict potential future interactions or actions that may occur with these objects. In this paper, we study the problem of Short-Term Object interaction anticipation (STA) and propose NAOGAT (Next-Active-Object Guided Anticipation Transformer), a multi-modal end-to-end transformer network, that attends to objects in observed frames in order to anticipate the next-active-object (NAO) and, eventually, to guide the model to predict context-aware future actions. The task is challenging since it requires anticipating future action along with the object with which the action occurs and the time after which the interaction will begin, a.k.a. the time to contact (TTC). Compared to existing video modeling architectures for action anticipation, NAOGAT captures the relationship between objects and the global scene context in order to predict detections for the next active object and anticipate relevant future actions given these detections, leveraging the objects' dynamics to improve accuracy. One of the key strengths of our approach, in fact, is its ability to exploit the motion dynamics of objects within a given clip, which is often ignored by other models, and separately decoding the object-centric and motion-centric information. Through our experiments, we show that our model outperforms existing methods on two separate datasets, Ego4D and EpicKitchens-100 ("Unseen Set"), as measured by several additional metrics, such as time to contact, and next-active-object localization. The code will be available upon acceptance.

## 1 Introduction

Have you ever wondered how humans are able to effortlessly navigate their surroundings and perform actions based on what they see, especially in virtual reality (VR) and augmented reality (AR) environments? Such actions often involve contact with objects, which are referred to as _active objects_ in the egocentric vision literature [32]. Understanding and predicting the interactions with these objects are essential for enhancing the realism and interactivity of VR and AR experiences [17, 29].

Figure 1: Our proposed model, NAOGAT, uses both features from video frames and object detections, which are combined and fed to a transformer encoder. Given object queries and encoded frame features from the last observed frame, the NAO decoder predicts relevant next-active-object detections. This information from the NAO decoder is then passed along to the Motion decoder, which utilizes object dynamics to extract background information as object trajectories, to anticipate future frame interaction with encoder frame features (foreground motion) and next-active-object decoded features.

For example, Fig. 1 shows a first-person video clip where someone is about to perform a specific action. From the observed frames, it is reasonable to guess that the person is going to make contact with the glass to possibly perform a _wash_ or _fill_ action. This reasoning helps us to recognize two important cues from a video: (1) which object will be "used" (i.e., active) in the future, and (2) what possible actions can be performed with that object. The category of objects significantly influences the nature of actions performed on them [1]. For example, a _cut_ action might not be performed in this scenario if we know that _glass_ is the next-active-object.
Thus, intuitively, a model can make better predictions if it is able to anticipate which object(s) in the scene will likely be engaged in the very near future, to support and drive the identification of the future action. This can enable more immersive and interactive experiences, where users can seamlessly interact with virtual objects based on anticipated actions, leading to enhanced user engagement and satisfaction. This concept is particularly relevant in the Short-Term Anticipation (STA) task, which involves predicting the next-active-object (NAO) and its position, along with the time to contact (TTC) with that object, as well as the upcoming action, for a given video clip. This task depends on the assumption that the NAO is visible or present in the last observed frame, enabling its identification and localization [15]. The task that is more frequently exploited in the egocentric vision literature is instead action anticipation [13, 40], which refers to predicting a future action involving an object interaction without necessarily requiring the object to be visible in the last observed frame. The use of object-centric cues has shown significant promise in various video understanding tasks, such as action recognition [39, 31, 18], hand-object forecasting [24, 22, 8, 30], and action anticipation [23, 34, 12]. However, egocentric action anticipation methods [12, 40, 13, 23] have often overlooked such cues and mostly relied on holistic scene features and/or hand features. Indeed, for STA, there exists no method explicitly considering the information that could be gained from the objects and, more importantly, NAOs. In this paper, we propose a novel multi-modal architecture, called NAOGAT (Next-Active-Object Guided Anticipation Transformer), that involves training a model to attend to the objects in the last observed frame based on an observed video clip, allowing it to predict the next-active-object (NAO). By incorporating the most relevant objects in the upcoming action and modeling the object dynamics of an observed clip, our model can make more accurate predictions of future actions and time to contact with those object(s) (NAO) and improve its overall performance. The experimental analysis performed on two large-scale datasets, Ego4D [15] and EpicKitchen-100 (EK-100) [4], demonstrates the favorable performance of NAOGAT with respect to several other methods, and indeed proves the importance of NAO cues and object dynamics for the targeted task. Particularly, on the Ego4D dataset [15], we show a \(2.16\%\) gain in Average Precision for VERB \(+\) NAO, while a notable improvement of \(7.33\%\) is observed in estimating the TTC \(+\) NAO. The contributions of this paper can be summarized as follows: * We present a Transformer-based method, called NAOGAT, for the STA task, which models next-active-object anticipation as a fixed-set prediction problem based on object queries. * We propose a joint learning strategy based on fixed and learnable object queries, which are extracted as ROIs from object detections and learned based on the global context of the video, respectively: relevant detections for the next-active-object guide the model to anticipate object-specific future actions. * The proposed method also exploits the motion dynamics of objects in sampled frames to model background information, in terms of object trajectories, along with foreground motion extracted from RGB features, to better represent the human-object interaction.
* We provide next-active-object annotations for the EpicKitchen-100 [4] dataset in terms of NAO location _w.r.t._ the last observed frame, which can be used to further advance research in egocentric video analysis and action anticipation.

## 2 Related Work

Existing action anticipation methods in egocentric videos have primarily focused on utilizing features extracted from video clips, but they have often overlooked the importance of objects and their interactions. Below, we discuss the existing works on the next-active-object and overall action anticipation methods in first-person videos, since in this paper we aim to show that leveraging the next-active-objects would improve the action anticipation task. **Next-active-object.** In the egocentric vision literature, _active_ and _passive_ objects were first introduced by Pirsiavash and Ramanan [32], who defined active objects as those the first person is interacting with, while passive objects stand for the contrary. Dessalene et al. [7] adopted the description of [32] and proposed a method to predict _next-active-objects_; however, they limited their objective only to objects that are contacted by the first person's _hand_. Consequently, this approach requires the hands and the next-active-objects to be visible in the frames, which might be a significant limitation in practical applications. Furnari et al. [11] utilized object tracking to detect the next-active-objects, but were limited to predicting them only in _one_ future frame. Other works [20, 24] proposed to identify the objects either in a single image (without analyzing the spatiotemporal data) or by predicting the future hand motion. Both fall short of explicitly exploiting future-active-object information to predict the future action. Recently, Thakur et al. [38] proposed a transformer-based approach that applies collaborative modeling of RGB and object features to anticipate the location of the next-active-object(s) several frames ahead of the last observed frame. **Action Anticipation.** This task stands for forecasting the future actions of a person given an egocentric video clip including the past and current frames. Several approaches in this line have focused on learning the scene features with Convolutional Neural Networks (CNNs), e.g., by modeling the hand-object contact points [23]. Others aggregated the past contextual features [10, 12, 36], and a few focused on modeling the future interaction of consecutive frames [41]. With the emergence of Vision Transformers [9, 28], researchers have started to investigate the utility of transformers in their work. For instance, [13] proposed causal modeling of video features, introducing sequence modeling of frame features to decode the interactions in consecutive future frames. Wu et al. [40] proposed a long-term understanding of videos using Multiscale Transformers by hierarchically attending to previously cached memories. In a recent work, Zhang et al. [42] proposed fusing object information along with RGB frame features for a better video context representation, in addition to investigating the impact of the audio modality for action anticipation. Our work distinguishes itself from the related work by addressing action anticipation in egocentric videos with the prediction of the next-active-objects.

Figure 2: Our NAOGAT model first extracts feature information and object detections from a set of frames within an observed clip segment by means of a backbone network and an object detector; object detections are then transformed into object embeddings using an MLP network. The frame features are then concatenated with object embeddings to be sent to the transformer encoder after appending with spatial positional encoding.
The encoder then extracts foreground motion (video memory) and global context features, which are used in two separate decoders to perform object-centric and motion-centric predictions. For the object decoder, detections from the last observed frame and learnable embeddings are used to create object queries to perform fixed-set predictions for the NAO class label and its bounding box using a transformer decoder. In the last stage, we leverage object dynamics to extract background motion in terms of object trajectories for detected objects in sampled frames. We then use the object decoder's outputs with the combined frame representation of video memory and object dynamics to perform predictions for motion-related outputs, such as future action and time to contact (TTC).

## 3 Method

As already mentioned, the Short-Term Anticipation (STA) task aims at predicting the next human-object interaction happening after a certain (unknown) time \(\delta\), named the Time To Contact (TTC), relying on evidence up to time \(T\). Thus our model's input is a \(T\)-frame video sequence \(V=\{v_{i}\}_{i=1}^{T}\), where \(v_{i}\in\mathbb{R}^{C\times H_{\alpha}\times W_{\alpha}}\) is an RGB frame, while it must output 4 unknowns at time \(T\), following the protocol introduced in [15]: the NAO noun class (\(\hat{n}\)), its location, i.e. bounding box (\(\hat{b}\)), a verb depicting the future action (\(\hat{v}\)) and the time to contact (\(\hat{\delta}\)), which estimates how many seconds in the future the interaction with the object will begin.

### Method Overview

We now introduce our model architecture, as illustrated in Fig. 2, which is designed to predict the class and location of the next-active-object and the time required to make contact with the object (the TTC \(\delta\)) as well as the future action, based on a given observed clip. The proposed method comprises 4 main modules. First, a _feature extractor_ (Fig. 2(a), Sec. 3.2) operates on frames from a sampled video clip to extract RGB and object detection features. This is followed by an _Encoder Block_ (Fig. 2(b), Sec. 3.3) that operates on the combined features to facilitate the exchange of information across frames. Following the encoder, two separate head architectures - the _NAO Block_ (Fig. 2(c), Sec. 3.4) and the _Motion Block_ (Fig. 2(d), Sec. 3.5) - are used to predict the next-active-object information and the future action, respectively. Our model employs object queries from object detection in the last observed frame to locate and identify the next-active-object and is inspired by the direct set prediction problem [2], which involves predicting a fixed set of objects and modeling their relationship. In addition, we leverage object dynamics to extract background motion in a video clip. This involves incorporating object trajectories for detected objects in sampled frames within the _Motion Block_ and utilizing attended frame features from the encoder module to model the relationship with NAO priors (Fig. 2-(c)), in order to predict future action and TTC. An overview of our model architecture is shown in Fig. 1, which is described in detail in Fig. 2. In the following, we describe each model component in detail, followed by training and implementation details.
### Feature extractor

For a given video clip, we sample a set of \(T\) frames \(V=\{v_{i}\}_{i=1}^{T}\), where \(v_{i}\in\mathbb{R}^{C\times H_{\alpha}\times W_{\alpha}}\), which are fed to _i)_ a backbone network for feature extraction and _ii)_ a pre-trained object detector (Fig. 2-(a)). _i)_ While a number of video-based backbone architectures have been proposed [3, 9, 18, 28, 40] to extract frame-level feature representations from a given video clip, for the task of STA a suitable spatial-temporal encoder is required, able to extract both static appearance information (_e.g.,_ object location, size) and motion cues. The Video Swin Transformer [28] is a recently proposed spatial-temporal transformer architecture based on the shifted-windows architecture of the Swin Transformer [25], adapted to the video domain. Video Swin contains just a single temporal downsampling layer and can be easily adapted to output per-frame feature maps, which are essential for us to localize and identify the next-active-objects while reasoning on the whole sequence. Specifically, we adopt the Swin-T [28] architecture as our backbone. The frames \(V\) are given in parallel to the Video Swin architecture to extract frame features \(f_{I}\in\mathbb{R}^{T\times H\times W\times C^{\prime}}\), where \(T\), \(H\) and \(W\) denote the temporal length of the video clip, and the height and width of the feature maps, respectively. _ii)_ In addition, a Faster R-CNN [35] based object detector, pre-trained on Ego4D [15], is used to extract the object detections from the sampled frames, in terms of bounding box coordinates and a confidence score, resulting in a \((4+1)\)-dimensional vector. We limit the number of bounding boxes to a fixed number \(Q\) to maintain a consistent number of detections across each frame. If there are fewer detections than \(Q\), then dummy coordinate and score values corresponding to no detection are appended. An MLP then processes the \(5\)-d vectors into object embeddings \(f_{D}\in\mathbb{R}^{T\times Q\times D}\). The visual features are also projected to the shared dimension \(D\) of \(f_{D}\), using a 2D convolution layer with kernel size 1, giving \(f_{I}\in\mathbb{R}^{T\times H\times W\times D}\). Finally, the features from each modality are flattened and _separately_ concatenated along the temporal dimension, producing a set of \(T_{D}=\{t_{di}\}_{i=1}^{T}\) multimodal embeddings, where \(t_{di}\in\mathbb{R}^{(H\times W+Q)\times D}\).

### Transformer Encoder

In the next step, according to Fig. 2-(b), the concatenated multimodal embeddings \(T_{D}\) are simultaneously passed to a Transformer Encoder [2] after appending spatial positional encoding. The Transformer Encoder blocks allow exchanging frame-level information within inter-frame features while maintaining the same dimension. The output of the encoder is \(Z\), where \(Z\in\mathbb{R}^{T\times(H\times W+Q)\times D}\), which is the combined feature representation across frames and object detections. It is split into 2 parts: 1) Global-context memory, \(z_{LT}\), where \(z_{LT}\in\mathbb{R}^{(H\times W+Q)\times D}\) is extracted from the last frame of \(Z\); 2) Video-only memory, \(\{z_{i}\}_{i=1}^{T}\) where \(z_{i}\in\mathbb{R}^{H\times W\times D}\), which aims to capture foreground motion cues such as _hands in first-person vision_ [13].
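A condensed PyTorch sketch of the token construction and encoder stage just described is given below. It is not the authors' implementation: the Video Swin backbone and the Faster R-CNN detector are replaced by random tensors, the dimensions are illustrative choices, the positional encoding is a zero-initialized placeholder, and letting all frames' tokens attend jointly is only one simple reading of the encoder stage.

```python
# Condensed sketch of the multimodal token construction and transformer
# encoder described above. Backbone/detector outputs are faked with random
# tensors of plausible shapes; D, Q and the feature map size are assumptions.
import torch
import torch.nn as nn

T, Q, D = 16, 10, 256          # frames, detections per frame, shared dim
Cp, H, W = 384, 7, 7           # per-frame feature map from the backbone

frame_feats = torch.randn(T, Cp, H, W)      # stand-in for Video Swin output
detections  = torch.randn(T, Q, 5)          # (x1, y1, x2, y2, score) per box

proj = nn.Conv2d(Cp, D, kernel_size=1)      # project visual features to D
box_mlp = nn.Sequential(nn.Linear(5, D), nn.ReLU(), nn.Linear(D, D))

f_I = proj(frame_feats)                     # (T, D, H, W)
f_I = f_I.flatten(2).transpose(1, 2)        # (T, H*W, D)
f_D = box_mlp(detections)                   # (T, Q, D)

tokens = torch.cat([f_I, f_D], dim=1)       # (T, H*W + Q, D) per-frame tokens
pos = nn.Parameter(torch.zeros(1, tokens.shape[1], D))   # learned spatial pos. enc.
tokens = tokens + pos

# Encoder: here all T * (H*W + Q) tokens attend to each other, one simple way
# to let information flow across frames.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True), num_layers=3)
Z = encoder(tokens.reshape(1, -1, D)).reshape(T, -1, D)   # (T, H*W + Q, D)

z_last  = Z[-1]                              # global-context memory (last frame)
z_video = Z[:, : H * W, :]                   # video-only memory (frame features)
print(Z.shape, z_last.shape, z_video.shape)
```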
The Global-context memory and Video-only memory are then used by the NAO and Motion blocks to find the instances that correspond to the possible next-active-object and to anticipate the future action, respectively.

### NAO Block

The STA task requires anticipating the location of the next-active-object w.r.t. the _last_ frame observed by the model. Therefore, as shown in Fig. 2-(c), we only use the features corresponding to the last frame from the transformer encoder, namely \(z_{T}\), as input to the NAO block, along with \(N_{q}\) object queries for the frame of interest. We define our object queries as the regions of interest (ROIs) extracted by the object detector, i.e., a feature map for each detection. If there are not enough detections, _i.e.,_ the number of detections is less than \(N_{q}\), then we append learnable tokens for the rest of the queries. Our object decoder follows the standard architecture of the transformer decoder [2], transforming \(N_{q}\) embeddings of size \(D\) using multi-headed attention mechanisms. The \(N_{q}\) object queries are decoded by using \(z_{T}\) as key/value pairs in the multiple multi-head attention layers. The decoded features, \(z_{NAO}\in\mathbb{R}^{N_{q}\times D}\), are then used to predict bounding box coordinates (\(\hat{b}\)) and class labels (\(\hat{n}\)) by an additional MLP block, resulting in \(N_{q}\) final predictions for the next-active-object. The decoder's primary function is to attend to objects (detected or learned) in the last observed frame based on the global context of a video clip, resulting in the prediction of a possible next-active-object and its corresponding object label.

### Motion Block

**Object Dynamics.** We propose to integrate the video frame features from the transformer encoder with the object dynamics of detected objects in the video clip, in order to better estimate the time required to approach the next-active-object predicted by our object decoder (Sec. 3.4). As shown in Fig. 2-(d), object dynamics refer to a proxy for the trajectories of background objects in the video clip. [18] previously used object dynamics to enhance the frame representation for effective motion information modeling in videos. However, their approach requires object region information and multiple stackings of the feature representation block in a transformer encoder for the action recognition benchmark. In contrast, we treat object motion dynamics as a separate module for extracting object traversals. Object trajectories are the bounding box movements across the frames in the sampled video clip. We interpret these trajectories as background motion because they correspond to passive motion in the scene, and combine them with transformer encoder outputs to model human-object motion features. This approach has proven useful in estimating the speed of interaction and predicting motion-centric information. The Object Dynamics block takes as input the object detection box locations \(od_{i}\) for a frame \(i\) and outputs spatial-temporal tokens, \[\hat{o}_{MD0},\dots,\hat{o}_{MDT}=\textit{OMD}(od_{0},\dots,od_{T}) \tag{1}\] where \(\hat{o}_{MDi}\in\mathbb{R}^{H\times W\times D}\). In \(OMD\), each object detection is first expanded from \(T\times Q\times 4\) into \(T\times Q\times D\) tokens using an MLP.
These tokens are flattened, used to perform a self-attention operation, and projected onto a spatial-temporal dimension \(THW\times D\) using a bilinear interpolation sampler operation [19] to output the object trajectories \(\hat{o}_{MDi}\) for the input frames. This module provides more detailed information on the object motion, _i.e.,_ the background motion of the frames. **Motion Decoder.** It was empirically observed that a single Object Decoder (Sec. 3.4) leads to the dropping of motion information across the frames, resulting in very poor performance for future action prediction. For this purpose, we decided to use a separate decoder for motion-related predictions (verb and TTC). Inspired by [13], we additionally combine frame features with the object motion dynamics features to model the foreground motion from the video memory (Sec. 3.3) and the background motion from the object dynamics (Sec. 3.5) at the frame level. \[z_{0}^{\prime},\dots,z_{T}^{\prime}=MLP(LN(z_{0},\dots,z_{T}\bigoplus\hat{o}_{MD0},\dots,\hat{o}_{MDT})) \tag{2}\] Here, the object motion dynamics features \(\hat{o}_{MDi}\) are added to the encoder features \(z_{i}\) along the spatial and temporal dimensions _T, H, W_, where \(\bigoplus\) denotes such element-wise summation. This is followed by a Layer Norm (LN) and an MLP. In addition, to influence our future action prediction based on our next-active-object prediction, we add the object decoder embeddings \(z_{NAO}\) to the last observed frame before feeding the sequence to the decoder. \[\hat{z}_{1},\hat{z}_{2},\dots,\hat{z}_{T+1}=D(z_{0}^{\prime},z_{1}^{\prime},\dots,z_{T}^{\prime}+z_{NAO}) \tag{3}\] We implement \(D\) using the masked transformer decoder, as done in popular approaches such as [33]. We feed the modified input features to the masked decoder after appending temporal positional encoding. The masking ensures that the model attends to specific parts of the input while performing the prediction for the next consecutive position. This helps our model understand the interaction of the person and the surrounding motion. The additional input of \(z_{NAO}\) helps to refine the future action prediction. The design differs considerably from [13], since we model the background and foreground motion in a combined fashion, with additional next-active-object priors added to the last observed frame features before the causal modeling. The decoder network \(D\) is designed to produce attentive features corresponding to the future frames using the object motion dynamics and also the next-active-object information in the last observed frame to anticipate the future action. We use the future frame feature \(\hat{z}_{T+1}\) to predict the future action label \(\hat{v}\) and the TTC \(\hat{\delta}\) corresponding to the next-active-object obtained from the object detector, using a feed-forward network.

### Training

Let us denote by \(y\) the set of ground-truth objects, and by \(\hat{y}=\{\hat{y}_{i}\}_{i=1}^{N}\) the set of \(N\) predictions, corresponding to the \(N\) object queries. Following the matching procedure of [2], we identify the one-to-one matching between the predictions and the ground-truth labels using the _Hungarian loss_ over all matched pairs. **Bounding box loss.** The major difference between us and [2] is that we aim to learn bounding boxes based on some initial guesses, rather than only performing the predictions directly.
The predicted bounding boxes are regressed using a combination of the L1 loss and the generalized IoU loss, and are defined as: \[\mathcal{L}_{box}=\lambda_{iou}\mathcal{L}_{iou}(b_{i},\hat{b}_{\sigma(i)})+\lambda_{L1}||b_{i}-\hat{b}_{\sigma(i)}||_{1} \tag{4}\] where \(\lambda_{iou},\lambda_{L1}\in\mathbb{R}\) are hyper-parameters. **Classification losses.** The second loss, denoted by \(\mathcal{L}_{noun}\) and \(\mathcal{L}_{verb}\), is a cross-entropy loss that supervises the prediction of labels for the next-active-object and the future action: \[\mathcal{L}_{verb/noun}(\hat{y}_{i},y_{i})=-\sum_{t=0}^{N}y_{i}^{t}\cdot\log(\hat{y}_{i}^{t}) \tag{5}\] **Regression and feature loss.** The regression loss, denoted by \(\mathcal{L}_{ttc}\), is the smooth L1 loss [14] and is used to train the model to regress the time-to-contact prediction. Finally, we also use a feature loss \(\mathcal{L}_{feat}\), defined below in Eq. (6) following [13], which aims at leveraging the predictive structure of the motion decoder (Sec. 3.5): the decoder is basically trained to predict future frame features given frames up to time \(t\) only. \[\mathcal{L}_{feat}=\sum_{t=0}^{N}||\hat{z}_{t+1}-z_{t+1}^{\prime}||_{2}^{2}, \tag{6}\] In the end, all losses are combined to produce the overall loss: \[\mathcal{L}=\mathcal{L}_{box}+\lambda_{2}\mathcal{L}_{noun}+\lambda_{3}\mathcal{L}_{verb}+\lambda_{4}\mathcal{L}_{ttc}+\mathcal{L}_{feat} \tag{7}\] where \(\lambda_{2},\lambda_{3},\lambda_{4}\in\mathbb{R}\) are hyperparameters.

## 4 Experiments

### Datasets

We use the following datasets to validate the effectiveness of our method quantitatively and qualitatively. **Ego4D**[15] is currently the largest first-person dataset available, consisting of 5 splits covering distinct tasks and a total of 3,670 hours of videos across 74 different locations. For the next-active-object prediction and STA task, we use the "forecasting split", which contains over 1000 videos and is annotated at 30 fps for the STA task. The dataset annotations include the next-active-objects in the last observed frame, which is a unique feature of that dataset w.r.t. STA. Our goal is to predict the noun class (\(\hat{n}\)), bounding box (\(\hat{b}\)), the verb depicting the future action (\(\hat{v}\)), and the Time to Contact (TTC) (\(\delta\)) for a given video clip. For comparison on the Ego4D dataset, we apply the existing methods, which are designed for action anticipation-based tasks, by confining them to only predict the next-active-object class label (\(\hat{n}\)), the verb depicting the future action (\(\hat{v}\)), and the TTC (\(\delta\)), since the compared methods are not designed to predict bounding boxes. **Epic-Kitchens-100**[4] consists of about 100 hours of recordings with over 20M frames comprising daily activities in kitchens, recorded with 37 participants. It includes 90K action segments, labeled with 97 verbs and 300 nouns (i.e., manipulated objects). Since the dataset does not provide annotations for the next-active-object, we exploit the object detector provided by [5] and also the annotations provided in [38] to curate labelings composed of bounding boxes, i.e., locations of next-active-objects in the last observed frame. Note that, to adapt this dataset to the next-active-object detection task, the object that is used in the future action must be visible in the last observed frame.
However, based on our annotations, we found that for 12.5\(\%\) of the training data in EK-100 the next-active-object annotations are absent, i.e., the future active object is not visible in the last observed frame.

### Implementation Details

In order to pre-process the input video clips, we randomly scale the height of the clips between 248 and 280 pixels and take 224-pixel crops for training. We sample 16 frames at 4 frames per second (FPS). We adopt the network architecture of Swin-T [26, 27] to serve as the backbone of our network to extract the video features from the sampled clip. However, we only utilize the outputs up to the first three blocks of the Video Swin Transformer [25], along with the down-sampling of each block, to extract the _per-frame_ feature maps, which are required later to predict the bounding boxes. We also use a 3-layer multi-head transformer encoder and decoder, which operate on a fixed 256-D representation. We train our end-to-end model with the SGD optimizer using a learning rate of \(1e-4\) and a weight decay of \(1e-6\) for 50 epochs.

### Evaluation Metrics

We evaluate our models on the Ego4D [15] dataset using the evaluation metrics defined by the dataset creators for short-term anticipation tasks. These metrics include the Average Precision of four different combinations of the next-active-object-related predictions: noun class (\(\hat{n}\)), bounding box (\(\hat{b}\)), future action (\(\hat{v}\)), and time to contact (\(\delta\)). We use the top-1 accuracy to evaluate the performance of the future action (\(\hat{v}\)) and next-active-object label (\(\hat{n}\)) predictions. For bounding boxes (\(\hat{b}\)) and time to contact (\(\delta\)), the predictions are considered correct if the predicted boxes have an Intersection over Union (IoU) value greater than or equal to 0.5 and the absolute difference between the predicted and ground-truth time to contact is less than or equal to 0.25 seconds (\(|\hat{y}_{ttc}-y_{ttc}|\leq 0.25\)). In the case of combined predictions involving two or more unknowns, the prediction is deemed correct only if all the unknowns are predicted correctly. For training, we keep the values of all \(\lambda\) at 1, except \(\lambda_{4}\), which is set to 10 following [15]. For comparison of models on the EK-100 [4] dataset, we adhere to the metric commonly used in recent action-anticipation works [12, 40, 13].

### Comparison with State-of-the-art

For the Ego4D dataset [15], we compare our model with methods restricted to only predicting the future action (\(\hat{v}\)) and TTC (\(\delta\)) of a given sample clip, since the methods we compare with are action anticipation methods that have not been designed to predict bounding boxes. Table 1 reports the results on the Ego4D [15] dataset. We observe that our model achieves better performance than the object detector [35] that is pre-trained on Ego4D, in terms of predicting the NAO's class label and bounding box location, as evidenced by the higher \(AP_{\hat{b}}\) and \(AP_{\hat{b}+\hat{n}}\) scores. This superiority is also visually evident in Fig. 3, where the performance of our object decoder is shown to refine the detected objects for the NAO and even identify objects that were not detected by the object detector. Moreover, our model outperforms all the other baseline methods across all other evaluation metrics for the STA task. In the case of the EpicKitchen-100 dataset [4], we compare our proposed method against the SOTA for the **action anticipation task**, as described in [5, 12].
It is important to note that _the action anticipation task differs significantly from the STA task_, where the concept of next-active-object is not considered. However, we compute our own annotations to adapt the Action Anticipation task for STA-based scenarios, as discussed in Sec. 4.1. Since our model and the STA task require the identification of the next-active-object (and its visibility/presence) in the last observed frame, this is reflected in our results due to the limitations of the dataset. The results of our experiments on the EK-100 dataset are presented in Table 2. We achieve state-of-the-art performance on the "Unseen Set" which only contains a small fraction (\(6\%\)) of samples where no Next-Active-Object (NAO) is detected in the last observed frame. It is to be noted that NAO \begin{table} \begin{tabular}{|l||c c c c c c c c c|} \hline Models & \(AP_{\hat{b}}\) & \(AP_{\hat{b}+\hat{n}}\) & \(AP_{\hat{b}+\hat{n}+\hat{s}}\) & \(AP_{\hat{b}+\hat{n}+\hat{s}}\) & \(AP_{\hat{b}+\hat{s}}\) & \(AP_{\hat{b}+\hat{s}}\) & \(AP_{\hat{b}+\hat{s}}\) & \(AP_{\hat{b}+\hat{s}}\) \\ \hline \hline Slowfast [15] & 40.5 & 24.5 & 5.0 & 4.9 & 1.5 & 8.4 & 8.16 & 1.9 \\ Slowfast (with Transformer backbone) & 40.5 & 24.5 & 4.5 & 4.37 & 1.73 & 7.5 & 8.2 & 1.3 \\ AVT [13] & 40.5 & 24.5 & 4.39 & 4.52 & 1.71 & 7.12 & 8.45 & 1.15 \\ ANAACTO [38] & 40.5 & 24.5 & 4.55 & 5.1 & 1.91 & 7.47 & 8.9 & 1.54 \\ MeMVIT [40] & 40.5 & 24.5 & 4.95 & 5.89 & 1.34 & 9.27 & 10.04 & 2.11 \\ \hline Ours & **45.3** & **27.0** & **9.0** & **6.54** & **2.47** & **16.6** & **12.2** & **4.18** \\ \hline \end{tabular} \end{table} Table 1: Results of our model and other baseline methods on Ego4D [15] dataset for different output targets, bounding box (\(\hat{b}\)), next-active-object label (\(\hat{n}\)), future action (\(\hat{v}\)) and the time to contact with the object (\(\delta\)) based on their Average Precision (\(AP\)). \begin{table} \begin{tabular}{|l|c|c||c c c c c|c c c|} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Params} & \multirow{2}{*}{Init} & \multicolumn{3}{c|}{Unseen} & \multicolumn{3}{c|}{Tail} & \multicolumn{3}{c|}{Overall} \\ \cline{3-13} & & Action & Verb & Noun & Action & Verb & Noun & Action & Verb & Noun \\ \hline \hline Chance & - & - & 0.5 & 14.4 & 2.9 & 0.1 & 1.6 & 0.2 & 0.2 & 6.4 & 2.0 \\ TempAgg (RGB) [36] & - & [6] & 12.2 & 27.0 & 23.0 & 10.4 & 16.2 & 22.9 & 13.0 & 24.2 & 29.8 \\ RULSTM [12] & - & [35] & - & - & - & - & - & 7.8 & 17.9 & 23.3 \\ RULSTM [12] & - & [6] & 13.1 & 28.8 & 23.7 & 10.6 & 19.8 & 21.4 & 13.25 & 27.5 & 29.0 \\ AVT (RGB) [13] & 393 & [6] & - & - & - & - & - & - & 14.9 & 30.2 & 31.7 \\ AVT + [13] & - & [6] & 11.9 & **29.5** & 23.9 & **14.1** & 21.1 & 25.8 & **15.9** & 28.2 & 32.0 \\ MeMViT [40] & 59 & [21] & 9.8 & 27.5 & 21.7 & 13.2 & **26.3** & **27.4** & 15.1 & **32.8** & **33.2** \\ \hline Ours & 23.5 & - & **14.3** & 29.3 & **27.8** & 4.4 & 13.2 & 13.8 & 10.7 & 25.3 & 27.9 \\ \hline \end{tabular} \end{table} Table 2: Results of our model and other baseline methods on EK-100 [4] dataset on validation set. “Overall” comprises of samples combining the Unseen and Tail set plus also consisting of _seen_ samples from the training set. GAT is the lightest _w.r.t._ other compared models. However, our model's performance on the "Tail Set" is suboptimal, likely due to the fact that the NAO is not visible in the last observed frame for around \(22\%\) of the clips. 
This limitation causes confusion in our model, which relies on the visibility of the NAO in the last frame, and impacts the overall results for the "Overall Set," which comprises the "Unseen Set", "Tail Set," and the training set's "seen" samples. To investigate the impact of the "Tail Set" on the "Overall Set" accuracy, we remove clips corresponding to tail classes for which the NAO is not present in the last observed frame and observe improvements of \(+5.2\uparrow\) (16.9\(\%\)), \(+4.0\uparrow\) (32.4\(\%\)), and \(+7.6\uparrow\) (35.5\(\%\)) in action, verb, and noun recognition, respectively. We report additional qualitative results of our model on both datasets in our supplementary material.

### Ablation study

We conducted an ablation study on the Ego4D dataset to analyze the impact of different modules of the proposed method in Table 3. We evaluate the performance of our complete model in comparison to models that omit either the Object Motion Dynamics (OMD) module (Sec. 3.5) alone, or both the OMD module and the Object Decoder (OD) module (Sec. 3.4) together. Our findings indicate that the Object Decoder module improves the prediction of future verbs (\(\hat{v}\)), resulting in a higher \(AP_{\hat{b}+\hat{v}}\). This suggests that having prior knowledge of the future active object can support anticipating the future action. On the other hand, the OMD module plays a crucial role in estimating the time needed to make contact with the next-active-object and initiate an action, resulting in a significant improvement in \(AP_{\hat{b}+\hat{\delta}}\) and other related metrics, since OMD provides additional background motion information which helps the model greatly in predicting the TTC. These findings suggest that both modules are essential for accurately anticipating future actions in first-person videos. Additionally, we also investigated the impact of the backbone network on our model's performance. For this purpose, we replace our Swin-T backbone with the ResNet50 [16] architecture. Using ResNet50 leads to a significant drop in performance across all metrics.

## 5 Conclusion

We have investigated the problem of short-term action anticipation using the next-active-objects. First, we discussed the formulation of the STA task. We then presented a new vision-transformer-based model, which learns to encode human-object interactions with the help of an object detector and decode the next-active-object location in the last observed frame. We then demonstrated the importance of next-active-object information to predict the future action and the time to start the action, using additional background information in the form of object motion dynamics. We proved the proposed method's effectiveness by comparing it against relevant strong anticipation-based baseline methods. In future work, we will investigate the use of an object tracker with other human-centered cues such as gaze and the appearance of objects over time. We will also investigate the effect of action recognition on _NAO_ identification and localization.

Figure 3: The top row (a) shows the "last observed frame" and all the object detections provided by the object detector [35]. The bottom row (b) depicts the output from our motion decoder. It can be observed that our model learns from past observations and selects the best possible object(s) for the next-active-object selection in the frame. Besides, it can be seen that it is even able to identify objects which were not detected by the object detector (the 3\({}^{rd}\) and 7\({}^{th}\) columns are the clearest examples).
\begin{table} \begin{tabular}{|l||c c c c c c c c|} \hline Model & \(AP_{\hat{b}}\) & \(AP_{\hat{b}+\hat{n}}\) & \(AP_{\hat{b}+\hat{n}+\hat{\delta}}\) & \(AP_{\hat{b}+\hat{n}+\hat{v}}\) & \(AP_{\hat{b}+\hat{n}+\hat{v}+\hat{\delta}}\) & \(AP_{\hat{b}+\hat{\delta}}\) & \(AP_{\hat{b}+\hat{v}}\) & \(AP_{\hat{b}+\hat{v}+\hat{\delta}}\) \\ \hline \hline Ours w/o _OMD_, _OD_ & 45.3 & 26.7 & 4.78 & 5.55 & 1.05 & 7.89 & 8.91 & 1.52 \\ Ours w/o _OMD_ & 45.1 & 27.1 & 4.48 & 6.2 & 1.0 & 7.34 & 10.2 & 1.44 \\ Ours (ResNet50) & 42.7 & 25.2 & 4.3 & 6.0 & 1.1 & 10.6 & 10.1 & 2.0 \\ Ours (Full) & **45.3** & **27.0** & **9.0** & **6.54** & **2.47** & **16.6** & **12.2** & **4.18** \\ \hline \end{tabular} \end{table} Table 3: Ablation study performed on Ego4D [15] to investigate the effect of the backbone, the Object Motion Dynamics (OMD), and the Object Decoder (OD) modules on the motion-based outputs of the model.

**Limitations.** As discussed above, the proposed approach is specifically designed for the Short-Term Anticipation (STA) task, where the next-active-object is assumed to be visible in the last observed frame. Therefore, when applied to the slightly different task of Action Anticipation, our model shows limitations, as it relies on this assumption, which does not necessarily hold in that case. **Broad Impact.** The proposed method can be used in several real-world applications, such as robotics or virtual/augmented reality. If the first person also interacts with other people, and not only with inanimate objects, there might be issues regarding privacy preservation. In such cases, policy reviews should be further considered when using the proposed method.

Leveraging Next-Active Objects for Context-Aware Anticipation in Egocentric Videos: Supplementary Material

This supplementary material presents the qualitative analysis of our model, NAOGAT, on the Ego4D [15] and EpicKitchen-100 [4] datasets. We provide a video depicting the performance of our model as it progresses over the allowed observed segment of a video clip, which is discussed in detail in Sec. 6. In addition, we also provide some visualizations of the next-active-object (NAO) annotations on EpicKitchen-100 [4], depicting the NAO location and class label in the last observed frame for a given video clip. We also describe the annotation pipeline followed to curate the ground-truth data for next-active-object prediction for the Short-Term Anticipation task in Sec. 7.

## 6 Video

We provide additional detail on the performance of our model, NAOGAT, compared with the object detections provided by the object detector pre-trained on Ego4D [15]. We notice a significant improvement in refining the object detections, and also in identifying objects which are not detected by the object detector, to anticipate the location of the NAO. The video shows the performance of NAOGAT applied auto-regressively when fed with a progressively longer video clip. It can be noticed that as the video progresses, the model further refines the predictions based on past observations and predicts the next-active-object bounding box and its class label, along with the future action and the time to contact (TTC) with the object. The video also provides a visualization of future frames which are not observed by the model, illustrating the time taken to make contact with the next-active-object.
## 7 EpicKitchen-100 NAO dataset curation

The Short-Term Anticipation (STA) task involves predicting the location (bounding box, \(\hat{b}\)) and class label \(\hat{n}\) of the next-active-object, as well as the future action \(\hat{v}\) and the time to contact (\(\delta\)) with the NAO, for a given video clip. It is important to note that the NAO must be present and visible in the last observed frame for the task to be valid. Currently, only the Ego4D [15] dataset provides precise annotations for studying this problem. The EpicKitchen-100 dataset [4] offers valuable ground-truth data for the action anticipation [12, 13] task. The dataset includes information on future actions such as "peeling an onion," future verbs like "peel," and associated noun labels of the object involved in the action, such as "onion." This makes the dataset an excellent resource for studying and evaluating models designed to predict future actions. We consider the noun label as the NAO class label for a given clip. However, the dataset lacks annotations for the location of the NAO in the last observed frame. For this purpose, we curated our own annotations for NAO estimation following the pipeline described in Fig. 4 (a schematic sketch of this pipeline, in code form, is given at the end of this supplementary material). To curate ground-truth data for next-active-object prediction for the Short-Term Anticipation task, we first extract the last observed frame from a given clip. Next, we use an object detector [35] pre-trained on the EK-55 dataset [5] to obtain raw object detections for the frame. We then verify whether the ground-truth NAO class label is identified in the raw detections. If a match is found, the corresponding bounding box for that detection is used as the ground-truth annotation for the NAO bounding box (\(\hat{b}\)). However, if the object detector fails to identify any object with the ground-truth NAO label, we use a Hand-Object detector [37] to obtain bounding boxes for the active object [32]. This is because the hand-object detector has been shown to be state-of-the-art in hand-object detection and has been used in the literature [24, 38]. In the event that the Hand-Object detector identifies an active object, we extract the Region of Interest (ROI) for the corresponding detection from the input frame. This ROI is then fed into the object detector [35] used earlier, and we take the top-3 predictions from the detector. These predictions are once again verified against the ground-truth NAO class label to check if they contain the NAO label. If one of the predictions satisfies this criterion, the location of the active object is used as the ground-truth annotation for the NAO location. This pipeline is used only to curate information regarding the location of the NAO, and not the class label of the NAO for a given clip. The class label for the NAO is taken from the annotations provided with EK-100 for action anticipation. The final annotations for the dataset are shown in Fig. 5.

## 8 Limitations of our model for EpicKitchen-100 dataset

It is important to note that EpicKitchen-100 was not curated in alignment with the definition of STA. Specifically, the dataset does not provide annotations for the next-active-object, and it is not mandatory for the NAO to be present in the last frame observed by the model. As discussed in the main paper, our dataset curation method (described in Sec. 7) could not annotate the ground-truth data for the next-active-object in 22% of the "Test Set" of the Validation split, as there were no detected objects in those clips.
Moreover, the EK-100 dataset suffers from a dataset bias: there are _300_ class labels for objects, and similar-looking objects are often classified differently, as shown in Fig. 6. This further confuses the model's identification of objects and impedes its ability to anticipate future actions.

Figure 4: Annotation pipeline for extracting next-active-object ground-truth labels for the EpicKitchen-100 [4] dataset.

Figure 5: NAO annotations for EK-100 as curated from the pipeline described in Fig. 4. The frames correspond to the last observed frame for a given clip, and the detections represent the next-active-object information in terms of NAO location and class label.
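For readers who prefer pseudocode, a compact sketch of the curation logic of Sec. 7 is given below; the detector interfaces are abstracted as callables with assumed return formats, so the function names and signatures are hypothetical and only mirror the steps of Fig. 4.

```python
# Hypothetical sketch of the annotation pipeline in Sec. 7. The two detectors
# are passed in as callables with assumed return formats; nothing here is the
# actual interface of the detectors used in the paper.
def curate_nao_box(last_frame, gt_noun, object_detector, hand_object_detector,
                   crop_roi, top_k=3):
    """Return a bounding box for the ground-truth NAO noun, or None."""
    # Step 1: does the generic object detector already find the GT noun?
    for label, box, score in object_detector(last_frame):
        if label == gt_noun:
            return box

    # Step 2: fall back to the hand-object detector's "active object" box.
    active_box = hand_object_detector(last_frame)        # box or None
    if active_box is None:
        return None

    # Step 3: re-classify the ROI and accept it if the GT noun is in the top-k.
    roi = crop_roi(last_frame, active_box)
    candidates = sorted(object_detector(roi), key=lambda d: d[2], reverse=True)
    if gt_noun in [label for label, _, _ in candidates[:top_k]]:
        return active_box
    return None
```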
2306.14286
$L^2$ to $L^p$ bounds for spectral projectors on the Euclidean two-dimensional torus
We consider spectral projectors associated to the Euclidean Laplacian on the two-dimensional torus, in the case where the spectral window is narrow. Bounds for their L2 to Lp operator norm are derived, extending the classical result of Sogge; a new question on the convolution kernel of the projector is introduced. The methods employed include l2 decoupling, small cap decoupling, and estimates of exponential sums.
Ciprian Demeter, Pierre Germain
2023-06-25T16:29:29Z
http://arxiv.org/abs/2306.14286v2
# \(L^{2}\) to \(L^{p}\) bounds for spectral projectors on the Euclidean two-dimensional torus ###### Abstract. We consider spectral projectors associated to the Euclidean Laplacian on the two-dimensional torus, in the case where the spectral window is narrow. Bounds for their \(L^{2}\) to \(L^{p}\) operator norm are derived, extending the classical result of Sogge; a new question on the convolution kernel of the projector is introduced. The methods employed include \(\ell^{2}\) decoupling, small cap decoupling, and estimates of exponential sums. Key words and phrases:Spectral projectors, \(l^{2}\) decoupling, lattice points in annuli, exponential sums 2000 Mathematics Subject Classification: 11L07, 11P21, 42B15 ###### Contents * 1 Introduction * 2 Caps decomposition of the kernel * 3 Dyadic decomposition of the kernel * 4 The conjecture on the \(L^{p}\) norm of the convolution kernel * 5 Graphical representation ## 1. Introduction ### Eigenfunctions of the Laplacian on the two-dimensional torus We consider the torus \[\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2},\] on which Fourier series are given by \[f(x)=\sum_{k\in\mathbb{Z}^{2}}\widehat{f}_{k}e^{2\pi ik\cdot x},\qquad \widehat{f}_{k}=\int_{\mathbb{T}^{2}}f(x)e^{-2\pi ik\cdot x}\,dx.\] A classical question is to estimate the \(L^{p}\) norms of eigenfunctions of the Laplacian: if \(\varphi\) is such that \(-\Delta\varphi=\lambda^{2}\varphi\) on \(\mathbb{T}^{2}\), and if it is normalized in \(L^{2}\), what is the optimal bound on \(\|\varphi\|_{L^{p}}\), for \(p\geq 2\)? What should be expected is unclear (the question is asked in [4]), but one possibility is that \[\|\varphi\|_{L^{p}}\lesssim_{p}1\quad\text{if $p<\infty$, while}\quad\|\varphi\|_{L^{\infty}}\lesssim_{\epsilon}\lambda^{\epsilon}. \tag{1.1}\] This was proved for \(p=4\) by Cooke [11] and Zygmund [26]; and for \(p=\infty\), the bound \(\lambda^{\frac{C}{\log\log\lambda}}\) follows from the divisor bound in Gaussian integers. Proving optimal bounds for \(\|\varphi\|_{L^{p}}\) for any \(p\in(4,\infty)\) appears to be a very hard problem, which can be relaxed by considering spectral projectors on narrow spectral windows, to which we now turn. ### A conjecture on spectral projectors on narrow windows For \(\lambda>2\) and \(\delta<1\), the spectral projector on the range \((\lambda-\delta,\lambda+\delta)\) for the square root of the Euclidean Laplacian is given through functional calculus by the formula \[P_{\lambda,\delta}=\mathbf{1}_{(\lambda-\delta,\lambda+\delta)}(\sqrt{-\Delta}) \qquad\text{or}\qquad P_{\lambda,\delta}f(x)=\sum_{k\in\mathcal{A}_{\lambda, \delta}}\widehat{f}_{k}e^{2\pi ik\cdot x},\] where \(\mathcal{A}_{\lambda,\delta}\) is the annulus with inner radius \(\lambda-\delta\) and width \(2\delta\): \[\mathcal{A}_{\lambda,\delta}=\{x\in\mathbb{R}^{2},\,\lambda-\delta<|x|< \lambda+\delta\}.\] Two consecutive eigenvalues of \(\sqrt{-\Delta}\) close to \(\lambda\) are at least \(\sim\frac{1}{2}\lambda^{-1}\) apart. Thus, if \(\delta=\frac{1}{4}\lambda^{-1}\), bounding \(P_{\lambda,\delta}\) is equivalent to bounding eigenfunctions of the Laplacian. In the present paper, we consider the following conjecture, which focuses on the case where the spectral window is at least slightly larger than \(\lambda^{-1}\). 
**Conjecture A** ([15]).: _If \(p\geq 2\), the operator norm of \(P_{\lambda,\delta}\) satisfies for any \(\kappa>0\)_ \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim_{\kappa,p}\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}+(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}\qquad\text{if}\quad\delta>\lambda^{-1+\kappa} \tag{1.2}\] _or in other words_ \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim_{\kappa,p}\left\{\begin{array}{ll}(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}&\text{if }p\leq 6\\ (\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}&\text{if }p\geq 6\text{, }\delta\leq\lambda^{-1+\frac{8}{p+2}}\\ \lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}&\text{if }p\geq 6\text{, }\delta\geq\lambda^{-1+\frac{8}{p+2}}.\end{array}\right.\] _The conjecture is said to be satisfied with \(\epsilon\) loss if_ \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim_{\kappa,p,\epsilon}\lambda^{\epsilon}\left[\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}+(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}\right]\qquad\text{if}\quad\delta>\lambda^{-1+\kappa}. \tag{1.3}\] _Remark 1.1_.: The justification for this conjecture can be found in [15], where two basic examples are considered: the Knapp example, and the spherical example. They lead to the two terms on the right-hand side of (1.2). _Remark 1.2_.: Combining the conjecture with the guess (1.1), it might be the case that the estimate (1.2) is true for any \(\kappa\geq 0\) and \(p\in[2,\infty]\), with an implicit constant \(C(p,\kappa)\) which only blows up as \((p,\kappa)\to(\infty,0)\). ### Known results on Conjecture A Conjecture A is known to hold in a number of cases: * If \(\delta=1\), the conjecture corresponds to the fundamental result of Sogge [23], which holds on any compact Riemannian manifold (see also [1] for a recent extension to logarithmically small spectral windows for general nonpositively curved manifolds, including in particular the torus). * If \(p=4\), the conjecture was proved for the full range \(\lambda^{-1}<\delta<1\) by Bourgain-Burq-Zworski [6]. * If \(p\leq 6\), the conjecture with \(\epsilon\) loss is a consequence of the \(\ell^{2}\) decoupling of Bourgain-Demeter [9], as was observed in [15]. * If \(p=\infty\), the conjecture for the full range \(\lambda^{-1}<\delta<1\) with \(\epsilon\) loss follows immediately from the bound \(\lambda^{\epsilon}\) for the \(L^{\infty}\) norm of eigenfunctions. * If \(p=\infty\), the conjecture without \(\epsilon\) loss would be a consequence of the estimate \(N(\lambda)=\pi\lambda^{2}+O(\lambda\delta)\) for the number \(N(r)\) of integer points in the disc with radius \(r\). This corresponds to the Gauss circle problem, for which the best current bound, due to Huxley [20], allows \(\delta>\lambda^{-\frac{77}{208}+\epsilon}\), with \(\frac{77}{208}\sim 0.37\). Note however that Conjecture A is expected to hold down to \(\delta=\lambda^{-1+\kappa}\), while the estimate \(N(\lambda)=\pi\lambda^{2}+O(\lambda\delta)\) can only be true for \(\delta>\lambda^{-\frac{1}{2}}\). For the two-dimensional Euclidean cylinder, the conjecture is identical, and it has been proved with \(\epsilon\) loss [14]. Finally, this conjecture has also been considered in higher dimensions, for which we refer to [4, 5, 7, 8, 9, 10, 15, 16, 18].
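The dichotomy in (1.2) can be explored numerically for moderate \(\lambda\). The following Python sketch is not part of the argument (the helper names are ours, and the brute-force enumeration is only practical for small \(\lambda\)); it counts the lattice points of \(\mathcal{A}_{\lambda,\delta}\) and evaluates the two terms on the right-hand side of (1.2), whose crossover occurs at \(\delta=\lambda^{-1+\frac{8}{p+2}}\).

```python
import numpy as np

def lattice_points_in_annulus(lam, delta):
    """Brute-force enumeration of the lattice points of A_{lambda,delta}."""
    r = int(np.ceil(lam + delta))
    k = np.arange(-r, r + 1)
    kx, ky = np.meshgrid(k, k)
    norms = np.hypot(kx, ky)
    mask = (norms > lam - delta) & (norms < lam + delta)
    return kx[mask], ky[mask]

def rhs_of_conjecture_A(lam, delta, p):
    """The two terms on the right-hand side of (1.2)."""
    term1 = lam ** (0.5 - 2.0 / p) * delta ** 0.5      # lambda^{1/2 - 2/p} delta^{1/2}
    term2 = (lam * delta) ** (0.25 - 0.5 / p)          # (lambda delta)^{1/4 - 1/(2p)}
    return term1, term2

lam, p = 500.0, 8.0
for delta in [lam ** (-0.1), lam ** (-1.0 + 8.0 / (p + 2.0)), lam ** (-0.8)]:
    kx, _ = lattice_points_in_annulus(lam, delta)
    t1, t2 = rhs_of_conjecture_A(lam, delta, p)
    print(f"delta = lam^{np.log(delta) / np.log(lam):+.2f}: "
          f"{kx.size} lattice points, terms of (1.2): {t1:.3g} and {t2:.3g}")
```

At the middle choice of \(\delta\) the two terms are comparable, in line with the piecewise description above.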
### A new conjecture The convolution kernel \[\Phi_{\lambda,\delta}=\sum_{k\in\mathcal{A}_{\lambda,\delta}\cap\mathbb{Z}^{2}}e^{2\pi ik\cdot x}\quad\text{is such that}\quad P_{\lambda,\delta}f=\Phi_{\lambda,\delta}*f.\] **Conjecture B**.: _If \(p\geq 2\) and \(\kappa>0\), then if \(\delta>\lambda^{-1+\kappa}\),_ \[\left\|\Phi_{\lambda,\delta}\right\|_{L^{p}}\lesssim_{p,\kappa}\lambda^{1-\frac{2}{p}}\delta+(\lambda\delta)^{\frac{1}{2}}\] _or in other words,_ \[\left\|\Phi_{\lambda,\delta}\right\|_{L^{p}}\lesssim\left\{\begin{array}{ll}(\lambda\delta)^{\frac{1}{2}}&\text{if }2\leq p\leq 4\\ (\lambda\delta)^{\frac{1}{2}}&\text{if }p\geq 4\text{ and }\delta<\lambda^{\frac{4}{p}-1}\\ \lambda^{1-\frac{2}{p}}\delta&\text{if }p\geq 4\text{ and }\delta>\lambda^{\frac{4}{p}-1}.\end{array}\right.\] This conjecture is interesting in two respects: first, it is partially equivalent to Conjecture A; and second, it is equivalent to questions on additive energies of subsets of \(\mathbb{Z}^{2}\). See Section 4 for more on these two points. This conjecture is based on the two following observations, which also show that the conjecture is optimal, if true. * A naive counting argument shows that, for a given \(\delta\), there exists in any interval of length \(1\) a \(\lambda\) such that \(\#\mathcal{A}_{\lambda,\delta}\cap\mathbb{Z}^{2}\gtrsim\lambda\delta\). For this choice of \(\delta\) and \(\lambda\), Holder's inequality and Parseval's theorem imply that \[\|\Phi_{\lambda,\delta}\|_{L^{p}}\gtrsim\|\Phi_{\lambda,\delta}\|_{L^{2}}=\left(\#\mathcal{A}^{\prime}_{\lambda,\delta}\right)^{\frac{1}{2}}\gtrsim(\lambda\delta)^{\frac{1}{2}}.\] * Still considering \(\lambda\) and \(\delta\) such that \(\#\mathcal{A}_{\lambda,\delta}\cap\mathbb{Z}^{2}\gtrsim\lambda\delta\), Bernstein's inequality gives that \[\|\Phi_{\lambda,\delta}\|_{L^{p}}\gtrsim\lambda^{-\frac{2}{p}}\|\Phi_{\lambda,\delta}\|_{L^{\infty}}=\lambda^{-\frac{2}{p}}\#\mathcal{A}^{\prime}_{\lambda,\delta}\gtrsim\lambda^{1-\frac{2}{p}}\delta.\] Note that the conjecture cannot hold all the way to \(\kappa=0\) and \(p\geq 2\), since it would imply a uniform bound on the number of lattice points on a circle, which is known to fail. ### Main results Our main results verify the conjectures for various ranges in \((p,\lambda,\delta)\). **Theorem 1.3**.: 1. _Conjecture A holds if_ \(2\leq p<6\)_, or_ \(p\geq 6\) _and_ \(\delta>\min\left(\lambda^{-\frac{1-\frac{6}{p}}{3-\frac{2}{p}}},\lambda^{-\frac{10-\frac{64}{p}}{29-\frac{14}{p}}+\epsilon}\right)\)_._ 2. _Conjecture A holds with_ \(\epsilon\) _loss if_ \(2\leq p\leq 10\)_, or_ \(\delta>\lambda^{-\frac{1}{3}}\)_, or_ \(p=\infty\)_._ This statement follows from combining Theorem 2.5 and Theorem 3.1. Turning to Conjecture B, it is a consequence of Conjecture A if \(\delta>\lambda^{-1+\frac{8}{p+2}}\); thus, the previous theorem gives the validity of Conjecture B in some range. Furthermore, combining Corollary 4.5 and Proposition 4.6 gives the following theorem. **Theorem 1.4**.: _Conjecture B holds with \(\epsilon\) loss if \(2\leq p\leq 6\) or \(\delta\geq\lambda^{-\frac{1}{3}}\)._ Finally, we refer to Section 5 for a graphical representation of the ranges in \((p,\lambda,\delta)\). ### Ideas of the proofs and plan of the paper #### 1.6.1. Decomposition into caps This is the first possible line of attack on Conjecture A, which is carried out in Section 2; here, caps are rectangles which optimally cover the annulus \(\mathcal{A}_{\lambda,\delta}\).
We investigate estimates on functions whose Fourier support is restricted to caps containing a bounded number of lattice points, relying crucially on \(\ell^{2}\) decoupling. Combining these estimates with an estimate on caps with a given number of lattice points leads to our result in that section. #### 1.6.2. Dyadic decomposition of the kernel This approach to Conjecture A is carried out in Section 3. It relies on a dyadic decomposition of the convolution kernel of \(P_{\lambda,\delta}\), which is reminiscent of the original proof of the Stein-Tomas theorem. In the regime where this approach is useful (\(p\) large), an important threshold is the line \(\delta=\lambda^{-\frac{1}{3}}\). Reaching (slightly) smaller \(\delta\) can be achieved with the help of pointwise bounds on exponential sums, whose investigation is a classical topic in analytic number theory. #### 1.6.3. Conjecture B and small caps Section 4 is dedicated to Conjecture B. We show that it is partially equivalent to Conjecture A, and explain its connection with additive combinatorics. In order to make progress on this conjecture for \(p\leq 6\), \(\ell^{2}\) decoupling is not strong enough, the right tool proves to be small cap decoupling. Combining this tool with careful estimates on the distribution of lattice points in \(\mathcal{A}_{\lambda,\delta}\) leads to our main result in that section. ### General curves How much do our results depend on the geometry of the circle? Is it possible to replace it by an ellipse (which would correspond to general tori \(\mathbb{R}^{2}/[\mathbb{Z}e_{1}+\mathbb{Z}e_{2}]\), where \(e_{1}\) and \(e_{2}\) are non-colinear vectors in \(\mathbb{R}^{2}\)), or even by a general curve? We consider a smooth arc or a closed smooth curve, which is denoted by \(\Gamma\) and is compact; the natural generalizations of \(P_{\lambda,\delta}\) and \(\Phi_{\lambda,\delta}\) are given by \[\widetilde{P_{\lambda,\delta}}f=\sum_{k\in\mathcal{N}_{\delta}(\lambda\Gamma) \cap\mathbb{Z}^{2}}\widehat{f}_{k}e^{2\pi ik\cdot x}\quad\text{and}\quad \widetilde{\Phi_{\lambda,\delta}}(x)=\sum_{k\in\mathcal{N}_{\delta}(\lambda \Gamma)\cap\mathbb{Z}^{2}}e^{2\pi ik\cdot x},\] where \(\mathcal{N}_{\delta}(\lambda\Gamma)\) stands for the \(\delta\)-neighborhood of \(\lambda\Gamma\). Nearly all the proofs below remain valid as long as the curvature of \(\Gamma\) does not vanish, and almost all the intermediary estimates proved in this paper still hold. This is the case for the cap counting Lemma 2.1, the \(L^{4}\) estimate Lemma 2.3, the decoupling estimate Lemma 2.4, the exponential sum estimates1 in Section 3, and the small cap estimates in Section 4.3. Footnote 1: To see why these exponential sum estimates still hold, observe that the asymptotics of the Fourier transform of the superficial measure supported on \(\Gamma\) are very similar to (3.1). The only difference is that the phase function \(|\xi|\) is replaced by a function \(\phi(\xi)\), but it remains smooth and \(1\)-homogeneous, see [24], page 360. But the bound on the number of lattice points in \(\mathcal{A}_{\lambda,\delta}\) is lost! For the square torus \(\mathbb{R}^{2}/\mathbb{Z}^{2}\), the divisor bound immediately gives the estimate \(\lambda^{1+\epsilon}\delta\); but for a general torus, or a general curve, such an argument is not available to estimate \(\#\mathcal{N}_{\delta}(\lambda\Gamma)\cap\mathbb{Z}^{2}\), and this bound is much more difficult to obtain. 
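The arithmetic nature of the circle count can be made concrete with a short computation. The sketch below is ours and purely illustrative: it tallies \(r_{2}(n)=\#\{(x,y)\in\mathbb{Z}^{2}:x^{2}+y^{2}=n\}\) by brute force and sums it over the integers \(n\) with \(\lambda-\delta<\sqrt{n}<\lambda+\delta\). The occasional spikes of \(r_{2}\) are exactly what the divisor bound controls; for a general curve \(\Gamma\) no comparable arithmetic structure is available.

```python
from collections import Counter

def r2_table(nmax):
    """r2(n) = number of representations of n as a sum of two squares."""
    r = int(nmax ** 0.5) + 1
    counts = Counter(x * x + y * y
                     for x in range(-r, r + 1) for y in range(-r, r + 1))
    return [counts.get(n, 0) for n in range(nmax + 1)]

lam, delta = 300.0, 0.02
r2 = r2_table(int((lam + delta) ** 2) + 1)
lo, hi = int((lam - delta) ** 2) + 1, int((lam + delta) ** 2)
annulus_count = sum(r2[n] for n in range(lo, hi + 1))
print("lattice points in the annulus:", annulus_count, " lambda*delta =", lam * delta)
print("max r2(n) for n <= lambda^2:", max(r2))
```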
Exponential sum bounds seem to be the only possibility, which are closely related, if not equivalent, to the Gauss circle problem. They give the expected result for \(\delta>\lambda^{-\frac{1}{3}}\) easily, but it is difficult to go significantly below this barrier. As a result, for general smooth curves (with curvature) and the associated \(\widetilde{P_{\lambda,\delta}}\) operators, * Theorem 2.5 remains true for \(\delta>\lambda^{-\frac{1}{3}}\), and a little beyond, even though we will not compute here the exact exponents. * Theorem 3.1 remains true. * Theorem 4.4 and its corollaries remain true for \(\delta>\lambda^{-\frac{1}{3}}\). However, conjectures A and B break down for general curves (with curvature) for small \(\delta\): see Remark 4.8 on the case of the parabola. ### Acknowledgement When working on this article, CD was supported by the NSF grant DMS-2055156. He thanks his student Hongki Jung for help with Figure 1. PG was supported by a start-up grant from Imperial College and a Wolfson fellowship. He is most grateful to Simon Myerson for sharing his number theoretical insight in numerous conversations. ## 2. Caps decomposition of the kernel In this section, we prove Conjecture A for some range of \((p,\lambda,\delta)\) by decomposing a function supported on the annulus (in Fourier) into a sum of functions supported on caps (in Fourier). ### Counting points and caps Recall that the annulus of inner radius \(\lambda-\delta\) and width \(2\delta\) is denoted \[\mathcal{A}_{\lambda,\delta}=\{x\in\mathbb{R}^{2},\,\lambda-\delta<|x|<\lambda+ \delta\}.\] Next, we will split the annulus into caps: \(\mathcal{A}_{\lambda,\delta}\) can be covered by a finitely disjoint collection \(\mathcal{C}\) of caps \(\theta\), of dimensions \(\delta\times(\lambda\delta)^{\frac{1}{2}}\): \[\mathcal{A}_{\lambda,\delta}\subset\cup_{\theta\in\mathcal{C}}\theta.\] The number of such caps is \(\sim\lambda^{\frac{1}{2}}\delta^{-\frac{1}{2}}\). We will be interested in the number of lattice points contained in a cap; therefore the notation \[\theta^{\prime}=\theta\cap\mathbb{Z}^{2},\qquad\text{and, more generally}, \qquad E^{\prime}=E\cap\mathbb{Z}^{2}\;\;\text{if}\;E\subset\mathbb{R}^{2}\] will be useful. The maximal cardinality of \(\theta^{\prime}\) is \(\sim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}\); as for the average cardinality of \(\theta^{\prime}\), it is expected to be given by the area of \(\theta\), namely \(\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\). We will now split the collection \(\mathcal{C}\) into \(\cup_{s}\mathcal{C}_{s}\cup\mathcal{C}_{0}\) as follows * If \(2^{s}\) is such that \(1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\ll 2^{s}\lesssim\lambda^{\frac{1}{2}} \delta^{\frac{1}{2}}\), \(\mathcal{C}_{s}\) gathers all caps \(\theta\) such that \(\#\theta^{\prime}\sim 2^{s}\). * All remaining caps go into \(\mathcal{C}_{0}\); in other words, caps in \(\mathcal{C}_{0}\) are such that \(\#\theta^{\prime}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}+1\). **Lemma 2.1**.: _If \(2^{s}\gg 1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\), then_ \[\#\mathcal{C}_{s}\lesssim\lambda\delta 2^{-2s}.\] Proof.: This follows along the lines of the argument for Theorem 2.17 in [9]. Since \(2^{s}\gg\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\), which is the area of a cap, the set \(\theta^{\prime}\) is one-dimensional2. It consists of colinear points, with equal spacing. 
We now split \(\mathcal{S}_{s}\) into Footnote 2: Indeed, if a convex body \(K\subset\mathbb{R}^{2}\) is symmetric with respect to the origin, \(a\in\mathbb{R}^{2}\) and \(K\cap(a+\mathbb{Z}^{2})\) has dimension \(2\), then \(\#(a+\mathbb{Z}^{2})\cap K\lesssim|K|\). \[\mathcal{S}_{s,m}=\{\theta\in\mathcal{C},\;\#\theta^{\prime}\sim 2^{s},\; \text{and two consecutive points in $\theta^{\prime}$ are $\sim 2^{m}$ apart}\}.\] On the one hand, we can associate to \(\theta\in\mathcal{S}_{s,m}\) its direction \(d_{\theta}\), which is the difference between two consecutive points in \(\theta^{\prime}\). This is a vector of \(\mathbb{Z}^{2}\) with magnitude \(\sim 2^{m}\); therefore, the number of possible directions is \(\lesssim 2^{2m}\). On the other hand, the angle between \(\theta^{\prime}\) and the major axis of \(\theta\) is \(\lesssim\delta 2^{-m-s}\). If we consider a subcollection of \(\theta\in\mathcal{S}_{s,m}\) with angular separation \(\gg\delta 2^{-m-s}\), then the directions \(d_{\theta}\) are distinct. Such an angular separation can be achieved by labeling all caps in the collection \(\mathcal{C}\), starting at one cap, and following the circle; and then keeping only those caps which are in a fixed class modulo \(\sim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}2^{-m-s}\). These arguments show that \[\#\mathcal{S}_{m,s}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}2^{-m-s}2^ {2m}=\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}2^{m-s}. \tag{2.1}\] Summing over \(2^{m}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}2^{-s}\) gives the desired result. **Lemma 2.2**.: _If \(\lambda\delta>1\),_ \[\#(\mathcal{A}_{\lambda,\delta})^{\prime}\lesssim\lambda^{1+\epsilon}\delta.\] Proof.: There are \(\sim\lambda\delta\) integers \(n\) such that \(\lambda-\delta<\sqrt{n}<\lambda+\delta\). For each such integer, there are at most \(O(\lambda^{\epsilon})\) solutions of \(x^{2}+y^{2}=n\), by the divisor bound in \(\mathbb{Z}[i]\) ### \(L^{p}\) estimates on caps with bounded numbers of points The basic idea behind the \(L^{4}\) and \(L^{6}\) estimates which are stated below is to break down our spectral projectors into projections on caps. In order to do so, we choose first a smooth partition of unity (\(\chi_{\theta}\)) associated to the collection \(\mathcal{C}\): \[\operatorname{Supp}\chi_{\theta}\subset\theta,\qquad\text{and}\qquad\sum_{ \theta\in\mathcal{C}}\chi_{\theta}=1\quad\text{on }\mathcal{A}_{\lambda,\delta}.\] and define next the Fourier multipliers \[P^{\theta}=\chi^{\theta}(D).\] **Lemma 2.3** (\(L^{4}\) estimate by the bilinear argument).: _Let \(f\) be a function on the torus whose Fourier support \(S\subset\mathcal{A}_{\lambda,\delta}\) is such that for any \(\theta\in\mathcal{C}\), \(\#\left(S\cap\theta\right)^{\prime}\leq N\). Then_ \[\|f\|_{L^{4}}\lesssim_{\epsilon}N^{\frac{1}{4}}\|f\|_{L^{2}}.\] Proof.: It essentially follows that of Proposition 2.4 in [6]. 
Since \(\widehat{f}\) is supported in \(\mathcal{A}_{\lambda,\delta}\), \[\|P_{\lambda,\delta}f\|_{L^{4}}=\left\|\sum_{\theta,\widetilde{\theta}\in\mathcal{C}}P^{\theta}f\,P^{\widetilde{\theta}}f\right\|_{L^{2}}^{\frac{1}{2}}.\] The key geometrical observation is that, up to exchanging the roles of \(\theta_{1}\) and \(\theta_{2}\), the supports of \(P^{\theta_{1}}fP^{\widetilde{\theta}_{1}}f\) and \(P^{\theta_{2}}fP^{\widetilde{\theta}_{2}}f\) are disjoint unless \[\operatorname{dist}(\theta_{1},\theta_{2})+\operatorname{dist}(\widetilde{\theta}_{1},\widetilde{\theta}_{2})\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}.\] As a consequence, almost orthogonality followed by Holder's inequality gives the bound \[\|P_{\lambda,\delta}f\|_{L^{4}}\lesssim\left(\sum_{\theta,\widetilde{\theta}}\left\|P^{\theta}fP^{\widetilde{\theta}}f\right\|_{L^{2}}^{2}\right)^{\frac{1}{4}}\lesssim\left(\sum_{\theta,\widetilde{\theta}}\left\|P^{\theta}f\right\|_{L^{4}}^{2}\left\|P^{\widetilde{\theta}}f\right\|_{L^{4}}^{2}\right)^{\frac{1}{4}}.\] Applying successively interpolation, the Cauchy-Schwarz inequality and Lemma 2.1, we can estimate \[\|P^{\theta}f\|_{L^{4}}\lesssim\|P^{\theta}f\|_{L^{\infty}}^{\frac{1}{2}}\|P^{\theta}f\|_{L^{2}}^{\frac{1}{2}}\lesssim\left(\#\theta^{\prime}\right)^{\frac{1}{4}}\|P^{\theta}f\|_{L^{2}}\leq N^{\frac{1}{4}}\|P^{\theta}f\|_{L^{2}}.\] Injecting this inequality in the previous estimate, and using once again almost orthogonality, \[\|P_{\lambda,\delta}f\|_{L^{4}}\lesssim N^{\frac{1}{4}}\left(\sum_{\theta,\widetilde{\theta}}\left\|P^{\theta}f\right\|_{L^{2}}^{2}\left\|P^{\widetilde{\theta}}f\right\|_{L^{2}}^{2}\right)^{\frac{1}{4}}\lesssim N^{\frac{1}{4}}\|f\|_{L^{2}}.\] **Lemma 2.4** (The \(L^{6}\) estimate by \(\ell^{2}\) decoupling).: _Let \(f\) be a function on the torus whose Fourier support \(S\subset\mathcal{A}_{\lambda,\delta}\) is such that for any \(\theta\in\mathcal{C}\), \(\#\left(S\cap\theta\right)^{\prime}\leq N\). Then_ \[\|f\|_{L^{6}}\lesssim_{\epsilon}\lambda^{\epsilon}\delta^{-\epsilon}N^{\frac{1}{3}}\|f\|_{L^{2}}.\] Proof.: It follows [15], the fundamental ingredient being the \(\ell^{2}\) decoupling of Bourgain-Demeter [9]. Writing \(f\) as the sum of its Fourier series \(f=\sum a_{k}e^{2\pi ik\cdot x}\) and changing variables to \(X=\lambda x\) and \(K=k/\lambda\), \[\|f\|_{L^{6}(\mathbb{T}^{2})}=\left\|\sum_{k\in\mathbb{Z}^{2}}a_{k}e^{2\pi ik\cdot x}\right\|_{L^{6}(\mathbb{T}^{2})}\lesssim\left(\frac{\delta}{\lambda}\right)^{\frac{1}{3}}\left\|\phi\left(\frac{\delta X}{\lambda}\right)\sum_{K\in\mathbb{Z}^{2}/\lambda}a_{\lambda K}e^{2\pi iK\cdot X}\right\|_{L^{6}(\mathbb{R}^{2})},\] where the cutoff function \(\phi\) can be chosen to have compactly supported Fourier transform. As a result, the Fourier transform of the function on the right-hand side is supported on a \(\delta/\lambda\)-neighborhood of \(\mathbb{S}^{1}\).
It can be written as a sum of functions which are supported on caps \(\theta/\lambda\) with dimension \(\sim\frac{\delta}{\lambda}\times\frac{\delta^{\frac{1}{2}}}{\lambda^{\frac{1}{ 2}}}\) (recall from Section 2.1 that the collection \(\mathcal{C}\) of caps \(\theta\) provides an almost disjoint covering of \(\mathcal{A}_{\lambda,\delta}\)) : \[\phi\left(\frac{\delta X}{\lambda}\right)\sum_{K\in\mathbb{Z}^{2}/\lambda}a_{ \lambda K}e^{2\pi iK\cdot X}=\sum_{\theta}\phi\left(\frac{\delta X}{\lambda} \right)\sum_{K\in\mathbb{Z}^{2}/\lambda}\chi_{\theta}(\lambda K)a_{\lambda K}e ^{2\pi iK\cdot X}.\] By \(\ell^{2}\) decoupling, the \(L^{6}\) norm above is bounded by \[\lesssim_{\epsilon}\left(\frac{\delta}{\lambda}\right)^{\frac{1}{3}-\epsilon} \left(\sum_{\theta}\left\|\phi\left(\frac{\delta X}{\lambda}\right)\sum_{K\in \mathbb{Z}^{2}/\lambda}\chi_{\theta}(\lambda K)a_{\lambda K}e^{2\pi iK\cdot X }\right\|_{L^{6}(\mathbb{R}^{2})}^{2}\right)^{\frac{1}{2}}.\] At this point, we use the inequality \[\text{if }p\geq 2,\qquad\|g\|_{L^{p}(\mathbb{R}^{2})}\lesssim\|g\|_{L^{2}( \mathbb{R}^{2})}|\operatorname{Supp}\widehat{g}|^{\frac{1}{2}-\frac{1}{p}},\] which follows by applying successively the Hausdorff-Young and Holder inequalities, and finally the Plancherel equality. We use this inequality for \(g(X)=\phi\left(\frac{\delta X}{\lambda}\right)\sum_{K}\chi_{\theta}(\lambda K )a_{\lambda K}e^{2\pi iK\cdot X}\). Since \(\#(S\cap\theta)^{\prime}\leq N\), its Fourier transform is supported on the union of at most \(N\) balls of radius \(O(\delta/\lambda)\), giving \(|\operatorname{Supp}\widehat{f}|\lesssim N\delta^{2}\lambda^{-2}\). Thus the \(L^{6}\) norm we are trying to bound is less than \[\lesssim\left(\frac{\delta}{\lambda}\right)^{\frac{1}{3}-\epsilon}(N\delta^{2 }\lambda^{-2})^{\frac{1}{3}}\left(\sum_{\theta}\left\|\phi\left(\frac{\delta X }{\lambda}\right)\sum_{K\in\mathbb{Z}^{2}/\lambda}\chi_{\theta}(\lambda K)a_{ \lambda K}e^{2\pi iK\cdot X}\right\|_{L^{2}(\mathbb{R}^{2})}^{2}\right)^{ \frac{1}{2}}\] By almost orthogonality and periodicity of the Fourier series, this is in turn bounded by \[\lesssim\left(\frac{\delta}{\lambda}\right)^{1-\epsilon}N^{\frac{1}{3}}\left\| \phi\left(\frac{\delta X}{\lambda}\right)\sum_{K\in\mathbb{Z}^{2}/\lambda}a_{ \lambda K}e^{2\pi iK\cdot X}\right\|_{L^{2}(\mathbb{R}^{2})}\lesssim\left( \frac{\delta}{\lambda}\right)^{-\epsilon}N^{\frac{1}{3}}\|f\|_{L^{2}(\mathbb{T }^{2})},\] which is the desired estimate. ### Interpolation Interpolating between the estimates proved in the previous subsections enables us to prove the following theorem. **Theorem 2.5**.: 1. _If_ \(p\in[2,6)\) _and_ \(\lambda^{-1+\kappa}<\delta<\lambda^{-\kappa}\) _for some_ \(\kappa>0\)_,_ \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim_{p,\kappa}(\lambda\delta)^{ \frac{1}{4}-\frac{1}{2p}}.\] 2. _If either_ \(p\in[6,10]\) _and_ \(\delta>\lambda^{-1}\)_, or_ \(p\in[10,\infty]\) _and_ \(\delta>\lambda^{-\frac{1}{3}}\)_,_ \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim_{\epsilon}\lambda^{\epsilon} \left[\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}+(\lambda\delta)^{ \frac{1}{4}-\frac{1}{2p}}\right].\] Proof.: Decomposition of \(P_{\lambda,\delta}\) Recall that the Fourier multipliers \(P^{\theta}\) were defined in the previous section. 
They are now used to set \[P^{s}=\sum_{\theta\in\mathcal{C}_{s}}P^{\theta}\qquad\text{and}\qquad P^{0}=\sum_{\theta\in\mathcal{C}_{0}}P^{\theta}\] as well as \[P^{s}_{\lambda,\delta}=P_{\lambda,\delta}P^{s}\qquad\text{and}\qquad P^{0}_{\lambda,\delta}=P_{\lambda,\delta}P^{0}.\] We will split \(P_{\lambda,\delta}\) into \[P_{\lambda,\delta}=P_{\lambda,\delta}^{0}+\sum_{s}P_{\lambda,\delta}^{s},\] and estimate the different summands on the right-hand side by interpolating between \(L^{2}\to L^{p}\) bounds, with \(p=4,6,\infty\). Basic bounds We learn from Lemmas 2.3 and 2.4 that, if \(\delta\gtrsim\lambda^{-1}\) and \(2^{s}\gg 1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\), \[\|P_{\lambda,\delta}^{0}\|_{L^{2}\to L^{4}} \lesssim\lambda^{\frac{1}{8}}\delta^{\frac{3}{8}}+1\] \[\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{4}} \lesssim 2^{\frac{s}{4}}\] \[\|P_{\lambda,\delta}^{0}\|_{L^{2}\to L^{6}} \lesssim_{\epsilon}\lambda^{\epsilon}\left[\lambda^{\frac{1}{6}}\delta^{\frac{1}{2}}+1\right]\] \[\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{6}} \lesssim_{\epsilon}\lambda^{\epsilon}2^{\frac{s}{3}}.\] Furthermore, Lemmas 2.1 and 2.2 give the bounds \[\|P_{\lambda,\delta}^{0}\|_{L^{2}\to L^{\infty}} \lesssim\lambda^{\epsilon}(\lambda\delta)^{\frac{1}{2}}\] \[\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{\infty}} \lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}2^{-\frac{s}{2}}\] The case \(2\leq p\leq 4\) By the basic bounds above, \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{4}}\lesssim\|P_{\lambda,\delta}^{0}\|_{L^{2}\to L^{4}}+\sum_{s}\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{4}}\lesssim\lambda^{\frac{1}{8}}\delta^{\frac{3}{8}}+1+\sum_{1\leq 2^{s}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}}2^{\frac{s}{4}}\lesssim(\lambda\delta)^{\frac{1}{8}}.\] This is the desired bound if \(p=4\), and the case \(2\leq p\leq 4\) follows from interpolation with the trivial case \(p=2\). Bounding \(P_{\lambda,\delta}^{0}\) Interpolating between the basic bounds for \(P_{\lambda,\delta}^{0}\) gives, if \(4\leq p\leq 6\), \[\|P_{\lambda,\delta}^{0}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\epsilon}\left[1+\lambda^{\frac{1}{4}-\frac{1}{2p}}\delta^{\frac{3}{4}-\frac{3}{2p}}\right],\] which is consistent with the conjecture if \(\lambda^{-1+\kappa}<\delta<\lambda^{-\kappa}\). If \(6\leq p\leq\infty\), we obtain instead \[\|P_{\lambda,\delta}^{0}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\epsilon}\left[(\lambda\delta)^{\frac{1}{2}-\frac{3}{p}}+\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}\right].\] This is consistent with the conjecture with \(\epsilon\) loss if \((\lambda\delta)^{\frac{1}{2}-\frac{3}{p}}+\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}\lesssim\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}+(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}\), which is the case if \(\delta>\lambda^{-\frac{1}{3}}\) or \(p\leq 10\). Bounding \(\sum_{s}P_{\lambda,\delta}^{s}\) if \(4\leq p\leq 6\).
Interpolating between the basic bounds for \(P_{\lambda,\delta}^{s}\) from \(L^{2}\) to \(L^{4}\) and \(L^{2}\) to \(L^{\infty}\), \[\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}-\frac{2}{p}}2^{s\left(-\frac{1}{2}+\frac{3}{p}\right)}\qquad\text{if }2^{s}\gg 1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}.\] Therefore, if \(p<6\), \[\sum_{1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\ll 2^{s}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}}\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\frac{1}{4}-\frac{1}{2p}}\delta^{\frac{1}{4}-\frac{1}{2p}},\] which is consistent with the conjecture. Bounding \(\sum_{s}P_{\lambda,\delta}^{s}\) if \(p\geq 6\). Interpolating between the basic bounds for \(P_{\lambda,\delta}^{s}\) from \(L^{2}\) to \(L^{6}\) and \(L^{2}\) to \(L^{\infty}\), \[\|P_{\lambda,\delta}^{s}\|_{L^{2}\to L^{p}}\lesssim_{\epsilon}\lambda^{\epsilon}\lambda^{\frac{1}{2}-\frac{3}{p}}\delta^{\frac{1}{2}-\frac{3}{p}}2^{s\left(-\frac{1}{2}+\frac{5}{p}\right)}\qquad\text{if }2^{s}\gg 1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}.\] Summing over \(2^{s}\) gives, if \(p\leq 10\), \[\sum_{1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\ll 2^{s}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}}\|P^{s}_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\epsilon}\lambda^{\frac{1}{2}-\frac{3}{p}}\delta^{\frac{1}{2}-\frac{3}{p}}\sum_{2^{s}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}}2^{s\left(-\frac{1}{2}+\frac{5}{p}\right)}\sim\lambda^{\epsilon}(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}},\] which is consistent with the conjecture with \(\epsilon\) loss. If we assume now that \(p\geq 10\), \[\sum_{1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}\ll 2^{s}\lesssim\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}}\|P^{s}_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\epsilon}\lambda^{\frac{1}{2}-\frac{3}{p}}\delta^{\frac{1}{2}-\frac{3}{p}}\sum_{2^{s}\gg 1+\lambda^{\frac{1}{2}}\delta^{\frac{3}{2}}}2^{s\left(-\frac{1}{2}+\frac{5}{p}\right)}\sim\lambda^{\epsilon}\left[\lambda^{\frac{1}{4}-\frac{1}{2p}}\delta^{-\frac{1}{4}+\frac{9}{2p}}+(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}\right]\] This is \(\lesssim\lambda^{\epsilon}\left[\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}+(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}\right]\) if and only if \(\delta>\lambda^{-\frac{1}{3}}\). ## 3. Dyadic decomposition of the kernel In this section, we will prove the following theorem, which validates Conjecture A for some range of \((p,\lambda,\delta)\). The idea of the proof is to decompose dyadically (in the Poisson summation formula) the kernel of the spectral projector. **Theorem 3.1**.: _For \(p\geq 6\) and \(\epsilon>0\),_ \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim_{\epsilon}\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}\qquad\text{if}\qquad\delta>\min\left(\lambda^{-\frac{1-\frac{6}{p}}{3-\frac{2}{p}}},\lambda^{-\frac{10-\frac{64}{p}}{29-\frac{14}{p}}+\epsilon}\right).\] _Remark 3.2_.: The proof given below uses a pointwise bound on a two-dimensional exponential sum, which is borrowed from Muller [22] and enables us to prove the theorem for scales \(\delta\) (moderately) smaller than \(\lambda^{-\frac{1}{3}}\). The bound in [22] has the advantage of being robust and admitting a rather simple proof, by the Van der Corput method, while providing an improvement over the trivial estimate.
But it is certainly not optimal; in particular, the methods recounted in Huxley [19] leading to Huxley [20] could give further improvements, though they do not seem immediately applicable to the sum under consideration. Proof.: Decomposition of the kernel For technical reasons, it will be more convenient to consider a mollified version of the projector \(P_{\lambda,\delta}\): let \[P^{\flat}_{\lambda,\delta}e^{2\pi ik\cdot x}=\chi_{\lambda,\delta}(k)e^{2\pi ik\cdot x},\] where the function \(\chi_{\lambda,\delta}\) is a nonnegative and smooth cutoff function adapted to the annulus \(\mathcal{A}_{\lambda,\delta}\), namely \(\chi_{\lambda,\delta}(x)>c>0\) if \(x\in\mathcal{A}_{\lambda,\delta}\). To be more specific, it is defined as follows: consider the superficial measure \(d\sigma_{\lambda}\) induced by the Lebesgue measure on the circle of center \(0\) and radius \(\lambda\) (in other words, \(d\sigma_{\lambda}\) is the uniform measure on that circle with total mass \(2\pi\lambda\)). Consider furthermore a positive function \(\chi\) whose Fourier transform is compactly supported. Finally, let \[\chi_{\lambda,\delta}=\delta^{-1}\chi(\delta^{-1}\cdot)*d\sigma_{\lambda}.\] We now introduce a dyadic partition of unity, namely \(\mathcal{C}_{0}^{\infty}\), nonnegative functions \(\varphi\) and \(\psi\) such that \[\varphi(x)+\sum_{M\in 2^{\mathbb{N}}}\psi\left(\frac{x}{M}\right)=1,\] and furthermore \(\varphi\) is supported in a ball, and \(\psi\) in an annulus. Denoting \(\Phi^{\flat}_{\lambda,\delta}\) the kernel of \(P^{\flat}_{\lambda,\delta}\), it can be written by Poisson summation as \[\Phi^{\flat}_{\lambda,\delta}(x)=\sum_{k\in\mathbb{Z}^{2}}\chi_{\lambda,\delta}(k)e^{2\pi ik\cdot x}=\sum_{m\in\mathbb{Z}^{2}}\widehat{\chi_{\lambda,\delta}}(m-x),\] (where \(\widehat{\cdot}\) stands for the Fourier transform on \(\mathbb{R}^{2}\)) and further decomposed, with the help of the partition of unity, into \[\Phi^{\flat}_{\lambda,\delta}(x) =\sum_{m\in\mathbb{Z}^{2}}\varphi(m-x)\widehat{\chi_{\lambda,\delta}}(m-x)+\sum_{m\in\mathbb{Z}^{2}}\sum_{M\in 2^{\mathbb{N}}}\psi\left(\frac{m-x}{M}\right)\widehat{\chi_{\lambda,\delta}}(m-x)\] \[=\Phi^{\flat,0}_{\lambda,\delta}(x)+\sum_{M\in 2^{\mathbb{N}}}\Phi^{\flat,M}_{\lambda,\delta}(x).\] Finally, the operators associated to these convolution kernels are denoted by \(P^{\flat,0}_{\lambda,\delta}\) and \(P^{\flat,M}_{\lambda,\delta}\). Asymptotics of \(\widehat{\chi_{\lambda,\delta}}\). Denoting \(J(\xi)\) for the Fourier transform of \(d\sigma_{1}\) (superficial measure on the unit sphere), it follows from the definition of \(\chi_{\lambda,\delta}\) that \[\widehat{\chi_{\lambda,\delta}}(\xi)=\lambda\delta J(\lambda\xi)\widehat{\chi}(\delta\xi).\] The function \(J\) is smooth, and its asymptotic expansion is well-known \[J(\xi)\sim\frac{e^{i|\xi|}}{|\xi|^{\frac{1}{2}}}\sum_{j=0}^{\infty}a_{j}|\xi|^{-j}+\frac{e^{-i|\xi|}}{|\xi|^{\frac{1}{2}}}\sum_{j=0}^{\infty}b_{j}|\xi|^{-j}. \tag{3.1}\] We refer to [24], Chapter VIII, for the proof of this statement and the meaning of the series in the asymptotic expansion (see in particular Proposition 3). Bounding \(P^{\flat,0}_{\lambda,\delta}\).
By Young's inequality and the above expansion, \[\|P^{\flat,0}_{\lambda,\delta}\|_{L^{p^{\prime}}\to L^{p}}\lesssim\|\Phi^{ \flat,0}_{\lambda,\delta}\|_{L^{\frac{p}{2}}}\lesssim\|\widehat{\chi_{\lambda,\delta}}\|_{L^{\frac{p}{2}}}\lesssim\lambda\delta\|J(\lambda\xi)\|_{L^{\frac{ p}{2}}}\lesssim\lambda^{1-\frac{4}{p}}\delta\qquad\text{if $p>8$}.\] To treat the case \(p\in[6,8]\), we can invoke the fact that the operator on \(\mathbb{R}^{2}\) with symbol \(\chi_{\lambda,\delta}(\xi)\) has \(L^{p^{\prime}}\to L^{p}\) operator norm \(\lesssim\lambda^{1-\frac{4}{p}}\delta\), which follows from the Stein-Tomas theorem [25, 24]. As a consequence, the operator with convolution kernel \(\varphi\widehat{\chi_{\lambda,\delta}}\) has operator norm \(\lesssim\lambda^{1-\frac{4}{p}}\delta\) (indeed, the function \(\varphi\widehat{\chi_{\lambda,\delta}}\) can be written under the form \(\delta\widehat{\chi_{\lambda,1}}\) by modifying the cutoff function). Since \(\varphi\) is compactly supported, this implies the desired bound for the operator \(P^{\flat,0}_{\lambda,\delta}\), whose convolution kernel is given by the periodization of \(\varphi\widehat{\chi_{\lambda,\delta}}\). Bounding \(P^{\flat,M}_{\lambda,\delta}\). This will be achieved by interpolating between \(L^{2}\to L^{2}\) and \(L^{1}\to L^{\infty}\) bounds, in a manner which is reminiscent of the classical proof of the Stein-Tomas theorem [25]. Before doing so, we observe that the range of \(M\) can be restricted to \(M\lesssim\delta^{-1}\); this follows from the fact that \(\widehat{\chi}\) is compactly supported, and the formula for \(\widehat{\chi_{\lambda,\delta}}\) above. * To obtain the \(L^{2}\to L^{2}\) bound, we deduce from the definition of the kernel \(\Phi^{\flat,M}_{\lambda,\delta}\) and Poisson summation that \(P^{\flat,M}_{\lambda,\delta}\) is the Fourier multiplier on the torus with symbol \(M^{2}\psi(M\cdot)*\chi_{\lambda,\delta}\). Therefore, \[\|P^{\flat,M}_{\lambda,\delta}\|_{L^{2}\to L^{2}}\lesssim\|M^{2}\psi(M\cdot)* \chi_{\lambda,\delta}\|_{L^{\infty}}\lesssim M\delta\qquad\text{if $M\lesssim \delta^{-1}$}\] * To obtain the \(L^{1}\to L^{\infty}\) bound, we rely on exponential sum estimates. Reducing the asymptotics of \(\widehat{\chi_{\lambda,\delta}}\) to its leading order (lower order terms being easier to treat), we need to bound \[S_{\lambda,M,x}=\lambda^{\frac{1}{2}}\delta\sum_{n\in\mathbb{Z}^{2}}\psi\left( \frac{n-x}{M}\right)\frac{e^{i\lambda|n-x|}}{\langle n-x\rangle^{\frac{1}{2}}}.\] A first obvious bound is \[|S_{\lambda,M,x}|\lesssim\lambda^{\frac{1}{2}}\delta M^{\frac{3}{2}}.\] Furthermore, this sum can be written under the form \(\lambda^{\frac{1}{2}}\delta M^{-\frac{1}{2}}\sum_{|m|\sim M}W(m)e^{if(m)}\), which is the form considered in [22] (see also [17]). In order to apply Theorem 2 in [22], we note that \(W\) and \(f\) satisfy the required derivative bounds; as for the condition on the determinant of iterated derivatives, it is verified thanks to Lemma 3 in that same paper. 
This yields the bound \[|S_{\lambda,M,x}|\lesssim_{\epsilon}\lambda^{\frac{1}{2}+\omega}\delta M^{\frac{3}{2}-(q+1)\omega+\epsilon}\quad\text{if }\lambda\geq M^{q-2+\frac{2}{Q}},\quad\text{with}\quad\omega=\frac{2}{4(Q-1)+2Q},\quad Q=2^{q}.\] Choosing \(q=3\), this means that \[|S_{\lambda,M,x}|\lesssim_{\epsilon}\lambda^{\frac{6}{11}}\delta M^{\frac{29}{22}+\epsilon}\qquad\text{if }\lambda>M^{\frac{5}{4}}.\] Overall, still under the assumption that \(\lambda>M^{\frac{5}{4}}\), \[\|P_{\lambda,\delta}^{\flat,M}\|_{L^{1}\to L^{\infty}}\lesssim\|\Phi_{\lambda,\delta}^{\flat,M}\|_{L^{\infty}}\lesssim\min(\lambda^{\frac{1}{2}}\delta M^{\frac{3}{2}},\lambda^{\frac{6}{11}}\delta M^{\frac{29}{22}+\epsilon})\] Interpolating between these two bounds, we find that \[\|P_{\lambda,\delta}^{\flat,M}\|_{L^{p^{\prime}}\to L^{p}}\lesssim_{\epsilon}\min(M^{\frac{3}{2}-\frac{1}{p}}\lambda^{\frac{1}{2}-\frac{1}{p}}\delta,M^{\frac{29}{22}-\frac{7}{11p}+\epsilon}\lambda^{\frac{6}{11}-\frac{12}{11p}}\delta).\] Since \(M\lesssim\delta^{-1}\), this is \(<\lambda^{1-\frac{4}{p}}\delta\) provided \[\delta>\min\left(\lambda^{-\frac{1-\frac{6}{p}}{3-\frac{2}{p}}},\lambda^{-\frac{10-\frac{64}{p}}{29-\frac{14}{p}}+\epsilon}\right).\] Conclusion of the argument Reconstructing \(P_{\lambda,\delta}^{\flat}\) as the sum of \(P_{\lambda,\delta}^{\flat,M}\) for \(0\leq M\lesssim\delta^{-1}\) gives the \(L^{p^{\prime}}\to L^{p}\) bound \(\lambda^{1-\frac{4}{p}}\delta\). By the classical \(TT^{*}\) argument, this implies a \(L^{2}\to L^{p}\) bound \(\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}\) for the operator with symbol \(\sqrt{\chi_{\lambda,\delta}}\), from which we deduce the desired \(L^{2}\to L^{p}\) bound for \(P_{\lambda,\delta}\). ## 4. The conjecture on the \(L^{p}\) norm of the convolution kernel This section is dedicated to Conjecture B, which was introduced in the introduction. We show how it is related to Conjecture A on the one hand, to additive energies on the other hand, and then use small cap decoupling to prove it with an \(\epsilon\) loss, if \(p\leq 6\) or \(\delta>\lambda^{-\frac{1}{3}}\). ### Partial equivalence of Conjecture A and Conjecture B **Lemma 4.1**.: * _If_ \(\delta>\lambda^{-1+\frac{8}{p+2}}\)_, Conjecture A for_ \((\delta,\lambda,p)\) _implies Conjecture B for_ \((\delta,\lambda,p)\)_._ * _If_ \(p\geq 4\) _and_ \(\delta>\lambda^{-1+\frac{4}{p}}\)_, Conjecture B for_ \((\delta,\lambda,p)\) _implies Conjecture A for_ \((\delta,\lambda,2p)\)_._ Proof.: Let us assume first that Conjecture A holds for \((\delta,\lambda,p)\) and \(\delta>\lambda^{-1+\frac{8}{p+2}}\). Note first that \[\|\Phi_{\lambda,\delta}\|_{L^{2}}=\|\Phi_{\lambda,\delta}\|_{L^{\infty}}^{\frac{1}{2}}=\|P_{\lambda,\delta}\|_{L^{2}\to L^{\infty}}\lesssim(\lambda\delta)^{\frac{1}{2}},\] where we used Conjecture A for \((\delta,\lambda,\infty)\), which is a consequence of the conjecture for \((\delta,\lambda,p)\). Then, estimating \(\|\Phi_{\lambda,\delta}\|_{L^{2}}\) by Parseval's equality, \[\|\Phi_{\lambda,\delta}\|_{L^{p}}=\|P_{\lambda,\delta}\Phi_{\lambda,\delta}\|_{L^{p}}\leq\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\|\Phi_{\lambda,\delta}\|_{L^{2}}\lesssim[\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}][\lambda^{\frac{1}{2}}\delta^{\frac{1}{2}}]=\lambda^{1-\frac{2}{p}}\delta,\] which proves Conjecture B. Conversely, let us assume that Conjecture B holds for \((\delta,\lambda,p)\) and \(\delta>\lambda^{-1+\frac{4}{p}}\).
Then, by Young's inequality, \[\|P_{\lambda,\delta}f\|_{L^{2p}}=\|\Phi_{\lambda,\delta}*f\|_{L^{2p}}\lesssim\|\Phi_{\lambda,\delta}\|_{L^{p}}\|f\|_{L^{(2p)^{\prime}}}\lesssim\lambda^{1-\frac{2}{p}}\delta\|f\|_{L^{(2p)^{\prime}}}.\] This means that \[\|P_{\lambda,\delta}\|_{L^{(2p)^{\prime}}\to L^{2p}}\lesssim\lambda^{1-\frac{2}{p}}\delta.\] By the classical \(TT^{*}\) argument, \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{2p}}=\|P_{\lambda,\delta}\|_{L^{(2p)^{\prime}}\to L^{2p}}^{\frac{1}{2}}\lesssim\lambda^{\frac{1}{2}-\frac{1}{p}}\delta^{\frac{1}{2}},\] which proves Conjecture A for \((\delta,\lambda,2p)\). ### Link to additive energies For \(p\) an even number, the \(\frac{p}{2}\)-additive energy of the set \(\Lambda\subset\mathbb{R}^{2}\) is defined as \[\mathbb{E}_{\frac{p}{2}}(\Lambda)=\#\{(x_{1},\ldots,x_{p})\in\Lambda^{p}\text{ such that }x_{1}+\cdots+x_{\frac{p}{2}}=x_{\frac{p}{2}+1}+\cdots+x_{p}\}.\] If \(p\) is an even number, \[\mathbb{E}_{\frac{p}{2}}(\mathcal{A}^{\prime}_{\lambda,\delta})=\|\Phi_{\lambda,\delta}\|_{L^{p}(\mathbb{T}^{2})}^{p},\] so that the conjecture can be reformulated, for even \(p\), as \[\mathbb{E}_{\frac{p}{2}}(\mathcal{A}^{\prime}_{\lambda,\delta})\sim\lambda^{p-2}\delta^{p}+(\lambda\delta)^{\frac{p}{2}}.\] How can this conjecture be interpreted in terms of additive energies? The second term on the above right-hand side comes from diagonal contributions, in other words from the universal inequality \[\mathbb{E}_{\frac{p}{2}}(\mathcal{A}^{\prime}_{\lambda,\delta})\geq|\mathcal{A}^{\prime}_{\lambda,\delta}|^{\frac{p}{2}}.\] For the first term on the right-hand side, note that there are \(\sim(\lambda\delta)^{p/2}\) possible sums of \(p/2\) elements of \(\mathcal{A}^{\prime}_{\lambda,\delta}\), and they all lie inside the ball of radius \(\sim\lambda\). Moreover, two distinct sums will have to be at least \(1\) apart from each other. As a result of this, many pairs of sums are forced to coincide, and a simple application of Cauchy-Schwarz proves the first lower bound. Additive energies of annular sets \(\mathcal{A}^{\prime}_{\lambda,\delta}\) in dimension \(2\) were considered in [21], and then by Bombieri-Bourgain [2] who could prove that \[\mathbb{E}_{3}(\mathcal{A}^{\prime}_{\lambda,0})\lesssim[\#\mathcal{A}^{\prime}_{\lambda,0}]^{\frac{7}{2}}.\] They conjectured that the exponent \(\frac{7}{2}\) can be replaced by \(3\) (which is equivalent to the case \(p=6\) of (1.1)). Bourgain-Demeter [9] proved the conjecture for \(p=6\), \(\delta=\lambda^{-\frac{1}{3}}\) with an \(\epsilon\)-loss. Finally, the question was also considered in dimensions four and five, see [8]. ### Some progress on the kernel conjecture In this section we write \(A\lessapprox B\) if \(A\lesssim_{\epsilon}P^{\epsilon}B\) holds for all \(\epsilon>0\), where \(P\) is the scale parameter, typically denoted by \(R\) or \(\lambda\). The chief tool for proving Conjecture B in the range \(4\leq p\leq 6\) is small cap decoupling, a result first proved in [12], and further refined in [13]. Let \(I\) be a compact interval and let \(\Gamma:I\to\mathbb{R}\) be a smooth curve with \[\min_{\xi\in I}|\Gamma^{\prime\prime}(\xi)|>0. \tag{4.1}\] Let \(0<\beta\leq 1\). For \(R\geq 1\), partition the \(1/R\)-neighborhood \(\mathcal{N}_{1/R}(\Gamma)\) into tubular regions \(\gamma\) with length \(R^{-\beta}\) and width/height \(\sim 1/R\). We will call such \(\gamma\) an \((R^{-\beta},R^{-1})\)-cap. There are \(\sim R^{\beta}\) such caps.
We introduce the Fourier projection onto \(L^{2}(\gamma)\) \[f_{\gamma}(x)=\int_{\gamma}\widehat{f}(\xi)e(x\cdot\xi)d\xi.\] Given a ball \(B_{R}=B(x_{0},R)\), we define the weight \[w_{B_{R}}(x)=(1+\frac{|x-x_{0}|}{R})^{-100}.\] The following was proved in [12] for the parabola. The extension to arbitrary curves \(\Gamma\) as above is fairly standard. **Theorem 4.2** (\(l^{p}(L^{p})\) small cap decoupling, [12]).: _Assume \(f:\mathbb{R}^{2}\to\mathbb{C}\) has Fourier transform supported in \(\mathcal{N}_{1/R}(\Gamma)\). Then for \(4\leq p\leq\min(2+\frac{2}{\beta},6)\) and each ball \(B_{R}\) we have_ \[\|f\|_{L^{p}(B_{R})}\lessapprox R^{\beta(\frac{1}{2}-\frac{1}{p})}(\sum_{ \gamma}\|f_{\gamma}\|_{L^{p}(w_{B_{R}})}^{p})^{1/p}. \tag{4.2}\] When \(\beta>1/2\) this is a good substitute for \(l^{2}(L^{p})\) decoupling \[\|f\|_{L^{p}(B_{R})}\lessapprox(\sum_{\gamma}\|f_{\gamma}\|_{L^{p}(w_{B_{R}}) }^{2})^{1/2},\] which is only true if \(\beta\leq 1/2\). The reason for the failure of this inequality when \(\beta>1/2\) is the fact that there are many (more precisely \(R^{\beta-\frac{1}{2}}\)) consecutive caps \(\beta\) inside an essentially rectangular/flat cap \(\tau\) with dimensions \((R^{-1/2},R^{-1})\). Parts of our forthcoming argument need a slightly stronger (via Holder's inequality) version of (4.2). This is a particular case of Corollary 5 in [13]. **Theorem 4.3** (\(l^{q}(L^{p})\) small cap decoupling, [13]).: _Assume \(f:\mathbb{R}^{2}\to\mathbb{C}\) has Fourier transform supported in \(\mathcal{N}_{1/R}(\Gamma)\). Then for \(4\leq p\leq\min(2+\frac{2}{\beta},6)\), \(\frac{1}{q}=1-\frac{3}{p}\) and each ball \(B_{R}\) we have_ \[\|f\|_{L^{p}(B_{R})}\lessapprox R^{\beta(\frac{1}{2}-\frac{1}{q})}(\sum_{ \gamma}\|f_{\gamma}\|_{L^{p}(w_{B_{R}})}^{q})^{1/q}. \tag{4.3}\] The upper bound \(p\leq\min(2+\frac{2}{\beta},6)\) is sharp in both theorems. The value \(2+\frac{2}{\beta}\) is called the critical exponent for small cap decoupling. In our applications of (4.2) and (4.3), \(\widehat{f}\) will be supported only on a small number \(N_{active}\) of the total number \(N_{total}\sim R^{\beta}\) of caps \(\gamma\). Then (4.2) gives \[\|f\|_{L^{p}(B_{R})}^{p}\lessapprox N_{total}^{\frac{p}{2}-1}N_{active}\max_{ \gamma}\|f_{\gamma}\|_{L^{p}(w_{B_{R}})}^{p}, \tag{4.4}\] while (4.3) leads to the more favorable \[\|f\|_{L^{p}(B_{R})}^{p}\lessapprox N_{total}^{3-\frac{p}{2}}N_{active}^{p-3} \max_{\gamma}\|f_{\gamma}\|_{L^{p}(w_{B_{R}})}^{p}. \tag{4.5}\] When \(\beta\leq 1/2\), we have an even stronger estimate, as a consequence of \(l^{2}(L^{p})\) decoupling \[\|f\|_{L^{p}(B_{R})}^{p}\lessapprox N_{active}^{p/2}\max_{\gamma}\|f_{\gamma}\| _{L^{p}(w_{B_{R}})}^{p}. \tag{4.6}\] We will work with \(\Gamma:[-1/2,1/2]\to\mathbb{R}\) given by \(\Gamma(\xi)=\sqrt{1-\xi^{2}}\). In fact, for reasons of symmetry, we may as well work with the full circle \(\mathbb{S}^{1}\). We rescale (4.4) and (4.5) to allow \(\widehat{f}\) to be supported on \(\mathcal{A}_{\lambda,\delta}\), for some \(\delta\leq 1\). If \(\gamma\) are now \((\lambda(\frac{\delta}{\lambda})^{\beta},\delta)\)-caps partitioning \(\mathcal{N}_{\delta}(\lambda S^{1})=\mathcal{A}_{\lambda,\delta}\), we find that if \(\widehat{f}\) is supported on \(\mathcal{A}_{\lambda,\delta}\) then (4.4), (4.5) and (4.6) hold with \(R\sim 1/\delta\). The main result we prove is as follows. It is important to note that the difficult range \(4<p<6\) does not follow by interpolating the easier endpoint cases \(p=4,6\). 
**Theorem 4.4** (Square root cancellation at the critical exponent).: _Assume \(p\in[4,6]\) and \(\delta=\lambda^{\frac{4}{p}-1}\). Then_ \[\|\Phi_{\lambda,\delta}\|_{L^{p}([0,1]^{2})}\lessapprox(\lambda\delta)^{1/2}.\] Proof.: Note first that \[\delta\geq\lambda^{-1/3}, \tag{4.7}\] so that in particular \(\delta^{-1}<\sqrt{\lambda\delta}\). We cover \(\mathcal{A}_{\lambda,\delta}\) with \((\delta^{-1}/100,\delta)\)-caps \(\eta\). This choice is important for two reasons. On the one hand, it has area smaller than \(1/2\), and this forces structure on the lattice points inside \(\eta\). On the other hand, the length scale \(\sim\delta^{-1}\) of \(\eta\) is the smallest for which we get \(L^{p}\) square root cancellation via small cap decoupling. We illustrate this in Case 1 of the following four-case argument. Decompose \[\Phi_{\lambda,\delta}=\sum_{\eta}\Phi_{\eta},\] with \(\Phi_{\eta}(x)=\sum_{k\in\eta\cap\mathbb{Z}^{2}}e^{2\pi ik\cdot x}\). Case 1. We apply (the rescaled version of) (4.4) to \(f=\sum_{\eta}\Phi_{\eta}\) (so \(\gamma=\eta\) and \(f_{\eta}=\Phi_{\eta}\)) and \(R\sim 1/\delta\), with the sum restricted to those \(\eta\) containing exactly one lattice point. Let us check that \(\eta\) has the desired length \(\sim\lambda(\frac{\delta}{\lambda})^{\beta}\), for some \(\beta\) satisfying \(p\leq 2+\frac{2}{\beta}\). Solving \(\frac{1}{\delta}=\lambda(\frac{\delta}{\lambda})^{\beta}\) and using that \(\delta=\lambda^{\frac{4}{p}-1}\), leads to \(\beta=\frac{2}{p-2}\). Thus, we apply (4.4) at the critical exponent. Note that \(N_{total}\sim\lambda\delta\). We allow for the possibility that \(N_{active}\) may be comparable to \(N_{total}\), see the comment a few lines below. Note also that \(\|f_{\eta}\|_{L^{p}(w_{B_{R}})}\sim R^{2/p}\). Using first the 1-periodicity of \(f\), then (4.4), we conclude with the desired bound \[\|\sum_{\eta}\Phi_{\eta}\|_{L^{p}([0,1]^{2})}^{p}\sim R^{-2}\|\sum_{\eta}f_{ \eta}\|_{L^{p}([0,R]^{2})}^{p}\lessapprox(\lambda\delta)^{p/2}.\] The argument just presented works when considering caps \(\eta\) with \(2^{s}\lessapprox 1\) points. The counting arguments that we will use next show that most lattice points in \(\mathcal{A}_{\lambda,\delta}\) are absorbed by such caps. In particular, for at least one of these small scales \(s\), we have that \(N_{active}\gtrapprox\lambda\delta\). This shows the sharpness of the argument, and motivates the use of small cap decoupling. In the remaining cases we restrict attention to those \(\eta\) containing at least two lattice points. All lattice points inside \(\eta\) need to sit on a line we call \(l_{\eta}\). This is because the area of the triangle determined by any three lattice points is half an integer, while the area of (the convex hull of) \(\eta\) is smaller than \(1/2\). Moreover, all lattice points on \(l_{\eta}\cap\eta\) must be equidistant, with separation of consecutive points of order \(\sim 2^{m}\), for some \(m\geq 0\). Pigeonholing at the expense of a \((\log R)^{2}\) loss, we may focus on those \(\eta\) corresponding to a fixed \(m\), and also containing \(\sim 2^{s}\) lattice points, for some fixed \(s\geq 0\). It follows that \[\text{length}(l_{\eta}\cap\eta)\sim 2^{s+m}\lesssim\delta^{-1}. \tag{4.8}\] Finally, call \(\alpha\) the angle between \(l_{\eta}\) and the long axis of \(\eta\). Case 2. We restrict attention to those \(\eta\) with \(\alpha\gg\text{ecc}(\eta)\sim\delta^{2}\). We call them _active_. It is worth pointing out that \(\eta\) is essentially a rectangle. 
In fact, each \(\eta\) sits inside a \(((\lambda\delta)^{1/2},\delta)\)-cap (as part of \(\mathcal{A}_{\lambda,\delta}\)) that is also essentially a rectangle. We will call such caps by the letter \(\tau\). The fact that \(\tau\) is longer than \(\eta\), \((\lambda\delta)^{1/2}\geq\delta^{-1}\), is a consequence of (4.7). Since \(\alpha\) is much bigger than the eccentricity of \(\eta\), the lines \(l_{\eta}\) need to be different for all active \(\eta\) inside a fixed \(\tau\). This will force some separation between any two consecutive active \(\eta_{1}\) and \(\eta_{2}\), as follows. Pick two lattice points \(P_{1},P_{2}\in l_{\eta_{1}}\cap\eta_{1}\) with \(\text{dist}(P_{1},P_{2})\sim 2^{m}\). Pick any point \(P_{3}\in(l_{\eta_{2}}\cap\eta_{2})\setminus l_{\eta_{1}}\). Let \(d=\text{dist}(P_{3},l_{\eta_{1}})\). Since the area of \(\triangle P_{1}P_{2}P_{3}\) is at least \(1/2\), we find that \[d2^{m}\gtrsim 1. \tag{4.9}\] Since \(\alpha\gg\text{ecc}(\eta)\sim\delta^{2}\), we have that \[\alpha\sim\sin\alpha\sim\frac{\delta}{2^{s+m}}. \tag{4.10}\] On the other hand, \[\alpha\sim\sin\alpha\sim\frac{d}{|P_{1}P_{3}|}. \tag{4.11}\] Combining these two and using that \(|P_{1}P_{3}|\sim\operatorname{dist}(\eta_{1},\eta_{2})\) shows that \[\operatorname{dist}(\eta_{1},\eta_{2})\sim d\delta^{-1}2^{s+m}.\] When combined with (4.9), this leads to \(\operatorname{dist}(\eta_{1},\eta_{2})\gtrsim 2^{s}\delta^{-1}\). This suggests partitioning each \(\tau\) into \((2^{s}\delta^{-1},\delta)\)-caps \(\theta\). Each \(\eta\) will sit inside some \(\theta\), and each \(\theta\) contains at most one active \(\eta\). We call \(\theta\) active if it contains some active \(\eta\). Let now \[f=\sum_{\eta:\;active}\Phi_{\eta},\] \[f_{\theta}=\sum_{\eta\subset\theta:\;active}\Phi_{\eta}.\] We apply (the rescaled version of) (4.4) to \(f\), with the caps \(\gamma\) being the active \(\theta^{\prime}s\). We need to check that the length \(2^{s}\delta^{-1}\) of \(\theta\) may be written as \(\lambda(\frac{\delta}{\lambda})^{\beta}\), for some \(\beta\) satisfying \(p\leq\min(2+\frac{2}{\beta},6)\). However, this is immediate, since we have observed earlier that \(\delta^{-1}=\lambda(\frac{\delta}{\lambda})^{\frac{2}{p-2}}\). The small cap in this case is getting longer than in the previous case. Periodicity and (4.4) with \(R=1/\delta\) gives \[\|\sum_{\eta:\;active}\Phi_{\eta}\|_{L^{p}([0,1]^{2})}^{p}\sim R^{-2}\|\sum_{ \theta:\;active}f_{\theta}\|_{L^{p}([0,R]^{2})}^{p}\lessapprox R^{-2}N_{ total}^{\frac{p}{2}-1}N_{active}\max_{\theta:active}\|f_{\theta}\|_{L^{p}(w_{B_{R} })}^{p}.\] Note that there are \(N_{total}\sim\lambda/(2^{s}\delta^{-1})\) caps \(\theta\). Here is how we evaluate the number \(N_{active}\) of active \(\theta\). A \(\tau\) containing at least one active \(\theta\) will itself be called active. Each active \(\tau\) contains \(\lesssim\delta(\lambda\delta)^{1/2}/2^{s}\) active \(\theta\), and by (2.1), the number of active \(\tau\) is \(\lesssim(\lambda\delta)^{1/2}2^{m-s}\). 
We conclude that \[N_{active}\lesssim\frac{\delta(\lambda\delta)^{1/2}}{2^{s}}\frac{(\lambda\delta)^{1/2}2^{m}}{2^{s}}=\lambda\delta^{2}2^{m}2^{-2s}.\] Since each \(\theta\) contains \(\sim 2^{s}\) points, we have the trivial sharp estimate \[\|f_{\theta}\|_{L^{p}(w_{B_{R}})}^{p}\lesssim\|f_{\theta}\|_{L^{\infty}}^{p-2}\|f_{\theta}\|_{L^{2}(w_{B_{R}})}^{2}\lesssim 2^{s(p-2)}\,2^{s}R^{2}=2^{s(p-1)}R^{2}.\] Putting things together, we conclude with the desired bound \[\|\sum_{\eta:\;active}\Phi_{\eta}\|_{L^{p}([0,1]^{2})}^{p}\lessapprox(\frac{\lambda\delta}{2^{s}})^{\frac{p}{2}-1}\lambda\delta^{2}2^{m}2^{-2s}2^{s(p-1)}=(\lambda\delta)^{\frac{p}{2}}(2^{m+s}\delta)2^{s(\frac{p}{2}-3)}\lesssim(\lambda\delta)^{\frac{p}{2}},\] where the last inequality follows from (4.8) and the fact that \(p\leq 6\). Case 3. We now restrict attention to those \(\eta\) with \((\delta/\lambda)^{1/2}\sim\operatorname{ecc}(\tau)\lesssim\alpha\lesssim\operatorname{ecc}(\eta)\sim\delta^{2}\). In particular, we assume \(\alpha\sim 2^{-t}\delta^{2}\), for some fixed \(t\geq 0\). The line \(l_{\eta}\) is now the same for \(\lesssim 2^{t}\) consecutive active \(\eta\). We cover each group of such \(\eta\) with a \((2^{t}\delta^{-1},\delta)\)-cap \(\sigma\), and call the common line \(l_{\sigma}\). We call \(\sigma\) active. Since \(l_{\sigma}\) crosses at least one \(\eta\), we have that \(2^{m+s}\sim\delta^{-1}\). Next, we prove separation between consecutive active \(\sigma_{1}\) and \(\sigma_{2}\), using the argument from Case 2. Pick two lattice points \(P_{1},P_{2}\) in \(\sigma_{1}\cap l_{\sigma_{1}}\) with \(\operatorname{dist}(P_{1},P_{2})\sim 2^{m}\) and pick a lattice point \(P_{3}\in(\sigma_{2}\cap l_{\sigma_{2}})\setminus l_{\sigma_{1}}\). Letting \(d=\operatorname{dist}(P_{3},l_{\sigma_{1}})\), we find as before that \(d2^{m}\gtrsim 1\). Also as before, \[2^{-t}\delta^{2}\sim\alpha\sim\frac{d}{\operatorname{dist}(\sigma_{1},\sigma_{2})},\] so \[\text{dist}(\sigma_{1},\sigma_{2})\sim d2^{t}\delta^{-2}\sim d2^{m}2^{s+t}\delta^{-1}\gtrsim 2^{s+t}\delta^{-1}.\] We cover each \(\tau\) with \((2^{s+t}\delta^{-1},\delta)\)-caps \(\theta\). Each \(\theta\) contains at most one active \(\sigma\), and is contained in a unique \(\tau\). We call \(\theta\) active if it contains some active \(\sigma\), and we also call active the \(\tau\) containing such \(\theta\). We decouple into caps \(\theta\). Small cap decoupling is applicable, as \(\theta\) is even longer than in Case 2. Note first that \[N_{total}\sim\lambda\delta 2^{-s-t}.\] There are \(\lesssim(\lambda\delta)^{1/2}2^{m-s-t}\sim 2^{2m}(\lambda\delta)^{1/2}\delta 2^{-t}\) active \(\tau\), each containing \(\lesssim\frac{\delta(\lambda\delta)^{1/2}}{2^{s+t}}\) active \(\theta\). Thus the number of active \(\theta\) satisfies \[N_{active}\lesssim\lambda\delta^{3}2^{2m-2t-s}.\] Since \(\theta\) now contains \(\lesssim 2^{t+s}\) points, we have as before \[\|f_{\theta}\|_{L^{p}(w_{B_{R}})}^{p}\lesssim\|f_{\theta}\|_{L^{\infty}}^{p-2}\|f_{\theta}\|_{L^{2}(w_{B_{R}})}^{2}\lesssim 2^{(s+t)(p-2)}\,2^{s+t}R^{2}=2^{(s+t)(p-1)}R^{2}.\] We first make the point that (4.4) is not strong enough in this case, as it leads to \[\|\sum_{\eta:\;active}\Phi_{\eta}\|_{L^{p}([0,1]^{2})}^{p} \sim R^{-2}\|\sum_{\theta:\;active}f_{\theta}\|_{L^{p}([0,R]^{2})}^{p}\] \[\lessapprox R^{-2}N_{total}^{\frac{p}{2}-1}N_{active}\max_{\theta:active}\|f_{\theta}\|_{L^{p}(w_{B_{R}})}^{p}\] \[\lesssim(\lambda\delta)^{p/2}(2^{m+s}\delta)^{2}2^{s(\frac{p}{2}-3)}2^{t(\frac{p}{2}-2)}\] \[\sim(\lambda\delta)^{p/2}2^{s(\frac{p}{2}-3)}2^{t(\frac{p}{2}-2)}.\]
Figure 1. Lattice points and caps inside \(\tau\).
When \(p>4\), we cannot force this to be \(\lesssim(\lambda\delta)^{p/2}\), as \(t\) may be much larger than \(s\). However, using instead (4.5) we find the desired upper bound \[\|\sum_{\eta:\;active}\Phi_{\eta}\|_{L^{p}([0,1]^{2})}^{p} \sim R^{-2}\|\sum_{\theta:\;active}f_{\theta}\|_{L^{p}([0,R]^{2})}^{p}\] \[\lessapprox R^{-2}N_{total}^{3-\frac{p}{2}}N_{active}^{p-3}\max_{\theta:active}\|f_{\theta}\|_{L^{p}(w_{B_{R}})}^{p}\] \[\lesssim(\lambda\delta)^{p/2}2^{-(s+t)(3-\frac{p}{2})}(\delta^{2}2^{2m-2t-s})^{p-3}2^{(s+t)(p-1)}\] \[\lesssim(\lambda\delta)^{p/2}(\delta^{2}2^{2m}2^{s})^{p-3}2^{s(2-\frac{p}{2})}2^{t(2-\frac{p}{2})}\] \[\sim(\lambda\delta)^{p/2}2^{s(5-\frac{3p}{2})}2^{t(2-\frac{p}{2})}.\] This is \(\lesssim(\lambda\delta)^{p/2}\) if \(p\geq 4\). Case 4. When \(\alpha\ll\operatorname{ecc}(\tau)\), the points on distinct active \(\tau\) are aligned in distinct directions. Thus, there can only be \(O(2^{2m})\) active \(\tau\). We have that \(2^{s+m}\sim\delta^{-1}\), and each \(\tau\) contains \(\lesssim 2^{s}(\lambda\delta)^{1/2}\delta\) points. The desired bound follows from the \(l^{2}(L^{p})\) decoupling (4.6) into caps \(\gamma=\tau\) \[\|\sum_{\eta:\;active}\Phi_{\eta}\|_{L^{p}([0,1]^{2})}^{p} \sim R^{-2}\|\sum_{\tau:\;active}f_{\tau}\|_{L^{p}([0,R]^{2})}^{p}\] \[\lessapprox R^{-2}N_{active}^{p/2}\max_{\tau:active}\|f_{\tau}\|_{L^{p}(w_{B_{R}})}^{p}\] \[\lesssim 2^{mp}(2^{s}(\lambda\delta)^{1/2}\delta)^{p-1}\] \[\sim(\lambda\delta)^{p/2}\frac{\delta^{-1}}{2^{s}\sqrt{\lambda\delta}}.\] This is \(\lesssim(\lambda\delta)^{p/2}\) due to (4.7). We may now prove Conjecture B in the range \(\delta\geq\lambda^{-1/3}\). **Corollary 4.5**.: _Assume \(1\geq\delta\geq\lambda^{-1/3}\). Then Conjecture B holds for all \(p\)._ Proof.: There is \(p_{\delta}\in[4,6]\) such that \(\delta=\lambda^{\frac{4}{p_{\delta}}-1}\). If \(p\leq p_{\delta}\), the result follows from Theorem 4.4 and Holder's inequality. When \(p>p_{\delta}\), it follows from the same theorem and the \(L^{\infty}\) bound \[\int|\Phi_{\lambda,\delta}|^{p}\leq\int|\Phi_{\lambda,\delta}|^{p_{\delta}}(\lambda\delta)^{p-p_{\delta}}\lessapprox(\lambda\delta)^{p_{\delta}/2}(\lambda\delta)^{p-p_{\delta}}=\frac{(\lambda\delta)^{p}}{\lambda^{2}}.\] A simple application of \(l^{2}(L^{6})\) gives the following. **Proposition 4.6**.: _Conjecture B holds for \(p\leq 6\) and \(\delta\leq\lambda^{-1/3}\)._ Proof.: We may assume \(\delta\ll\lambda^{-1/3}\). The case \(p<6\) follows from \(p=6\) and Holder's inequality. When \(p=6\) we use \(l^{2}(L^{6})\) decoupling (4.6), into \(((\lambda\delta)^{1/2},\delta)\)-caps \(\tau\). Their volume is \(\leq 1/2\), so lattice points in \(\tau\) are contained in a line. Reasoning as before, there are \(N_{active}\lessapprox\frac{\lambda\delta}{2^{2s}}\) caps \(\tau\) with \(\sim 2^{s}\) points. Then (4.6) gives \[\int|\Phi_{\lambda,\delta}|^{6}\lessapprox\max_{s\geq 0}(N_{active})^{3}2^{5s}\lessapprox\max_{s\geq 0}2^{-s}(\lambda\delta)^{3}\sim(\lambda\delta)^{3}.\] _Remark 4.7_.: A small improvement over the results presented here, which would reach the region \(\delta<\lambda^{-\frac{1}{3}}\), \(p>6\) for Conjecture B, is possible with the help of bounds on exponential sums. We will not pursue this approach, in order to avoid further technical details.
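None of this is needed for the proofs above, but Theorem 4.4 is easy to test numerically at small scales: on the torus, \(\Phi_{\lambda,\delta}\) can be evaluated exactly on an \(N\times N\) grid by an inverse FFT of the indicator of \(\mathcal{A}^{\prime}_{\lambda,\delta}\), and the grid average of \(|\Phi_{\lambda,\delta}|^{p}\) approximates \(\|\Phi_{\lambda,\delta}\|_{L^{p}}^{p}\). The sketch below is ours (the function name is illustrative); the oversampling factor should be increased with \(p\) to keep the Riemann sum accurate.

```python
import numpy as np

def kernel_Lp_norm(lam, delta, p, oversample=6):
    """Approximate ||Phi_{lambda,delta}||_{L^p(T^2)}.

    Phi(x) = sum_{k in A'_{lambda,delta}} e^{2 pi i k.x} is sampled on an N x N grid
    via an inverse FFT of the 0/1 indicator of the frequencies in the annulus.
    """
    N = int(oversample * (lam + delta)) + 1
    freqs = np.fft.fftfreq(N, d=1.0 / N)            # integer frequencies
    KX, KY = np.meshgrid(freqs, freqs, indexing="ij")
    norms = np.hypot(KX, KY)
    indicator = ((norms > lam - delta) & (norms < lam + delta)).astype(complex)
    Phi = np.fft.ifft2(indicator) * N * N           # undo the 1/N^2 normalisation
    return np.mean(np.abs(Phi) ** p) ** (1.0 / p), int(indicator.real.sum())

lam = 200.0
for p in (4.0, 5.0, 6.0):
    delta = lam ** (4.0 / p - 1.0)                  # the critical scale of Theorem 4.4
    norm, npts = kernel_Lp_norm(lam, delta, p)
    print(f"p={p}: ||Phi||_p ~ {norm:.1f}, (lam*delta)^(1/2) = {np.sqrt(lam * delta):.1f}, #points = {npts}")
```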
Theorem 4.4 and thus also Corollary 4.5 continue to hold true (via the same argument) if \(\mathcal{A}_{\lambda,\delta}\) are the lattice points in the \(\delta\)-neighborhood of \(\lambda\Gamma\), where \(\Gamma\) is any curve satisfying (4.1). However, Proposition 4.6 needs the fact that \(\mathcal{A}_{\lambda,\delta}\) contains \(\lesssim\lambda\delta\) lattice points. Its proof relies on this bound in order to guarantee that \(N_{active}\lesssim\lambda\delta\) for the caps \(\tau\) containing only one point. For caps with at least two points, the upper bound \(\lesssim\lambda\delta/2^{2s}\) remains true for arbitrary \(\Gamma\) as in (4.1), via the same geometric argument, which only exploits curvature. The fact that this inequality is also true for caps with one point in the case of \(S^{1}\) is a consequence of Lemma 2.2. _Remark 4.8_.: For certain \(\Gamma\), the analog of the set \(\mathcal{A}_{\lambda,\delta}\) may contain significantly more lattice points than \(\lambda\delta\), when \(\delta\) is significantly smaller than \(\lambda^{-1/3}\). One such example is the parabola \(\Gamma_{\mathbb{P}_{1}}(\xi)=\xi^{2}\). If \(\lambda^{1/2}=n\) is an integer, then \(\lambda\Gamma_{\mathbb{P}_{1}}\) contains \(\sim\lambda^{1/2}\) lattice points \((nl,l^{2}),|l|\leq n\), far more than \(\lambda\delta\) when \(\delta\ll\lambda^{-1/2}\) (this is as much as possible for a general curve, up to a subpolynomial factor, by [3]). In the case of the parabola, these points are arranged along an arithmetic progression in the horizontal direction, which leads to constructive interference on a large set: note that \[\left|\sum_{|l|\leq n}e(lnx_{1}+l^{2}x_{2})\right|\sim n\] if \(x_{1}\in\cup_{j\leq n}[j/n,j/n+1/10n^{2}],\;|x_{2}|\leq 1/10n^{2}\). Thus \[\int_{[0,1]^{2}}\left|\sum_{|l|\leq n}e(lnx_{1}+l^{2}x_{2})\right|^{p}dx_{1}dx_{2}\gtrsim\lambda^{\frac{p}{2}-\frac{3}{2}}. \tag{4.12}\] Recall that the generalized projection operator and its kernel are defined by \[\widetilde{P_{\lambda,\delta}}f=\sum_{k\in\mathcal{N}_{\delta}(\lambda\Gamma)\cap\mathbb{Z}^{2}}\widehat{f}_{k}e^{2\pi ik\cdot x}\quad\text{and}\quad\widetilde{\Phi_{\lambda,\delta}}(x)=\sum_{k\in\mathcal{N}_{\delta}(\lambda\Gamma)\cap\mathbb{Z}^{2}}e^{2\pi ik\cdot x}.\] From the inequality (4.12), it follows that \[\|\widetilde{P_{\lambda,\delta}}\|_{L^{2}\to L^{p}}\gtrsim\lambda^{\frac{1}{4}-\frac{3}{2p}}\quad\text{and}\quad\|\widetilde{\Phi_{\lambda,\delta}}\|_{L^{p}}\gtrsim\lambda^{\frac{1}{2}-\frac{3}{2p}},\] which shows that conjectures A and B (for the latter, at least when \(p\) is even) do not apply to the parabola for small enough \(\delta\). We refrain from making a precise conjecture for the parabola, as there might be additional examples. ## 5. Graphical representation Figure 2 represents in the coordinates \((\frac{1}{p},-\frac{\log\delta}{\log\lambda})\) the different regions where Conjecture A is verified. The vertical coordinate gives the size of \(\delta\), which decreases, making the problem harder, as one goes up in the picture; for the bottom line \(\delta=1\), the conjecture corresponds to Sogge's theorem. The horizontal coordinate gives the size of \(p\); if \(p\leq 10\) or \(p=\infty\), the conjecture is settled (with \(\epsilon\) loss), but it appears that intermediate values of \(p\) are harder to understand.
If the conjecture holds at a given point in the above diagram, then it is also true on a whole region depending on that point. These implications are summarized in the following lemma. **Lemma 5.1**.: 1. _If Conjecture A is satisfied at a point_ \((\frac{1}{p_{0}},\alpha_{0})\) _below the red curve, consider the rectangle with that point as its top right vertex. Then the conjecture holds at any point in that rectangle._ 2. _If Conjecture A is satisfied at a point_ \((\frac{1}{p_{0}},\alpha_{0})\) _above the red curve, then it holds to the right of this point, that is on the segment joining this point to_ \((\frac{1}{2},\alpha_{0})\)_._ Proof.: \((i)\) We will show that the conjecture holds at points \((\frac{1}{p},\alpha_{0})\), with \(p>p_{0}\), and also at points \((\frac{1}{p},\alpha)\), with \(\alpha<\alpha_{0}\); this will prove the assertion. * If \(p>p_{0}\), the Bernstein inequality gives \[\|P_{\lambda,\delta}f\|_{L^{p}}\lesssim\lambda^{\frac{2}{p_{0}}-\frac{2}{p}}\|P_{\lambda,\delta}f\|_{L^{p_{0}}}\lesssim\lambda^{\frac{2}{p_{0}}-\frac{2}{p}}\lambda^{\frac{1}{2}-\frac{2}{p_{0}}}\delta^{\frac{1}{2}}\|f\|_{L^{2}}\lesssim\lambda^{\frac{1}{2}-\frac{2}{p}}\delta^{\frac{1}{2}}\|f\|_{L^{2}}.\] * To deal with the case \(\alpha<\alpha_{0}\), we observe first that the classical \(TT^{*}\) argument shows that \(\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}^{2}=\|P_{\lambda,\delta}\|_{L^{p^{\prime}}\to L^{p}}\). Assume now that \[\|P_{\lambda_{0},\delta_{0}}\|_{L^{2}\to L^{p_{0}}}=C_{0}\lambda_{0}^{\frac{1}{2}-\frac{2}{p_{0}}}\delta_{0}^{\frac{1}{2}}\] for constants \((\lambda_{0},\delta_{0},p_{0},C_{0})\), and consider \(\delta>\delta_{0}\). Then the interval \((\lambda-\delta,\lambda+\delta)\) can be covered by \(O(\delta/\delta_{0})\) disjoint intervals \((I_{j})\) of length \(\delta_{0}\) and \[\|P_{\lambda_{0},\delta}\|_{L^{2}\to L^{p_{0}}}^{2}\leq\Big{\|}\sum_{j}P_{I_{j}}\Big{\|}_{L^{2}\to L^{p_{0}}}^{2}=\Big{\|}\sum_{j}P_{I_{j}}\Big{\|}_{L^{p^{\prime}_{0}}\to L^{p_{0}}}\leq\sum_{j}\big{\|}P_{I_{j}}\big{\|}_{L^{p^{\prime}_{0}}\to L^{p_{0}}}\lesssim\frac{\delta}{\delta_{0}}\lambda_{0}^{1-\frac{4}{p_{0}}}\delta_{0}\lesssim\lambda_{0}^{1-\frac{4}{p_{0}}}\delta.\] \((ii)\) is a consequence of interpolation, and of the trivial bound \(\|P_{\lambda,\delta}\|_{L^{2}\to L^{2}}\lesssim 1\): if \(2<p<p_{0}\), choosing \(\theta\) such that \(\frac{1}{p}-\frac{1}{2}=\theta\left(\frac{1}{p_{0}}-\frac{1}{2}\right)\), \[\|P_{\lambda,\delta}\|_{L^{2}\to L^{p}}\lesssim\|P_{\lambda,\delta}\|_{L^{2}\to L^{2}}^{1-\theta}\|P_{\lambda,\delta}\|_{L^{2}\to L^{p_{0}}}^{\theta}\lesssim(\lambda\delta)^{\theta\left(\frac{1}{4}-\frac{1}{2p_{0}}\right)}=(\lambda\delta)^{\frac{1}{4}-\frac{1}{2p}}.\] We now turn to Conjecture B; Figure 3 depicts the different regions where it is verified. Furthermore, if the conjecture holds at a given point in this diagram, then it follows on a region depending on that point. Such implications are summarized in the following lemma. **Lemma 5.2**.: 1. _If Conjecture B is satisfied at a point_ \((\frac{1}{p_{0}},\alpha_{0})\) _below the red curve, consider the rectangle with that point as its top right vertex. Then the conjecture holds at any point in that rectangle._ 2. _If Conjecture B is satisfied with_ \(\epsilon\) _loss at a point_ \((\frac{1}{p_{0}},\alpha_{0})\) _above the red curve, then it holds with_ \(\epsilon\) _loss to the right of this point, that is on the segment joining this point to_ \((\frac{1}{2},\alpha_{0})\)_._ Proof.: This is very similar to the proof of Lemma 5.1.
For \((i)\), we use Bernstein's inequality and the fact that, if \(\delta_{0}<\delta\), then \(\Phi_{\lambda,\delta}\) can be written as the sum of \(O(\frac{\delta}{\delta_{0}})\) functions of the type \(\Phi_{\lambda,\delta_{0}}\). For \((ii)\), it suffices to interpolate with the trivial bound \(\|\Phi_{\lambda,\delta}\|_{L^{2}}\lesssim\lambda^{\epsilon}(\lambda\delta)^{ \frac{1}{2}}\).
2301.10047
DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model
Speech-driven gesture synthesis is a field of growing interest in virtual human creation. However, a critical challenge is the inherent intricate one-to-many mapping between speech and gestures. Previous studies have explored and achieved significant progress with generative models. Notwithstanding, most synthetic gestures are still vastly less natural. This paper presents DiffMotion, a novel speech-driven gesture synthesis architecture based on diffusion models. The model comprises an autoregressive temporal encoder and a denoising diffusion probability Module. The encoder extracts the temporal context of the speech input and historical gestures. The diffusion module learns a parameterized Markov chain to gradually convert a simple distribution into a complex distribution and generates the gestures according to the accompanied speech. Compared with baselines, objective and subjective evaluations confirm that our approach can produce natural and diverse gesticulation and demonstrate the benefits of diffusion-based models on speech-driven gesture synthesis.
Fan Zhang, Naye Ji, Fuxing Gao, Yongping Li
2023-01-24T14:44:03Z
http://arxiv.org/abs/2301.10047v2
# DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model ###### Abstract Speech-driven gesture synthesis is a field of growing interest in virtual human creation. However, a critical challenge is the inherent intricate one-to-many mapping between speech and gestures. Previous studies have explored and achieved significant progress with generative models. Notwithstanding, most synthetic gestures are still vastly less natural. This paper presents _DiffMotion_, a novel speech-driven gesture synthesis architecture based on diffusion models. The model comprises an autoregressive temporal encoder and a denoising diffusion probabilistic module. The encoder extracts the temporal context of the speech input and historical gestures. The diffusion module learns a parameterized Markov chain to gradually convert a simple distribution into a complex distribution and generates the gestures according to the accompanying speech. Compared with baselines, objective and subjective evaluations confirm that our approach can produce natural and diverse gesticulation and demonstrate the benefits of diffusion-based models on speech-driven gesture synthesis. Project page: [https://zf223669.github.io/DiffMotionWebsite/](https://zf223669.github.io/DiffMotionWebsite/).
Figure 1: Random samples from DiffMotion can give many distinct yet natural output gestures within and between sequences, even if the input speech audio is the same.
Keywords: Gesture generation, Gesture synthesis, Cross-Modal, Speech-driven, Diffusion model. ## 1 Introduction Recently, 3D virtual human technology has become increasingly popular with the rise of the metaverse. To provide natural and relatable characters, one main task is to produce non-verbal (co-speech) gestures that look natural and match the accompanying speech, as humans do when communicating. Although motion-capture systems can meet this need, they require dedicated hardware, space, and actors, which can be expensive. Automatic generation is the cheapest way to produce gestures, since it requires no human effort at production time, and speech-driven gesture generation is a promising approach. However, the primary challenge for generating relevant and well-timed gestures from input speech is the inherent cross-modal one-to-many mapping between speech and gesture: the connection is difficult to model because the same utterance is often accompanied by significantly different gestures at different times, both for the same speaker and across speakers[16]. Previous rule-based and deterministic deep-learning methods assume a one-to-one mapping and therefore struggle with this task: the former, limited to the provided gesture units, produce repetitive movements, while the latter, trained by minimizing a mean squared error, collapse towards the mean pose. Research has therefore shifted to probabilistic generative models (such as GANs, VAEs, and normalizing flows). Despite that, most synthetic gestures are still significantly less natural than the original motion-capture data[31]. Diffusion models, which can generate high-quality and diverse samples, have shown impressive results on various generation tasks. Nevertheless, they have so far gained little attention in speech-driven gesture synthesis. This paper proposes **DiffMotion**, a novel diffusion-based probabilistic architecture for speech-driven gesture generation. The model learns on sizable sets of unstructured gesture data with zero manual annotation.
Furthermore, as shown in Fig.1, our method can estimate natural, various gestures, even those not present in the dataset. Our contributions are as follows: 1. We propose DiffMotion, the first instance of the Diffusion-based generative model, to solve the cross-modal speech-driven gesture synthesis. 2. We innovatively integrated an Autoregressive Temporal Encoder and a Denoising Diffusion Probabilistic Module, which can learn the complex regularities between gestures and the accompanying speech and generate realistic, various motions which match the rhythm of the speech. 3. Experiments show that DiffMotion outperforms state-of-the-art baselines, objectively and subjectively. ## 2 Related Work Due to the research now shifting from _deterministic_ to _generative_ model, we only discuss the novel approaches for speech-driven gesture generation briefly. Ylva et al.[29] introduced GANs[10] for converting speech to 3D gesture motion using multiple discriminators. Though this approach improves significantly concerning standard regression loss training, the dataset needs to be hand-annotated. Further, they admitted the generated motions lacked realism and discontinuity because they assumed it was a gesture phase classification task. Impressively, owning to the probabilistic generative models, called _Normalizing Flows[5][6]_, which can tackle the exact log-likelihood and latent-variable inference, Alexanderson et al. [24]constructed a Glow-based[7, 18] network derived from normalizing flows and RNN that successfully modeled the conditional probability distribution of the gestures given speech as input and obtained various motions given the same speech signal. Further on, Taylor et al. [23] extended normalizing flows by combining variational autoencoder, demonstrating that the approach can produce expressive body motion close to the ground truth using a fraction of the trainable parameters. Though normalizing flows are powerful enough to capture high-dimensional complexity yet still trainable, these methods require imposing topological constraints on the transformation[5, 6, 32]. _Diffusion_ models(A survey in [27]), the more flexible architectures, use parameterized Markov chain to convert as simple distribution into complex data distribution gradually and can be efficiently trained by optimizing the Variational Lower Bound. After overtaking GAN on image synthesis[4, 9], the diffusion model has shown remarkable impressive results on various generation tasks, such as computer vision[15], and natural language processing[1]. In particular, the diffusion model has demonstrated impressive results in multi-modal modeling[2, 33], and time series forecasting [22], which owns the enormous prospect in cross-model sequence-to-sequence modeling and generation. However, it has yet earned little attention in speech-driven gesture synthesis tasks. Inspired by the discussions above, we design to construct a diffusion-based architecture to explore the capability of the diffusion model in speech-driven gesture generation. ## 3 Our Approach We present **DiffMotion** for cross-modal speech-driven gesture synthesis. The model aims to generate gesticulations conditioned on speech features and historical gestures. In this section, we first formulate the problem in Sec.3.1 and then elaborate on the DiffMotion architecture in Sec.3.2. Finally, the training and inference process is described in Sec.3.3 and Sec.3.4. 
### Problem Formulation We denote the gesture features and the acoustic signal as \(x_{t}^{0}\in[x_{1}^{0},...,x_{t}^{0},...,x_{T}^{0}]\) and \(c_{t}\in[c_{1},...,c_{t},...,c_{T}]\), where \(x_{t}^{0}=\mathbb{R}^{D}\) is 3D skeleton joints angle at frame \(t\), and \(D\) indicates the number of channels of the skeleton joints. \(c_{t}\) is the current acoustic subsequence signal constructed by the sub-sequence excerpted from speech acoustic feature sequence \([a_{1},...,a_{T}]\), and \(T\) is the sequence length. Let \(p_{\theta}(\cdot)\) denote the Probability Density Function(PDF), which aims to approximate the actual gesture data distribution \(p(\cdot)\) and allows for easy sampling. Given \(t^{\prime}\) as the past frame before current frame \(t\), and \(\tau\) as the past window frames and the initial motion \(x_{1:\tau}^{0}\), we are tasked with generating the pose \(x_{t}^{0}\sim p_{\theta}(\cdot)\) frame by frame according to its conditional probability distribution given historical poses \(x_{t^{\prime}-\tau:t^{\prime}-1}^{0}\) and acoustic signal \(c_{t}\) as covariate: \[\begin{split} x_{t}^{0}\sim p_{\theta}\left(x_{t}^{0}|x_{t-\tau:t -1}^{0},c_{t}\right)&\approx p(\cdot):=p\left(x_{t}^{0}|x_{t- \tau:t-1}^{0},c_{t}\right)\\ &:=p\left(x_{1:\tau}^{0}\right)\cdot\prod\nolimits_{t^{\prime}= \tau+1}^{t}p\left(x_{t^{\prime}}^{0}|x_{t^{\prime}-\tau:t^{\prime}-1}^{0},c_ {t^{\prime}}\right)\end{split} \tag{1}\] The autoregressive temporal encoder extracts the conditional information, and the \(p_{\theta}(\cdot)\) aims to approximate \(p(\cdot)\) that is trained by denoising diffusion module. We discuss these two modules in detail in Sec.3.2. ### DiffMotion Architecture DiffMotion architecture consists of two modules: Autoregressive Temporal Encoder (AT-Encoder) and Denoising Diffusion Probabilistic Module(DDPM). The whole architecture is shown in Fig.2. **Autoregressive Temporal Encoder(AT-Encoder).** Multi-layer LSTM is adopted for AT-Encoder to encode the temporal context of speech acoustic features and past poses up to frame \(t-1\) via updated hidden state \(h_{t-1}\) : \[h_{t-1}=g\left(x_{t-\tau:t-1}^{0},c_{t},h_{t-2}\right)=\text{LSTM}_{\theta} \left(\text{concatenate}\left(\text{x}_{t-\tau:t-1}^{0},\text{c}_{t}\right), \text{h}_{t-2}\right), \tag{2}\] where \(h_{t}\) represents the LSTM hidden state evolution. \(\text{LSTM}_{\theta}\) is parameterized by sharing weights \(\theta\) and \(h_{0}=0\). Then, we use a sequence of neutral(mean) poses for the initial motion \(x_{1:\tau}\). Thus we can approximate Eq.(1) by the _Motion Diffusion_ model, and its negative log-likelihood(NLL) is : \[p_{\theta}\left(x_{1:\tau}^{0}\right)\cdot\prod\nolimits_{t^{\prime}=\tau+1}^ {t}p_{\theta}\left(x_{t^{\prime}}^{0}|h_{t^{\prime}-1}\right),\quad NLL:= \Sigma_{t^{\prime}=\tau+1}^{t}-\log p_{\theta}\left(x_{t^{\prime}}^{0}|h_{t^{ \prime}-1}\right) \tag{3}\] **Denoising Diffusion Probabilistic Module(DDPM).** The DDPM is a latent variable model[25] of the form \(p_{\theta}:=\int p_{\theta}\left(x^{0:N}\right)dx^{1:N}\), where \(x^{1},...,x^{N}\) are latent of the same dimensionality as the data \(x^{n}\) at the \(n\)-th diffusion time stage. The module contains two processes, namely _diffusion process_ and _generation process_. At training time, the diffusion process gradually converts the original data(\(x^{0}\)) to white noise(\(x^{N}\)) by optimizing a variational bound on the data likelihood. 
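Before the forward and reverse processes are written out in full, the sketch below gives a concrete reading of Eq. (2): one way the AT-Encoder that produces the conditioning state \(h_{t-1}\) could look in PyTorch. Only the 2-layer, 512-unit LSTM and the 45-channel joint-angle / 27-channel MFPS feature sizes are taken from the paper (Secs. 3.2, 4.1 and 4.2); the per-frame concatenation, class name and tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ATEncoder(nn.Module):
    """Sketch of the autoregressive temporal encoder of Eq. (2).

    Past poses x^0_{t-tau:t-1} and the acoustic window c_t are concatenated
    frame by frame (an assumption; the paper only states that the two streams
    are concatenated) and fed to a 2-layer LSTM. The last output plays the
    role of the conditioning state h_{t-1}.
    """

    def __init__(self, pose_dim=45, audio_dim=27, hidden=512, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim + audio_dim, hidden,
                            num_layers=layers, batch_first=True)

    def forward(self, past_poses, audio_window, state=None):
        # past_poses:   (batch, tau, pose_dim)
        # audio_window: (batch, tau, audio_dim), aligned with past_poses
        x = torch.cat([past_poses, audio_window], dim=-1)
        out, state = self.lstm(x, state)
        return out[:, -1], state        # h_{t-1} and the LSTM state

# toy check of the shapes
enc = ATEncoder()
h_prev, _ = enc(torch.zeros(4, 10, 45), torch.zeros(4, 10, 27))
print(h_prev.shape)                     # torch.Size([4, 512])
```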
At inference time, the generation process recovers the data by reversing this noising process through the Markov chain using Langevin sampling[19]. The gesture can be generated by sampling from the conditional data distribution at each frame and is then fed back to the AT-Encoder to produce the next frame. The Markov chains in the diffusion process and the generation process are: \[\begin{split}& p\left(x^{n}|x^{0}\right)=\mathcal{N}\left(x^{n};\sqrt{\overline{\alpha}^{n}}x^{0},\left(1-\overline{\alpha}^{n}\right)I\right)\quad\text{and}\\ & p_{\theta}\left(x^{n-1}|x^{n},x^{0}\right)=\mathcal{N}\left(x^{n-1};\tilde{\mu}^{n}\left(x^{n},x^{0}\right),\tilde{\beta}^{n}I\right),\end{split} \tag{4}\] where \(\alpha^{n}:=1-\beta^{n}\) and \(\overline{\alpha}^{n}:=\prod_{i=1}^{n}\alpha^{i}\). As shown by [9], \(\beta^{n}\) is an increasing variance schedule \(\beta^{1},...,\beta^{N}\) with \(\beta^{n}\in\left(0,1\right)\), and \(\tilde{\beta}^{n}:=\frac{1-\overline{\alpha}^{n-1}}{1-\overline{\alpha}^{n}}\beta^{n}\).
Figure 2: DiffMotion schematic. The model consists of an Autoregressive Temporal Encoder and a Denoising Diffusion Probabilistic Module.
### Training The training objective is to optimize the parameters \(\theta\) that minimize the NLL via a Mean Squared Error (MSE) loss between the true noise \(\epsilon\sim\mathcal{N}\left(0,I\right)\) and the predicted noise \(\epsilon_{\theta}\): \[\mathbb{E}_{x_{t}^{0},\epsilon,n}[||\epsilon-\epsilon_{\theta}\left(\sqrt{\overline{\alpha}^{n}}x_{t}^{0}+\sqrt{1-\overline{\alpha}^{n}}\epsilon,h_{t-1},n\right)||^{2}], \tag{5}\] Here \(\epsilon_{\theta}\) is a neural network, which takes \(x_{t}^{0}\), \(h_{t-1}\) and \(n\) as input to predict \(\epsilon\), and adopts an architecture similar to the one employed in [21]. The complete training procedure is outlined in Algorithm 1. ``` Input: data \(x_{t}^{0}\sim p\left(x_{t}^{0}|x_{t-\tau:t-1}^{0},c_{t}\right)\) and LSTM state \(h_{t-1}\) repeat Initialize \(n\sim\text{Uniform}(1,...,N)\) and \(\epsilon\sim\mathcal{N}(0,I)\) Take gradient step on \[\nabla_{\theta}||\epsilon-\epsilon_{\theta}\left(\sqrt{\overline{\alpha}_{n}}x_{t}^{0}+\sqrt{1-\overline{\alpha}_{n}}\epsilon,h_{t-1},n\right)||^{2}\] until converged; ``` **Algorithm 1** Training for each frame \(t\in[\tau+1,T]\) ### Inference After training, we use variational inference to generate new gestures matching the original data distribution (\(x_{t}^{0}\sim p_{\theta}\left(x_{t}^{0}|x_{t-\tau:t-1}^{0},c_{t}\right)\)). Firstly, we run the AT-Encoder over the sequence of neutral poses for the initial motion \(x_{1:\tau}\) to obtain the hidden state \(h_{t-1}\) via Eq.2. Then we follow the sampling procedure in Algorithm 2 to obtain a sample \(x_{t}^{0}\) of the current frame. \(\sigma_{\theta}\) is the standard deviation of \(p_{\theta}\left(x^{n-1}|x^{n}\right)\); we choose \(\sigma_{\theta}:=\tilde{\beta}^{n}\).
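Algorithm 2, given next, specifies the per-frame sampling loop. As a companion to Eq. (5) and Algorithm 1, the following is a minimal sketch of one training step. The linear variance schedule uses the endpoints \(\beta^{1}=1\times 10^{-4}\) and \(\beta^{N}=0.1\) reported in Sec. 4.2, while the toy noise-prediction network, the value of \(N\) and all tensor shapes are placeholders rather than the architecture of [21].

```python
import torch
import torch.nn as nn

N = 500                                    # number of diffusion steps (ablated in Sec. 4.5)
beta = torch.linspace(1e-4, 0.1, N)        # beta^1 ... beta^N (Sec. 4.2)
alpha_bar = torch.cumprod(1.0 - beta, 0)   # \bar{alpha}^n = prod_i (1 - beta^i)

class ToyEps(nn.Module):
    """Placeholder for the noise-prediction network eps_theta of [21]."""
    def __init__(self, D=45, H=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D + H + 1, 256), nn.ReLU(),
                                 nn.Linear(256, D))
    def forward(self, x_n, h_prev, n):
        n_feat = n.float().unsqueeze(-1) / N
        return self.net(torch.cat([x_n, h_prev, n_feat], dim=-1))

def training_loss(eps_theta, x0, h_prev):
    """Monte-Carlo estimate of the Eq. (5) objective for one batch of frames."""
    n = torch.randint(0, N, (x0.shape[0],))          # n ~ Uniform{1..N} (0-indexed here)
    eps = torch.randn_like(x0)                       # true noise
    a = alpha_bar[n].unsqueeze(-1)
    x_n = a.sqrt() * x0 + (1.0 - a).sqrt() * eps     # noised pose
    return ((eps - eps_theta(x_n, h_prev, n)) ** 2).mean()

loss = training_loss(ToyEps(), torch.zeros(8, 45), torch.zeros(8, 512))
loss.backward()                                      # an optimizer step would follow
```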
``` Input: noise \(x_{t}^{N}\sim\mathcal{N}(0,I)\) and state \(h_{t-1}\) for\(n=N\)to\(1\)do if\(n>1\)then \(z\sim\mathcal{N}(0,I)\) else \(z=0\) endif \(x_{t}^{n-1}=\frac{1}{\sqrt{\alpha^{n}}}\left(x_{t}^{n}-\frac{\beta^{n}}{\sqrt{1 -\overline{\alpha}^{n}}}\epsilon_{\theta}\left(x_{t}^{n},h_{t-1},n\right) \right)+\sqrt{\sigma_{\theta}}z\) endfor Return:\(x_{t}^{0}\) ``` **Algorithm 2**Sampling \(x_{t}^{0}\) via annealed Langevin dynamics During inferencing, the past poses \([x_{t-\tau-1}^{0},...,x_{t-1}^{0}]\) and acoustic features \([a_{t-\tau},...a_{t+r}]\) are concatenated and sent to the AT-Encoder for extract the context, then as a conditional information to Diffusion Module to generate current gesture(\(x_{t}^{0}\)). DiffMotion outputs one gesture in each frame and is then fed back to the generated gesture sequence. At the same time, the past pose window slides from \([t-\tau,t-1]\) to \([t-\tau+1,t]\), and the acoustic window also moved forward by one frame for the next gesture(\(x_{t+1}^{0}\)) generation, as shown in Fig.2. ## 4 Experiments We compare DiffMotion(**DM**) with previous baselines objectively and subjectively. For fair and comparable, we select baselines followed by: 1) using Trinity Gesture Dataset[30] recommended by GENEA Workshop[13]; 2) Skeleton structure is consistent with DM; 3) Joint angles represented by exponential map[8]; 4) Open source provided. A flow-based method, called StyleGestures(**SG**)[24], meets the requirement above. Meanwhile, the Audio2Getsture method (**AG**)[11] was introduced for only subjective evaluation since its output format is somewhat different from **DM** and **SG**. All experiments include the ground truth(**GT**). We focus on 3D upper body beat gesture, which makes up more than 50% of all co-speech gestures and is rhythmically connected to the accompanying speech[17]. ### Training-data Processing Trinity Gesture Dataset we train on includes 23 takes, totaling 244 minutes of motion capture and audio of a male native English speaker producing spontaneous speech on different topics. The actor's motion was captured with 20 Vicon cameras at 59.94 frames per second(fps), and the skeleton includes 69 joints. The upper-body skeleton was selected from the first spine joint to the hands and head and excluded the finger motion, keeping only 15 upper-body joints. We followed the data process method presented by [24] and obtained 20,665\(\times\)2 samples. Each with 80\(\times\)27 speech features(80 frames(4s) with 27-channel Melfrequency power spectrograms, MFPS) as input and 80\(\times\)45 joint angle features as output. The frame per second is downsampled from 60 fps to 20 fps. Each joint angle was represented by an exponential map to avoid discontinuities. ### Model Settings The LSTM in AT-Encoder consists of 2 layers with hidden state \(h_{t}\in\mathbb{R}^{512}\). The similar network set, proposed by [22], is employed for \(\epsilon_{\theta}\). The quaternary variance schedule starts from \(\beta_{1}=1\times 10^{-4}\) till \(\beta^{N}=0.1\). We set quantile=0.5 at the inference stage for catching the median value. In truth, the gestures must be prepared in advance [3, 12]. We let the control inputs \(c_{t}\) at time instance \(t\) contain not only the current acoustic features \(a_{t}\) but also the window of surrounding \(c_{t}=a_{t-\tau:t+\tau}\), where the lookahead \(r=20\) is set so that a sufficient amount of future information can be taken into account, shown in Fig.2. 
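Putting Algorithm 2 and the sliding windows just described together, the sketch below generates gestures frame by frame. It reuses `N`, `beta`, `alpha_bar`, `enc` and `ToyEps` from the sketches above, sets \(\sigma_{\theta}=\tilde{\beta}^{n}\) as in the text, and treats the past-window length `tau` and all shapes as illustrative assumptions.

```python
import torch

def sample_frame(eps_theta, h_prev, D=45):
    """Algorithm 2: anneal x^N ~ N(0, I) down to a pose x^0 for a single frame."""
    x = torch.randn(h_prev.shape[0], D)
    for n in reversed(range(N)):
        abar = alpha_bar[n]
        abar_prev = alpha_bar[n - 1] if n > 0 else torch.tensor(1.0)
        sigma = (1.0 - abar_prev) / (1.0 - abar) * beta[n]       # sigma_theta = beta~^n
        z = torch.randn_like(x) if n > 0 else torch.zeros_like(x)
        step = torch.full((x.shape[0],), n)
        x = (x - beta[n] / (1.0 - abar).sqrt() * eps_theta(x, h_prev, step)) \
            / (1.0 - beta[n]).sqrt() + sigma.sqrt() * z
    return x

def generate(encoder, eps_theta, audio, tau=10, total_frames=100, D=45):
    """Frame-by-frame generation with sliding pose and audio windows (Fig. 2).

    For simplicity only the past tau audio frames are fed to the encoder here;
    the paper additionally includes a lookahead window of r = 20 future frames
    in c_t = a_{t-tau:t+r}.
    """
    poses = [torch.zeros(1, D) for _ in range(tau)]              # neutral initial motion
    for t in range(tau, total_frames):
        past = torch.cat(poses[t - tau:t], dim=0).unsqueeze(0)   # (1, tau, D)
        h_prev, _ = encoder(past, audio[:, t - tau:t])           # (1, tau, audio_dim)
        poses.append(sample_frame(eps_theta, h_prev, D))
    return torch.cat(poses, dim=0)

motion = generate(enc, ToyEps(), torch.zeros(1, 100, 27))
print(motion.shape)                                              # torch.Size([100, 45])
```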
To avoid animation jitter, we apply Savitzky-Golay Smoothing Filter[20] to filter all joint channels of the sequence along the frame and set the window length to 31 and poly order to 4. The model is built on TorchLightning framework using a batch size of 80, Adam optimizer with a learning rate of \(1.5\times 10^{-3}\). All experiments run on an Intel i9 processor and a single NVIDIA GTX 3090 GPU. ### Objective Evaluation We quantitatively compare DiffMotion **DM** with **SG** and the **GT** in the team of realism, Time consistency, and diversity: 1) **Realism**: We adopt \(L_{1}\) distance of joint position[11] and the Percentage of Correct 3D Keypoints(PCK)[28] to evaluate the realism of the generated motion. 2) **Time consistency**: A Beat Consistency Score(BCS) metric[11] is introduced to measure the motion-speech beat correlation(time consistency). 3) **Diversity**: The diversity metric evaluates the variations among generated gestures and in a sequence. We synthesize multiple motion sequences sampled N times for the same speech input and then split each sequence into equal-length clips without overlap. Finally, we calculate the averaged \(L_{1}\) distance of the whole clips. The results are listed in Table 1. The quantitative results show that our method outperforms **SG** on realism and time consistency. On the Diversity metric, our model achieves higher score(6.25 on average) than **SG**(3.3) and **GT**(2.6). It indicates that both **DM** and **SG** have the capability to generate novel gestures that are not presented in **GT**, and our model obtains a better result than **SG**, due to the Diffusion Model can sample a wider range of samples[22]. We also report parameter counts, training time, and average synthesis time per frame in Table 2. The results show that the parameter count for **DM**(10.4M) is much less than **SG**(109.34M). Our method achieves less training time(8.02 min.) than **SG**(6.2 hour). However, with the inherent characteristics of diffusion models[27], the generation phase in **DM**(1.60\(\pm\)0.06) takes a longer time than SG(0.08\(\pm\)0.06). \begin{table} \begin{tabular}{c|c|c|c} \hline Method & L\_1\(\downarrow\) & PCK\(\uparrow\) & BCS\(\uparrow\) & Diversity\(\uparrow\) \\ \hline DM(Ours) & **11.6(10.26)** & **0.61(0.35)** & 0.79(0.80) & **6.25(6.31)** \\ SG & 16.35(15.22) & 0.41(0.50) & 0.68(0.69) & 3.3(3.6) \\ GT & 0 & 0 & **0.93** & 2.6 \\ \hline \end{tabular} \end{table} Table 1: Quantitative results. \(\uparrow\) means higher is better, \(\downarrow\) means lower is better. We perform 20 tests and report their average and best scores(in parentheses). \begin{table} \begin{tabular}{c|c|c|c} \hline Method & Param.Count\(\downarrow\) & Train.time\(\downarrow\) & Synth.time\(\downarrow\) \\ \hline DM(Ours) & **10.4M** & **8.02min** & 1.60\(\pm\)0.06s \\ SG & 109.34M & 6.2H & **0.08\(\pm\)0.06s** \\ \hline \end{tabular} \end{table} Table 2: Parameter counts, training time, and average synthesis time per frame with 95% confidence intervals. ### Subjective Evaluation The ultimate goal of speech-driven gesture generation is to produce natural, convincing motions. Considering that _objective evaluation of gesture synthesis is generally tricky and does not always translate into superior subjective quality for human observes_[24, 26], we perform subjective human perception. A question set consisting of three evaluation aspects with a 5-point Likert scale to subjectively evaluate baselines(**SG**, **AG**), **DM**, and **GT**. 
The three aspects are _human-likeness_, _diversity_, and _appropriateness_: 1) **Human-likeness**: whether the generated gestures are natural and look like the motion of an actual human, without accounting for the speech; 2) **Diversity**: which motion shows more distinct gesture patterns; 3) **Appropriateness**: the time consistency, that is, whether the generated gestures match the rhythm of the speech. First, we trained each model and generated 20 clips with the same speech audio as input. Each clip lasts for 18 seconds. Next, we randomly selected 3 clips generated by each model for evaluation. Then, we built the videos using GENEA_visualizer[14] for the user study. Thirty volunteer participants were recruited, including 16 males and 14 females, aged 19-23. All of them (20 from China, 10 international students from the USA, UK, etc.) are proficient in English. They were asked to rate each evaluation aspect on a scale from 1 to 5, from worst to best. Firstly, we introduced the method to all participants and showed them some example clips that were not in the evaluation set. After the participants fully understood the process, we started the formal experiment. All participants were instructed to wear headphones and sit in front of a computer screen. The environment was quiet and had no interference. Participants were unaware of which method each video belonged to. The order of videos was random, but each video was guaranteed to appear three times, each presented and scored by the participants.
Figure 3: Mean ratings with 95% confidence intervals. Asterisks indicate significant effects (*: \(p<0.05\), ...).
One-way ANOVA was conducted to determine if the models' scores differed on the three evaluation aspects. The results are shown in Fig.3 and Table 3. The mean rating scores of the proposed **DM** are statistically significantly different from those of the two baseline models, but not significantly different from **GT**. The score of human-likeness for **DM** and **GT** is 3.89\(\pm\)0.95 and 3.89\(\pm\)0.93, and the appropriateness is 3.93\(\pm\)0.93 and 3.93\(\pm\)0.92, respectively. Interestingly, there was no significant difference between **DM** (3.91\(\pm\)0.96) and **AG** (3.89\(\pm\)0.92) on the diversity evaluation, and both obtained higher scores than **GT** (3.60\(\pm\)1.02). These results suggest that **DM** is as capable as **AG** of generating novel gestures that are not present in the ground truth. The results reveal that the proposed method outperforms previous SOTA methods and demonstrate that diffusion-based models benefit speech-driven gesture generation tasks. ### Ablation Study We found that the number of diffusion steps \(N\) is a crucial hyperparameter that can affect the quality and effectiveness of gesture generation. Although a larger \(N\) allows \(x^{N}\) to be approximately Gaussian[25], it increases inference time. To trade off generation efficiency against effectiveness, we fixed the number of epochs for early stopping (50 epochs) and evaluated \(N=1,50,100,200,500,...,1000,...,2500\) while keeping all other hyperparameters unchanged. The results are listed in Table 4. Generation is faster but the motion is jittery when \(N<100\). Continuous motion is achieved when \(N\geq 100\) (with \(N=100\) giving the best result). However, as \(N\) increases, the time consumed increases substantially.
Nevertheless \(N>1000\), the model occasionally produces bizarre poses due to the diffusion step destroying the detail of the raw information[32]. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline Metric & 1 & 100 & 300 & 500 & 600 & 800 & 1000 & 1500 & 2000 & 2500 \\ \hline Synth. time & 0.004 & 0.34 & 0.97 & 1.60 & 1.91 & 2.55 & 3.23 & 4.81 & 6.43 & 8.02 \\ per frame(sec.) & \(\pm\)0.001 & \(\pm\)0.02 & \(\pm\)0.04 & \(\pm\)0.06 & \(\pm\)0.06 & \(\pm\)0.08 & \(\pm\)0.10 & \(\pm\)0.14 & \(\pm\)0.19 & \(\pm\)0.23 \\ Training Time & 8.63 & 8.63 & 8.62 & 8.51 & 8.55 & 8.52 & 8.93 & 8.84 & 8.74 & 8.67 \\ (min.) & \(\pm\)0.06 & \(\pm\)0.07 & \(\pm\)0.10 & \(\pm\)0.08 & \(\pm\)0.09 & \(\pm\)0.08 & \(\pm\)0.08 & \(\pm\)0.08 & \(\pm\)0.07 & \(\pm\)0.08 \\ Train Loss & 0.99 & 0.32 & 0.20 & 0.17 & 0.16 & 0.15 & 0.13 & 0.12 & 0.10 & 0.09 \\ Val Loss & 0.99 & 0.34 & 0.22 & 0.19 & 0.18 & 0.16 & 0.15 & 0.13 & 0.11 & 0.10 \\ \hline \end{tabular} \end{table} Table 4: Evaluation for the number of diffusion step \(N\). ## 5 Conclusion In this paper, we propose a novel framework **DiffMotion** for automatically co-speech gesture synthesis. The framework consists of an Autoregressive Temporal Encoder and a Denoising Diffusion Probability Module. The architecture can learn to obtain the complex one-many mapping between gestures and the accompanying speech and can generate 3D co-speech gesticulation that is natural, diverse, and well-timed. Experiments confirm that our system outperforms previous baselines, quantitatively and qualitatively. Still, there are some limitations. For example, the model tends to generate redundant movements that lack relaxation, and it experiences slow inference due to the inherent characteristics of DDPM. For future work, we may plan the following attempts: 1) considering the breathing space for relaxation; 2) introducing multimodal features such as semantic and affective expressions; 3) investigating a real-time system enabling users to interact with virtual human interfaces. #### Acknowledgements. This work was supported by the Key Program and development projects of Zhejiang Province of China (No.2021C03137), the Public Welfare Technology Application Research Project of Zhejiang Province, China (No.LGF22F020008), and the Key Lab of Film and TV Media Technology of Zhejiang Province (No.2020E10015).
2302.11609
Modelling gas around galaxy pairs and groups using the Q0107 quasar triplet
We examine to what extent disk and outflow models can reproduce observations of H I gas within a few virial radii of galaxies in pairs and groups. Using highly-sensitive HST/COS and FOS spectra of the Q0107 quasar triplet covering Ly$\alpha$ for z$\lesssim$1, as well as a deep galaxy redshift survey including VIMOS, DEIMOS, GMOS and MUSE data, we test simple disk and outflow models against the H I absorption along three lines-of-sight (separated by 200-500 kpc) through nine galaxy groups in this field. These can be compared with our previous results in which these models can often be fit to the absorption around isolated galaxies. Our models can reproduce $\approx$ 75$\%$ of the 28 identified absorption components within 500 km/s of a group galaxy, so most of the H I around groups is consistent with a superposition of the CGM of the individual galaxies. Gas stripped in interactions between galaxies may be a plausible explanation for some of the remaining absorption, but neither the galaxy images nor the galaxy and absorber kinematics provide clear evidence of such stripped material, and these unexplained absorbers do not preferentially occur around close pairs of galaxies. We find H I column densities typically higher than at similar impact parameters around isolated galaxies ($\approx$ 2.5$\sigma$), as well as more frequent detections of O VI than around isolated galaxies (30$\%$ of sightlines to 7$\%$).
Alexander Beckett, Simon L. Morris, Michele Fumagalli, Nicolas Tejos, Buell Jannuzi, Sebastiano Cantalupo
2023-02-22T19:18:41Z
http://arxiv.org/abs/2302.11609v1
# Modelling gas around galaxy pairs and groups using the Q0107 quasar triplet ###### Abstract We examine to what extent disk and outflow models can reproduce observations of H i gas within a few virial radii of galaxies in pairs and groups. Using highly-sensitive HST/COS and FOS spectra of the Q0107 quasar triplet covering Ly\(\alpha\) for z\(\lesssim\)1, as well as a deep galaxy redshift survey including VIMOS, DEIMOS, GMOS and MUSE data, we test simple disk and outflow models against the H i absorption along three lines-of-sight (separated by 200-500 kpc) through nine galaxy groups in this field. These can be compared with our previous results in which these models can often be fit to the absorption around isolated galaxies. Our models can reproduce \(\approx 75\%\) of the 28 identified absorption components within 500 \(\,\mathrm{km\,s^{-1}}\) of a group galaxy, so most of the H i around groups is consistent with a superposition of the CGM of the individual galaxies. Gas stripped in interactions between galaxies may be a plausible explanation for some of the remaining absorption, but neither the galaxy images nor the galaxy and absorber kinematics provide clear evidence of such stripped material, and these unexplained absorbers do not preferentially occur around close pairs of galaxies. We find H i column densities typically higher than at similar impact parameters around isolated galaxies (\(\approx 2.5\sigma\)), as well as more frequent detections of O vi than around isolated galaxies (30% of sightlines to 7%). keywords: intergalactic medium - quasars: absorption lines - galaxies: evolution ## 1 Introduction The exchange of gas between galaxies and their surroundings plays a vital role in their evolution. Cool gas that provides fuel for star formation needs to be accreted from outside the galaxy in order to explain the observed star-formation rates and gas content of galaxies (e.g. Freundlich et al., 2013; Scoville et al., 2017). Cool gas is also ejected from galaxies by stellar-feedback- or AGN-driven winds, and models suggest that these processes are important in regulating its star formation (e.g. Lehnert et al., 2013; Somerville and Dave, 2015; Salcido et al., 2020). This reservoir of gas surrounding galaxies is known as the circumgalactic medium (CGM, e.g. Tumlinson et al., 2017), and is usually defined as extending to the virial radius. This material is difficult to observe in emission due in part to its low density (although emission-line maps of gas on CGM scales have become more common in recent years, e.g. Chen et al., 2019; Fossati et al., 2019; Zabl et al., 2021; Leclercq et al., 2022), so is most commonly detected using absorption features along the lines-of-sight to background sources, often quasars (e.g. Bahcall and Spitzer, 1969; Bergeron, 1986; Bergeron et al., 1994; Weymann et al., 1998; Adelberger et al., 2005; Chen et al., 2010; Prochaska et al., 2011; Rubin et al., 2018; Pointon et al., 2019; Wilde et al., 2021; Lehner et al., 2021). These absorption-based studies often discuss large numbers of galaxy-sightline pairs, whether as targeted surveys (e.g. Tumlinson et al., 2013; Bielby et al., 2019) or by utilizing other surveys and/or archival data (e.g. Wild et al., 2008), but are usually limited to a single line-of-sight through the gas around any galaxy or group. Some studies are able to utilize quasar pairs or triplets (lensed or projected, e.g. Fossati et al., 2019; Maitra et al., 2019), bright background galaxies (e.g. 
Zahedy et al., 2016; Peroux et al., 2018; Chen et al., 2020; Okoshi et al., 2021) or gravitationally-lensed arcs (e.g. Lopez et al., 2018, 2020; Tejos et al., 2021; Mortensen et al., 2021), but these are not common enough to produce statistically meaningful samples of galaxy-absorber pairs. Many observations find evidence for disk-like accreting and rotating structures along the major axis of galaxies (e.g. Charlton Churchill, 1998; Steidel et al., 2002; Bouche et al., 2016; Ho et al., 2017; Zabl et al., 2019; French & Wakker, 2020), and this is generally reproduced by simulations, with simulated CGM structures aligned with the galaxy major axis (e.g Ho & Martin, 2019; Mitchell et al., 2020; DeFelippis et al., 2020; Hafen et al., 2022). Similarly, evidence for outflowing material is often found along the minor axis of galaxies (e.g. Bland & Tully, 1988; Heckman et al., 1990; Finley et al., 2017; Lan & Mo, 2018; Schroetter et al., 2019; Burchett et al., 2021; Zabl et al., 2021), which are also reproduced by simulations (e.g Nelson et al., 2019; Mitchell et al., 2020; Pandya et al., 2021). This leads to some observations finding a bimodality in the azimuthal angles of detected absorption (absorption is found primarily along the major and minor axes, e.g. Kacprzak et al., 2012; Bouche et al., 2012; Kacprzak et al., 2015), as well as differences in flow rates and metallicities along galaxy major and minor axes in simulations (e.g. Peroux & Houk, 2020). However, gas flows around galaxy groups are likely far more complex. For example, in the nearby M81/M82 group, H i observations reveal a conical outflow structure on small scales, but clear distortion on larger scales due to the other group galaxies (e.g. Sorgho et al., 2019). Tidal interactions are expected to produce a significant fraction of Ly\(\alpha\) absorbers in groups (e.g. Morris & van den Bergh, 1994), and likely contribute to other observed ions (e.g. Chen & Mulchaey, 2009). Some studies have identified possible tidal material in absorption, but this is usually difficult to distinguish from other origins (e.g. Chen et al., 2014; Guber et al., 2018). Ram-pressure stripping is also likely to affect the state of the CGM and any intra-group gas during interactions (e.g. Fumagalli et al., 2014; Fossati et al., 2019). This removal of gas from galaxies and their CGM affects not only the gas itself, but also leads to 'quenching' of star formation in group galaxies (e.g. Peng et al., 2010; Wetzel et al., 2013; Jian et al., 2017; Kuschel et al., 2022) There have been numerous case studies focusing on the gas in individual galaxy groups, using a variety of absorption features, but also emission lines in more recent cases. Some of these find material that can be associated with a particular galaxy, including some likely outflows and accretion (e.g. Peroux et al., 2017; Johnson et al., 2018), but also material that cannot be associated with a galaxy, and therefore forms an intra-group medium (e.g Bielby et al., 2017; Epinat et al., 2018), as well as suggestions of tidal material (e.g. Kacprzak et al., 2010; Chen et al., 2019). Where multiple absorption/emission components can be identified, material consistent with both disk/outflow structures and tidal/intra-group material can sometimes be seen in the same galaxy group (e.g. Nateghi et al., 2021; Leclercq et al., 2022). Statistical studies utilizing large samples of galaxy groups are also used to determine the dominant processes in these systems. Bordoloi et al. 
(2011) found that Mg ii absorption near galaxy groups could be due to a superposition of the halos around individual galaxies. In contrast, Nielsen et al. (2018) found that a superposition model could not match the absorber kinematics, and preferred a model in which most cool gas was associated with the group itself rather than any member galaxy. More recently, Dutta et al. (2020) found Mg ii absorption more extended around galaxy groups than isolated galaxies, but also a clear dependence on galaxy properties, suggesting that both the galaxy halos themselves and the interactions between galaxies contribute to the absorption. If material stripped from galaxies contributes substantially to the gas content in groups, then we may expect gas in these groups to have a higher metal content (having passed through the galaxy and been enriched by cycles of star formation, e.g. Oppenheimer et al., 2016). However, this is not found by Pointon et al. (2019). The distribution of gas in groups therefore remains unclear. Emission from the CGM remains difficult to detect due to a low density, which usually limits such studies to scales of \(\lesssim\)100 kpc (e.g. Burchett et al., 2021; Leclercq et al., 2022). There remains inconsistency in defining galaxy groups, which are subject to different algorithms, linking lengths, and detection limits in different studies. This makes direct comparisons between observations, or with simulations, difficult (e.g. Oppenheimer et al., 2021). The power of absorption studies is limited by the presence of sufficiently bright background sources, so in most cases the gas around any galaxy group is probed by only a single line-of-sight and we cannot measure gas properties elsewhere in the group. It is this final difficulty which we seek to address in this paper. We study the Q0107 field, a quasar triplet at z \(\sim\) 1 with separations of \(\sim\) 1 arcminute (\(\approx\) 400 kpc at z = 0.5), with basic quasar properties given by Table 1. The use of multiple lines-of-sight through a densely-surveyed field helps to constrain the gas structures and properties on CGM scales. This field has been utilized in several previous studies, including early studies of the size scales of Ly\(\alpha\) absorbers (e.g. Dinshaw et al., 1997; D'Odorico et al., 1998; Young et al., 2001), analysis of the coincidences between absorption in the different lines-of-sight (e.g. Perry et al., 2006; Crighton et al., 2010), construction of the 2D 2-point correlation functions of absorption in both H i and O vi (Tejos et al., 2014, hereafter T14, Finn et al., 2016), and detailed radiative transfer modelling of a small number of absorbers (e.g. Muzahid, 2014; Anshul et al., 2021). Improvements to the available data, such as higher-resolution spectra of the three quasars, high-resolution imaging from the Hubble Space Telescope, and IFU observations of the field, have enabled our recent works extending these results. In Beckett et al. (2021), hereafter denoted as Paper 1, we examined some statistical properties of the relationship between gas and galaxies in this field. We found a bimodality in the azimuthal angle distribution of detected galaxy-absorber pairs, likely evidence for the existence of disk and outflow structures along the projected major and minor axes of galaxies, extending to \(\approx\) 300 kpc. A higher incidence of O vi absorption along the minor axis, and a preference for the velocity of H i absorption near the major axis to be aligned with the galaxy kinematics, support this hypothesis. 
We investigated this further in Paper 2 (Beckett et al., 2022), in which we attempted to fit simple disk and outflow models to isolated galaxies in our sample, finding that many absorbers (13 of 26 within 500 \(\,\mathrm{km\,s^{-1}}\), or 12 of 20 within 300 \(\,\mathrm{km\,s^{-1}}\), and impact parameters up to 600 kpc) could be fit using such models, whilst remaining consistent with the observations of the other sightlines. This work continues to use the disk and outflow models from Paper 2, but extends coverage to galaxy pairs and groups in our sample. By combining disk/outflow models, we can examine the hypothesis in which most absorption in groups results from the CGM of the individual galaxies. We also search for signs of tidal material based on the kinematics of the gas relative to nearby galaxies. In Section 2 we summarize the galaxy and quasar data used in this study, discussed more extensively in Paper 1 and references therein, as well as selection of the sub-sample of galaxy groups considered in this work. Section 3 summarizes the toy models used in our attempts to reproduce the observed absorption (covered in more detail in Paper 2), whilst Section 4 describes in detail the absorption around each galaxy group, and the process of attempting to fit our models to that absorption. We then discuss the overall results in Section 5, and finally summarize in Section 6. Throughout this work we quote physical sizes and distances unless otherwise stated, and use the Planck 2018 flat \(\Lambda\)CDM cosmology (Planck Collaboration, 2020), with \(\Omega_{m}\) = 0.315 and \(H_{0}\) = 67.4 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\). ## 2 Data This paper uses the same absorber and galaxy catalogues as our previous two papers covering this field, which utilise the galaxy catalogues compiled by T14. We therefore only briefly describe the observations here; these are described in more detail in Paper 1 and T14. ### IGM data Spectra of the three quasars were taken to cover the Ly\(\alpha\) transition of H i along each line-of-sight from \(z=0\) to the redshift of the quasar. These use the spectrographs on the Hubble Space Telescope, with the G130M and G160M gratings on the Cosmic Origins Spectrograph (COS, Green et al., 2012, program GO-11585, PI: Neil Crighton) and the G190H and G270H gratings on the Faint Object Spectrograph (FOS, observations detailed in Young et al., 2001), covering a wavelength range of 1135-3277 A. This includes a range of metal ions in addition to Ly\(\alpha\). QSO-C was not observed using the G130M and G270H gratings, with these made unnecessary by a sub-damped Lyman-\(\alpha\) system at z \(\approx\) 0.56 obscuring any flux below 1420 A, and a lower QSO redshift respectively. Details of the observations and properties of the observed spectra are listed in Table 2 of Paper 1. Our absorption line catalogue was produced by T14, and also used by Finn et al. (2016) and our earlier papers. It contains 430 absorption systems, of which 272 are H i. The process of producing this catalogue is detailed in T14. The signal-to-noise ratio of the COS spectra (using results from Keeney et al., 2012), as well as the H i column density distribution of our sample, imply a 3\(\sigma\) detection limit of \(\sim 10^{13}\mathrm{cm^{-2}}\). The FOS detection limit is slightly higher, at \(\sim 10^{13.5}\mathrm{cm^{-2}}\). 
We note that Ly\(\alpha\) absorbers in the IGM are usually found with Doppler widths \(>20\)\(\,\mathrm{km\,s^{-1}}\)(e.g Dave et al., 2010), so most are resolved in the COS spectra (Ly\(\alpha\) below z \(\approx\) 0.45). Absorption in the FOS spectra is often unresolved and more likely to be blended with other features, but for Ly\(\alpha\) these issues can often be mitigated using Ly/\(\beta\) or higher-order Lyman transitions that remain in the COS wavelength range. Where these mitigations are not possible, absorbers may have a large Doppler parameter that appears as a broad-Ly\(\alpha\) absorber (BLA). Whether such a BLA appearing around our galaxies may indicate high temperatures or is unresolved narrower absorption is discussed in each case (Section 4 and Appendix A). ### Galaxies Our sample of galaxy groups is drawn from the spectroscopic galaxy catalogue described in Paper 1, which builds on that used in T14. This forms a highly heterogeneous galaxy survey, with VIMOS, DEIMOS, GMOS and CFHT-MOS observations of differing area, depth and completeness (referred to as MOS observations throughout), supplemented by MUSE fields centered on the A and B lines-of-sight. Additionally, high-resolution imaging from the Hubble Space Telescope is used to provide position angle and inclination estimates for some galaxies. These observations, the methods used to combine them and ensure consistency, and the resulting survey properties, are discussed in detail in Paper 1 (their Section 2). In this work we only provide a brief summary of this information. #### 2.2.1 Mos The multi-object spectrograph on the Canada-France-Hawaii Telescope (CFHT-MOS) (Le Fevre et al., 1994) was used to obtain spectra for 29 galaxies in the Q0107 field, described in Morris & Jannuzi (2006), whilst VIMOS (LeFevre et al., 2003) took 935 spectra (ESO programs 086.A-0970, PI:Crighton; and 087.A-0857, PI: Tejos). 642 galaxies in this field were observed using DEIMOS (Faber et al., 2003, program A290D, PIs: Bechtold and Jannuzi), with 210 spectra added by GMOS (Davies et al., 1997, program GS-2008B-Q-50, PI: Crighton). Galaxies appearing in more than one of these surveys were used to ensure consistent wavelength calibration across all MOS surveys (and therefore consistent redshifts in the resulting catalogue). As DEIMOS has the best resolution, the redshifts of VIMOS and GMOS objects were adjusted to match the DEIMOS frame, requiring a systematic shift of \(\Delta z\sim\) 0.0008 for the VIMOS data, and 0.0004 for the GMOS data see Section 3 of T14). #### 2.2.2 Muse GTO observations (ESO program ID 094.A-0131, PI Schaye) cover \(1^{\prime}\times 1^{\prime}\) fields of view around QSOs A and B. These provide galaxy kinematics for galaxies near to the lines-of-sight, and are also slightly deeper than the MOS surveys. The MUSE data cover the wavelength range from 4750 to 9350 A with FWHM of \(\approx\)2.7 A, and offer seeing of 0.96'' for QSO-A, and 0.82'' for QSO-B. We reduced the data using a pipeline similar to many recent papers (e.g Fumagalli et al., 2016, 2017; Fossati et al., 2019; Lofthouse et al., 2020; Bielby et al., 2020), primarily using ESO routines, but also utilizing the CUBEX package (Cantalupo et al., 2019) to apply a renormalization between the different IFUs, stacks and slices, and an improved, flux-conserving sky subtraction. Objects were detected using the SExtractor software (Bertin & Arnouts, 1996), applied to the white-light image. 
Summing the total flux within the SExtractor aperture produced the 1D spectra, for which redshifts were estimated using MARZ (Hinton et al., 2016). #### 2.2.3 Combined Galaxy Catalogue Combining the spectra from multiple instruments into a single galaxy catalogue for this field requires matching the astrometry of galaxies observed by multiple instruments, so that duplicates can be removed. We then checked other galaxy properties to ensure these were consistent across the different observations. We confirmed accurate astrometry for each instrument by comparing the brightest objects in the MOS and MUSE catalogues with the SDSS source catalogue (Albareti et al., 2017).
\begin{table} \begin{tabular}{c c c c c} \hline \hline Object & RA (J2000) & Dec (J2000) & Redshift & R-mag \\ \hline Q0107-025 A & 01:10:13.14 & -2:19:52.9 & 0.960 & 18.1 \\ Q0107-025 B & 01:10:16.25 & -2:18:51.0 & 0.956 & 17.4 \\ Q0107-0232 (C) & 01:10:14.43 & -2:16:57.6 & 0.726 & 18.4 \\ \hline \end{tabular} \end{table} Table 1: Co-ordinates, redshifts and R-band magnitudes of the three quasars, taken from Crighton et al. (2010).
Coordinates from the MOS catalogues matched the SDSS results within 0.5", whilst the MUSE results required a shift of \(\approx\)1". After this shift, we matched objects appearing within 1" in both catalogues, finding no additional matches within 2". Magnitudes obtained by integrating the MUSE spectra were compared with the results from the MOS surveys, and found to be consistent within the estimated uncertainties. We also compared redshifts to ensure that all of our galaxy samples are in the same frame as the absorption features. Across a large sample of galaxy-absorber pairs, any velocity offsets should average to zero, with no systematic offset. We found that the corrections applied when producing the T14 catalogue (described in Section 2.2.1) had already removed any systematic shift, with precision better than 10 km s\({}^{-1}\), whilst the MUSE data required a shift of \(\approx\) 30 km s\({}^{-1}\) to match the MOS redshifts. We utilize the duplicate observations, where a single galaxy was observed by the same instrument on multiple occasions, to estimate the uncertainties in our redshift measurements. We calculated the velocity differences between each pair of duplicate spectra, and used the standard deviation of the velocity differences to estimate the uncertainty in the redshifts estimated from each instrument. These are given in Table 5 of Paper 1, and vary between 30 km s\({}^{-1}\) (for well-determined redshifts in DEIMOS) and 190 km s\({}^{-1}\) (for single-feature detections in VIMOS). Our MUSE observations contain no duplicates. As MUSE has a higher resolution than GMOS, but lower than DEIMOS, we take the GMOS values as conservative estimates of the velocity uncertainties for MUSE galaxies with the same confidence flags. #### 2.2.4 HST imaging High-resolution imaging obtained using the ACS instrument on the Hubble Space Telescope (Ryon, 2019) is available for this field in the F814W band (Program GO-14660, PI Straka). This imaging enables a much clearer view of any distortions of the galaxies due to interactions, as well as much improved measurements of position angles and inclinations than are possible with the lower-resolution ground-based imaging available previously. This image does not cover the full field of our MOS surveys, so these improved position angle and inclination estimates are only available for a subset of galaxies in our catalogue, as shown in Figure 1.
We modelled galaxies in both our redshift catalogue and the HST image using GALFIT (Peng et al., 2002). This uses chi-squared minimization to produce a best-fitting model of a galaxy's morphology. We used the SExtractor results as initial guesses in attempting to fit a Sersic disk to each galaxy. Where necessary, additional model components were introduced until a reasonable fit was found. This more advanced modelling, taking account of the point-spread function of the image, reduced the average uncertainty on both inclination and position angle measurements by a factor of \(\approx\) 3 relative to the SExtractor results. We excluded galaxies for which the fit clearly failed to converge to a reasonable result, but included galaxies that have large uncertainties on position angle due to being near to face-on. This produced a list of 109 galaxies for which position angle, inclination and redshift measurements were found. ### Sample of galaxy groups Paper 2 considered only isolated galaxies, defined as those with no detected companion within 500 kpc and 500 km s\({}^{-1}\), making it unlikely that the region within a galaxy's virial radius overlaps with that of another detected galaxy. We also additionally imposed the constraint that no other galaxy must lie within 1 Mpc of any of the lines-of-sight within the 500 km s\({}^{-1}\) window. In this work we consider galaxies that are not isolated, and therefore for the purposes of this work a group is any set of galaxies with pairwise separations smaller than 500 kpc and 500 km s\({}^{-1}\), or any set of multiple galaxies within 1 Mpc of at least one of the three lines-of-sight in a 500 km s\({}^{-1}\) window. This definition of 'group' requires only two galaxies, and also has a larger 'linking length' (maximum distance between galaxies for them to be considered in the same group) than many similar studies that define group galaxies (e.g. Bordoloi et al., 2011; Nielsen et al., 2018; Fossati et al., 2019). However, we do use a smaller window than some other studies considering isolated galaxies (e.g. COS-halos, Tumlinson et al., 2013). Therefore all galaxies in our sample are considered as 'group' or 'isolated', but we include groups with larger galaxy separations than other works. These 'group' galaxies make up the majority of our full sample, so it is impractical to model the absorption around all of these (more than 50 groups). As our models rely on position angle and inclination measurements obtained from the HST imaging, we selected our sample from groups that have at least two galaxies lying in the HST field (of which there are 18). Because of the time-consuming nature of modelling group absorption, we manually select a sample of nine groups that meet this restriction, and intentionally span a large range in group properties including redshift as well as number and mass of group galaxies. With the difficulty in determining our uncertainties described in Section 3, the expected gains from modelling the remaining groups would not substantially improve the strength of our conclusions. Our range of redshifts and stellar masses is intended to be comparable to that seen in our isolated sample from Paper 2, and therefore includes some groups beyond z \(\approx\) 0.73, where only the A and B sightlines are available. The selection was made 'blindly' with respect to the nearby absorption, in order to avoid biasing our results. 
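To make this grouping criterion concrete, the sketch below shows one way the linking could be implemented. It is an illustration rather than the code actually used: the input arrays, the flat-sky small-angle approximation for projected separations, and the union-find bookkeeping are assumptions of this sketch, and the additional criterion of multiple galaxies within 1 Mpc of a sightline would be applied as a separate pass.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def link_groups(ra_deg, dec_deg, z, kpc_per_arcsec,
                d_max_kpc=500.0, dv_max_kms=500.0):
    """Friends-of-friends linking: any two galaxies with projected separation
    below d_max_kpc and velocity difference below dv_max_kms share a group.
    Returns an integer group label for each galaxy."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    z = np.asarray(z, float)
    n = len(z)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dv = C_KMS * abs(z[i] - z[j]) / (1 + min(z[i], z[j]))
            if dv > dv_max_kms:
                continue
            # small-angle (flat-sky) projected separation, converted to kpc
            dra = (ra[j] - ra[i]) * np.cos(0.5 * (dec[i] + dec[j]))
            sep_arcsec = np.degrees(np.hypot(dra, dec[j] - dec[i])) * 3600.0
            sep_kpc = sep_arcsec * 0.5 * (kpc_per_arcsec[i] + kpc_per_arcsec[j])
            if sep_kpc <= d_max_kpc:
                parent[find(i)] = find(j)

    return np.array([find(i) for i in range(n)])
```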
We similarly do not preferentially select groups containing galaxies in the MUSE fields, as these are biased towards low-mass, star-forming galaxies and only a small subset of these have useful kinematics. We also check for any additional bias introduced through this selection, by comparing our nine selected groups with the nine groups not selected for this work. If the largest group in the field (G-202) is removed, the selected and non-selected samples have similar average properties in terms of galaxy number (both with a median of 4 galaxies within 1 Mpc and 500 km s\({}^{-1}\)), total galaxy mass (10\({}^{11.1}\) and 10\({}^{11.2}\)\(M_{\odot}\) respectively), and closest impact parameter (120 and 135 kpc). However, our selected sample is slightly biased towards lower redshifts (0.52 vs 0.65). Any alternative sample of groups selected to cover the redshift range seen in Paper 2 would therefore tend to have similar galaxy properties. We note that the definition of 'group' does vary substantially across our large redshift range, such that a group at low redshift would be seen as an isolated galaxy if it were at a higher redshift, with any satellite galaxies going undetected. However, this does not strongly affect our sample, as most of our groups have at least two members that could be detected up to z \(\approx\) 1. G-517 is the only group likely to be classified as isolated if it lay at a different redshift, whilst G-383 (consisting of two faint galaxies) could appear isolated in a small redshift range but would more likely go undetected entirely. The properties of the resulting group sample are listed in Table 2, with individual galaxy properties shown in Table 3, and their locations on the sky illustrated in Figure 1. Only galaxies within the HST field are shown (although we discuss the presence of more distant galaxies when modelling the absorption within these groups). We note that star-formation rates of galaxies in groups and clusters tend to be lower than those of field galaxies of similar stellar masses (e.g. Larson et al., 1980; Wetzel et al., 2013). This is found for our full sample and group definitions used in Paper 1, with a K-S test on specific star-formation rates yielding a difference of \(\approx 2.5\sigma\). The sample of group galaxies used in this work and the sample of isolated galaxies from Paper 2 are not large enough for a statistical comparison to produce meaningful results, but we do find our group sample to have slightly lower average sSFRs and a lower proportion of SF galaxies (using our template classification detailed in Paper 1). The stellar masses and SFRs of galaxies in our sample are shown in Figure 2, which can be directly compared to the similar figure in Paper 2. These two subsamples span a similar range in mass and SFR. Both our Paper 2 and Paper 3 subsamples are biased towards low-mass and star-forming galaxies, as we focus on galaxies near to the sightlines, where the increased depth and easier detection of emission-line galaxies using MUSE has a larger impact on the sample.

## 3 Models

We use the same three basic models as in Paper 2, namely a power-law halo, bi-conical outflow, and rotating disk. In that work we include a full description of the model parameters, the process of using the galaxy observations and model parameters to generate synthetic spectra, and the process of optimizing for the best-fit models (Section 3 and Appendix A therein).
To briefly summarize, the strength of absorption caused by a spherical halo with power-law density profile is determined by its index \(\alpha\) and the distance between the galaxy and sightline, whilst its velocity is constant and may be offset from the galaxy by \(v\,\delta\). An outflow has a constant radial velocity \(v_{out}\) and has non-zero density only within a polar angle \(\theta_{out}\). We assume a constant outflowing flux, leading to a density profile of \(r^{-2}\) within the cone of the outflow. A hollow cone, such that density is zero close to the galaxy minor axis (within \(\theta_{in}\)), is allowed for, but in most cases is not required. Disks have an exponential profile with scale heights \(h_{r}\) and \(h_{z}\), alongside radial and azimuthal velocity components \(v_{r}\) and \(v_{\phi}\) respectively. All three model types also have an allowed thermal or turbulent velocity \(v_{t}\) that broadens the absorption profile (we do not attempt to distinguish between thermal and turbulent velocities). In addition to these free parameters, the measurements of galaxy position angle and inclination (from the GALFIT models applied to the HST image) are required as inputs to these models, alongside the chosen galaxy orientation (\(S_{Nr}\) and \(S_{Wr}\), described in Paper 2). Both the direction of galaxy rotation and the direction along the minor axis that points away from the observer may be unconstrained, although the MUSE kinematics and the direction of any spiral arms can be used, if visible, to constrain these before modelling the absorption. We note that none of the models in this paper are constrained by the direction of galaxy spiral arms, and that only in the case of G-536 do we rule out any models based only on the galaxy kinematics. Model spectra are generated by combining 10pc segments along each line-of-sight. Each segment has a density determined by the distance and orientation between the sightline segment and the galaxy as well as the density profile of the model, and a line-of-sight velocity determined by the model outflow/infall/rotation velocities projected into the direction of the line-of-sight. Combining these yields a column density, and therefore optical thickness, as a function of velocity or wavelength. The model absorption profile is calculated by combining the optical thickness of all model components included (i.e. the total result from all disk and outflow models around galaxies in the group), converting this to a transmission spectrum, and then convolving with the instrumental line-spread function. As we discuss in Paper 2, creating an automated routine to find the best-fit parameters of these models is complex and time-consuming, so is not considered for a small sample within a single field. Instead we consider each possible model (halo, disk or outflow for each galaxy with inclination and position angle measurements), and determine which absorption components could be fit by such a model without producing excess absorption in the synthetic spectrum over that seen in the observations, for any of the three lines-of-sight. This primarily concerns eliminating model combinations that do not produce velocity offsets in the correct direction, or match the relative column densities that would be produced in the different sightlines. 
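To illustrate how such model spectra can be assembled, the sketch below evaluates toy density profiles of the form described above, integrates them along a sightline in 10 pc steps, converts the resulting column density into a Ly\(\alpha\) optical-depth profile, and convolves with a Gaussian line-spread function. The function names, normalisations, the Doppler-core-only Ly\(\alpha\) cross-section, and the Gaussian approximation to the LSF are simplifying assumptions of this illustration, not the exact implementation used in Paper 2.

```python
import numpy as np

KPC_CM = 3.0857e21        # centimetres per kpc
SIGMA_LYA = 7.58e-13      # approximate Lya line-centre cross-section [cm^2] for b = 1 km/s
                          # (Doppler core only; damping wings neglected)

def halo_density(r_kpc, n0=1e-4, alpha=2.0, r0=10.0):
    """Spherical power-law halo: n = n0 (r/r0)^-alpha  [cm^-3]."""
    return n0 * (np.maximum(r_kpc, 1e-3) / r0) ** (-alpha)

def outflow_density(r_kpc, polar_angle_rad, n0=1e-4, theta_out=np.radians(30.0), r0=10.0):
    """Biconical outflow: r^-2 profile inside the cone, zero outside."""
    inside = (polar_angle_rad < theta_out) | (polar_angle_rad > np.pi - theta_out)
    return np.where(inside, n0 * (r0 / np.maximum(r_kpc, 1e-3)) ** 2, 0.0)

def disk_density(R_kpc, z_kpc, n0=1e-3, h_r=50.0, h_z=5.0):
    """Exponential disk with radial and vertical scale heights h_r, h_z."""
    return n0 * np.exp(-R_kpc / h_r - np.abs(z_kpc) / h_z)

def synthetic_spectrum(ds_kpc, n_cm3, v_los_kms, v_grid_kms,
                       b_kms=30.0, lsf_fwhm_kms=20.0):
    """Combine sightline segments (e.g. 10 pc = 0.01 kpc steps) into a
    transmission spectrum: column density -> optical depth -> exp(-tau),
    then convolve with a Gaussian line-spread function."""
    v_grid = np.asarray(v_grid_kms, float)
    dN = np.asarray(n_cm3, float) * np.asarray(ds_kpc, float) * KPC_CM
    tau = np.zeros_like(v_grid)
    for dN_i, v_i in zip(dN, np.asarray(v_los_kms, float)):
        tau += dN_i * (SIGMA_LYA / b_kms) * np.exp(-((v_grid - v_i) / b_kms) ** 2)
    flux = np.exp(-tau)
    dv = v_grid[1] - v_grid[0]
    sig = lsf_fwhm_kms / 2.355 / dv                    # LSF width in pixels
    half = max(1, int(np.ceil(4 * sig)))
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) / sig) ** 2)
    return np.convolve(flux, kernel / kernel.sum(), mode="same")
```

The total optical depth from several structures (e.g. a disk around one galaxy plus an outflow around another) would simply be the sum of their individual optical-depth profiles before taking the exponential, mirroring the combination step described above.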
We then iteratively adjust the model parameters to produce a reasonable fit, and finally find the combination of models that reproduces the maximum number of observed absorption components. This method identifies the various models capable of reproducing each observed absorption feature, although it does not provide a quantitative measure of the best fit (which would likely depend substantially on the priors chosen on the parameter space). Throughout this work we therefore list the different model combinations found to produce a reasonable approximation of the observed spectra.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline Group Ref & z & N gal & total M\({}_{\bullet}\) & b\({}_{min}\) & Min Pair & Lum Ratio & Det Lim (MOS) & Det Lim (MUSE) & Det Lim (Line-only) \\ & & & \(\log_{10}\)(M\({}_{\odot}\)) & (kpc) & (kpc) & & (\(L_{\star}\)) & (\(L_{\star}\)) & (M\({}_{\odot}\)/yr) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\ \hline G-202 & 0.202 & 25 & 11.9 \(\pm\) 0.2 & 103 & 17 & 2.6 & 0.004 & 0.003 & 0.004 \\ G-238 & 0.238 & 4 & 11.5 \(\pm\) 0.2 & 236 & 42 & 1.5 & 0.006 & 0.004 & 0.005 \\ G-383 & 0.383 & 2 & 9.7 \(\pm\) 0.2 & 38 & 5 & 1.5 & 0.018 & 0.011 & 0.016 \\ G-399 & 0.399 & 5 & 11.1 \(\pm\) 0.2 & 122 & 71 & 3.3 & 0.020 & 0.013 & 0.018 \\ G-517 & 0.517 & 2 & 10.3 \(\pm\) 0.2 & 95 & 320 & 29.5 & 0.04 & 0.02 & 0.03 \\ G-536 & 0.536 & 6 & 11.5 \(\pm\) 0.4 & 120 & 4 & 1.4 & 0.04 & 0.03 & 0.04 \\ G-558 & 0.558 & 7 & 11.1 \(\pm\) 0.5 & 148 & 53 & 1.2 & 0.04 & 0.03 & 0.04 \\ G-876 & 0.876 & 4 & 11.1 \(\pm\) 0.4 & 75 & 26 & 2.1 & 0.13 & 0.08 & 0.12 \\ G-907 & 0.907 & 2 & 10.7 \(\pm\) 0.5 & 170 & 594 & 1.7 & 0.15 & 0.09 & 0.13 \\ \hline \end{tabular} \end{table} Table 2: Summary of group properties for our selected sample of galaxy groups. Columns are as follows: (1) Group reference used throughout the paper; (2) group central redshift; (3) number of detected galaxies within 500 \(\,\mathrm{km\,s^{-1}}\) of the central redshift and 1 Mpc of at least one QSO line-of-sight; (4) total stellar mass of detected galaxies; (5) closest impact parameter of any galaxy to any line-of-sight; (6) minimum separation between two detected galaxies; (7) r-band luminosity ratio between brightest two galaxies; (8)-(10) estimated galaxy detection limits in the MOS and MUSE surveys, showing continuum estimates in terms of \(L_{\star}\) and SFR limits on emission-line-only detections.

## 4 Absorption in Galaxy Groups

We now apply these models to the absorption in galaxy groups, and attempt to reproduce the H i absorption components visible in the QSO spectra at the group redshift. Although Ly\(\alpha\) is usually preferred, in some cases it is saturated, blended, or lies in the lower-resolution FOS spectra, so Ly\(\beta\) provides better constraints on the models. Below we discuss our preferred combination of models for three of these groups, including the model parameters and the reasons for rejecting alternative combinations. The remaining six groups are discussed in Appendix A.

### G-238

Group G-238 consists of two star-forming galaxies (29214 and 26677) appearing in the HST field at z \(\approx\) 0.24. These are the two nearest galaxies to the lines-of-sight at this redshift, although there are several others at larger impact parameters. The observations are detailed in Table 4 and Figure 3.
QSO-A features two absorption components at this redshift, \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline Group & Galaxy & RA & Dec & r-band & Luminosity & M\({}_{\bullet}\) & M\({}_{h}\) & SF Flag & SFR & Line & Kinematics \\ & \({}^{\circ}\) & \({}^{\circ}\) & & (\(L_{\star}\)) & log\({}_{10}\)(M\({}_{\odot}\)) & log\({}_{10}\)(M\({}_{\odot}\)) & & (M\({}_{\odot}\)/yr) & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) \\ \hline G-202 & B-22 & 17.5600 & -2.3173 & 22.86 \(\pm\) 0.03 & 0.02 \(\pm\) 0.01 & 8.4 \(\pm\) 0.2 & 10.9 \(\pm\) 0.3 & non-SF & 0.06 \(\pm\) 0.04 & H\(\alpha\) & Poor \\ & 25962 & 17.5589 & -2.3548 & 21.46 \(\pm\) 0.01 & 0.07 \(\pm\) 0.01 & 8.7 \(\pm\) 0.1 & 11.0 \(\pm\) 0.3 & SF & 0.5 \(\pm\) 0.3 & H\(\alpha\) & No \\ & 31704 & 17.5453 & -2.2958 & 22.58 \(\pm\) 0.02 & 0.03 \(\pm\) 0.01 & 8.2 \(\pm\) 0.1 & 10.8 \(\pm\) 0.3 & SF & 0.08 \(\pm\) 0.05 & H\(\alpha\) & No \\ & 31787 & 17.5556 & -2.3002 & 21.35 \(\pm\) 0.01 & 0.08 \(\pm\) 0.01 & 9.7 \(\pm\) 0.1 & 11.5 \(\pm\) 0.2 & SF & 0.5 \(\pm\) 0.2 & H\(\alpha\) & No \\ & 32497 & 17.5567 & -2.3000 & 18.46 \(\pm\) 0.01 & 1.10 \(\pm\) 0.01 & 11.0 \(\pm\) 0.1 & 13.1 \(\pm\) 0.5 & non-SF & 2.3 \(\pm\) 1.9 & H\(\alpha\) & No \\ & 32778 & 17.5458 & -2.2893 & 19.59 \(\pm\) 0.01 & 0.39 \(\pm\) 0.01 & 10.2 \(\pm\) 0.1 & 11.8 \(\pm\) 0.3 & SF & 1.5 \(\pm\) 0.5 & H\(\alpha\) & No \\ G-238 & 26677 & 17.5604 & -2.3491 & 20.59 \(\pm\) 0.01 & 0.22 \(\pm\) 0.01 & 9.7 \(\pm\) 0.2 & 11.4 \(\pm\) 0.3 & SF & 1.3 \(\pm\) 0.4 & H\(\alpha\) & No \\ & 29214 & 17.5395 & -2.3244 & 21.38 \(\pm\) 0.01 & 0.11 \(\pm\) 0.01 & 8.9 \(\pm\) 0.1 & 11.1 \(\pm\) 0.3 & SF & 0.4 \(\pm\)0.1 & H\(\alpha\) & No \\ G-383 & A-48 & 17.5535 & -2.3299 & 23.96 \(\pm\) 0.09 & 0.03 \(\pm\) 0.01 & 9.6 \(\pm\) 0.2 & 11.5 \(\pm\) 0.3 & non-SF & \(<\)0.04 & H\(\alpha\) & No \\ & A-49 & 17.5534 & -2.3296 & 25.90 \(\pm\) 0.51 & 0.02 \(\pm\) 0.01 & 8.7 \(\pm\) 0.6 & 11.1 \(\pm\) 0.4 & non-SF & \(<\)0.02 & H\(\alpha\) & No \\ G-399 & A-56 & 17.5605 & -2.3287 & 25.17 \(\pm\) 0.27 & 0.01 \(\pm\) 0.01 & 8.5 \(\pm\) 0.5 & 11.0 \(\pm\) 0.3 & SF & 0.05 \(\pm\) 0.01 & H\(\alpha\) & No \\ & A-69 & 17.5620 & -2.3255 & 22.06 \(\pm\) 0.01 & 0.19 \(\pm\) 0.01 & 9.8 \(\pm\) 0.2 & 11.6 \(\pm\) 0.3 & SF & 0.4 \(\pm\) 0.2 & H\(\alpha\) & Poor \\ & B-7 & 17.5672 & -2.3244 & 21.37 \(\pm\) 0.01 & 0.36 \(\pm\) 0.01 & 10.6 \(\pm\) 0.2 & 12.3 \(\pm\) 0.5 & non-SF & \(<\)0.1 & H\(\alpha\) & No \\ G-517 & B-34 & 17.5780 & -2.3143 & 25.39 \(\pm\) 0.26 & 0.02 \(\pm\) 0.01 & 7.6 \(\pm\) 0.6 & 10.7 \(\pm\) 0.4 & SF & \(<\)0.04 & [O ii] & No \\ & B-43 & 17.5643 & -2.3119 & 21.50 \(\pm\) 0.01 & 0.59 \(\pm\) 0.01 & 10.3 \(\pm\) 0.2 & 12.0 \(\pm\) 0.4 & SF & 3.8 \(\pm\) 1.5 & [O ii] & Yes \\ G-536 & A-36 & 17.5598 & -2.3322 & 21.30 \(\pm\) 0.01 & 0.77 \(\pm\) 0.01 & 10.9 \(\pm\) 0.2 & 12.9 \(\pm\) 0.7 & SF & 6 \(\pm\) 3 & [O ii] & Yes \\ & A-37 & 17.5599 & -2.3320 & 22.30 \(\pm\) 0.06 & 0.31 \(\pm\) 0.02 & 10.6 \(\pm\) 0.2 & 12.4 \(\pm\) 0.6 & SF & 1.3 \(\pm\) 0.4 & [O ii] & Poor \\ & A-40 & 17.5649 & -2.3308 & 25.61 \(\pm\) 0.27 & 0.02 \(\pm\) 0.01 & 9.4 \(\pm\) 0.4 & 11.4 \(\pm\) 0.3 & SF & 0.03 \(\pm\) 0.02 & [O ii] & No \\ G-558 & A-72 & 17.5538 & -2.3253 & 23.66 \(\pm\) 0.13 & 0.10 \(\pm\) 0.01 & 9.7 \(\pm\) 0.5 & 11.6 \(\pm\) 0.4 & SF & 0.7 \(\pm\) 0.2 & [O ii] & Yes \\ & A-75 & 17.5555 & -2.3238 & 25.02 \(\pm\) 0.29 & 0.03 \(\pm\) 0.01 & 8.3 \(\pm\) 0.4 & 10.9 \(\pm\) 0.4 & SF & 0.5 \(\pm\) 0.1 & [O ii] & Yes \\ G-876 & A-32 & 17.5580 & -2.3325 & 23.61 \(\pm\) 
0.05 & 0.30 \(\pm\) 0.01 & 10.6 \(\pm\) 0.2 & 12.4 \(\pm\) 0.6 & SF & 7 \(\pm\) 2 & [O ii] & Yes \\ & A-38 & 17.5573 & -2.3318 & 23.88 \(\pm\) 0.13 & 0.24 \(\pm\) 0.03 & 10.8 \(\pm\) 0.3 & 12.6 \(\pm\) 0.8 & SF & 4.7 \(\pm\) 1.1 & [ with the redder absorber featuring O vi detected at a significant level. No significant absorption is detected in B or C, but some weak absorption in C could be hidden by molecular lines from the sub-DLA. Both of the absorbers are at very similar velocities to the two galaxies, but a simple halo model requires a steep density profile (much steeper than \(r^{-2}\)), otherwise it would produce absorption in B that is inconsistent with observations. QSO-A also lies close to the minor axis of 26677 and the major axis of 29214. A model consisting of an outflow and disk around the two galaxies respectively can approximately produce the results seen in the observations. These galaxies are reasonably well-separated, lying outside of each others' virial radii, so it is not surprising that we see no clear sign of interaction in the absorption. The absorption at impact parameters \(\approx\) 250 kpc from both galaxies lies outside the virial radius, but within the impact parameter range that exhibits a bimodal position angle distribution. We suggest in Paper 1 that this is likely due to disk and outflow structures similar to the models used here. This combination of models is also supported by the detection of O vi at a redshift matching the redder Ly\(\alpha\) component that we identify as a likely outflow. The other absorber has O vi to H i ratio less than 1/5 of this, better fitting accretion from the IGM. We also note that neither a disk around galaxy 26677 nor an outflow around 29214 would intersect any of the lines of sight at small distances, so both of these structures may also exist around these galaxies. ### g-399 A-56, A-69 and B-7 are the three galaxies within the HST image at z \(\approx\) 0.399. The state of the gas causing absorption in the sightlines at this redshift is modelled in Anshul et al. (2021); both sightlines feature transitions from multiple metal ions. The details are given in Table 5 and illustrated in Figure 4. Note that the absorption features in QSO-C are identified with transitions from different redshifts. B-7 is the largest galaxy in this group, and the closest to QSO-B, but is non-star-forming. The other two galaxies within the HST field are star-forming galaxies but also less massive. A-69 lies on the edge of the MUSE field, with the edge of the field running approximately along the major axis. There may be a velocity gradient across the galaxy, but this is not clear. A-56 lacks a well-determined orientation, as it is indistinguishable from a point source in the HST image. An outflow around A-69 is capable of producing two absorption components in A, but not the factor of \(\approx\)10 difference in column density between the two components and the much stronger absorption in B. However, combining this outflow with a disk can reproduce the observed absorption in A whilst remaining consistent with B and C. An outflow with 30\({}^{\circ}\) half-opening angle and 160 km s\({}^{-1}\) velocity, alongside an extended H i disk with 160 km s\({}^{-1}\) rotation returns an approximate match. An outflow around B-7 is ruled out, as the large velocity offset required to match the absorption in A alongside the opening angle required to cover LOS-A would produce a substantially broader absorption feature. 
No disk or outflow around any of the galaxies could produce the absorption in B without substantially exceeding the observed levels of absorption in A or C. The high column density in B is metal-enriched, exhibiting absorption from a range of metal ions, and has a line-of-sight velocity between the two larger galaxies in this group. This may suggest that this material has been stripped from one of the galaxies, or that an outflow from one of the galaxies has been distorted by interaction with the CGM of the other such that it can no longer be fit by our toy models. Anshul et al. (2021) find that the absorption in QSO-A is consistent with solar metallicity in both components, and use a two Figure 1: The layout of the surveys used in this study. The background image was taken with the Kitt Peak 4-metre Telescope. The solid green square shows the region covered by HST imaging, whilst the smaller magenta squares show the MUSE fields centered on QSOs A and B. The quasars are shown by red circles, with A the southernmost and C the northernmost. The galaxies within the HST field that are analysed in this work are spread across the two panels, with z \(<\) 0.5 galaxies in the left panel, and higher-redshift galaxies in the right panel. These are coloured by group and labelled by the group identifier listed in Table 2. phase model with a low-ionization phase traced by H i and C iii (among other ions) and a higher-ionization phase traced primarily by O vi. The low-ionization phase is consistent with photoionization in the stronger component, and the O vi in both components is consistent with collisionally ionized, T \(\gtrsim 10^{5}\) K gas. This would appear to be consistent with the O vi resulting primarily from outflowing material in both components. They note that the material observed would likely have been ejected from the central galaxy \(>600\) Myr ago. It therefore seems possible that the material we have modelled as a rotating disk has been recycled from this outflow, but only gas that has cooled efficiently is seen in this disk. In this scenario, the stronger absorption component consists of both the proposed cool disk and a warm outflow; the cool disk dominates this absorption component for the low-ionization phase, but less so for the O vi. This would explain the similar metallicity of both phases and components. Stripped material near the minor axis could also reproduce the observations in place of this possible outflow. Anshul et al. (2021) also find multiple phases in the absorption in B, with a broad Ly\(\alpha\) component alongside the O vi and a narrower component matching the C iii in a lower-ionization phase. They find that the low-ionization phase is consistent with photoionized gas at \(\sim\)1/10th solar metallicity, and that the O vi could be produced either by diffuse hot gas at similar metallicity, or by cooler collisionally-ionized gas with near-solar metallicity. They suggest that these could respectively trace either the diffuse, hot CGM or intra-group medium, or the interface between a low-ionization cloud and the hot-CGM (that cloud possibly originating from an outflow). If this absorption in B is due to a hot and diffuse CGM or intra-group medium, it must either be distorted or patchy, as a spherical distribution cannot match the ratio of absorption strengths in the two sightlines. 
Similarly, our biconical outflow models cannot reproduce this ratio, suggesting (if an outflow exists) either a substantial change in outflow rate with time, a very patchy medium, or distortion due to the interaction between the two galaxies. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline z & Galaxy & Lum (\(L_{\star}\)) & Inc & LOS & Imp (kpc) & Azimuth & log(N H i) & b ( km s\({}^{-1}\)) & \(\Delta\)v ( km s\({}^{-1}\)) & Other ions \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline 0.238 & 26677 & 0.22 & 67\({}^{\circ}\)\(\pm\) 1\({}^{\circ}\) & A & 262 & 81\({}^{\circ}\)\(\pm\) 1\({}^{\circ}\) & 13.99 \(\pm\) 0.20 & 43 \(\pm\) 7 & -10 \(\pm\) 100 & \\ & & & & A & 262 & 81\({}^{\circ}\)\(\pm\) 1\({}^{\circ}\) & 13.75 \(\pm\) 0.35 & 54 \(\pm\) 21 & +60 \(\pm\) 100 & O vi \\ & & & & B & 502 & 69\({}^{\circ}\)\(\pm\) 1\({}^{\circ}\) & (None, limit \(\sim\)12.8) & & & \\ & & & & C & 934 & 81\({}^{\circ}\)\(\pm\) 1\({}^{\circ}\) & (None, limit \(\sim\)12.9) & & & & \\ & & & & & & & & & & \\ & & & & & & & & & & \\ & & & & & & & & & & \\ & & & & & & & & & & \\ (22676) & 0.32 & & A & 859 & & & & & & \\ & & & & B & 1069 & & & & & \\ & & & & C & 1519 & & & & & \\ (33195) & 0.04 & & A & 919 & & & & & & \\ & & & & B & 616 & & & & & \\ & & & & C & 495 & & & & & \\ \hline \end{tabular} \end{table} Table 4: Summary of galaxy and absorber properties for group G-238. Any additional galaxies and metal absorbers have velocities shown relative to the first galaxy (26677). with columns as follows: (1) Group redshift; (2) Galaxy ID; (3) Galaxy luminosity (as a multiple of \(L_{\star}\); (4) Galaxy inclination; (5) Line-of-sight identifier; (6) Impact parameter between galaxy and line-of-sight at the group redshift; (7) Azimuthal angle between galaxy major axis and the line-of-sight. (8) Absorber column density; (9) Absorber Doppler parameter, (10) Velocity offset between galaxy and absorber (Any) additional galaxies and metal absorbers have velocities shown relative to galaxy 26677); (11) Any detected metal ions at the same redshift as this H i absorber. Figure 2: Stellar mass vs star-formation rate for galaxies in our sample. Faded points show the overall galaxy sample (identical to Figure 5 from Paper 1), whilst galaxies detailed in this work are bold and labelled with the galaxy MOS or MUSE ID as given in Table 7. Galaxies identified as star-forming are shown in blue, with non-star-forming galaxies in red. The grey dashed line indicates an sSFR of 0.02 Gyr\({}^{-1}\), an approximate match to the SF/non-SF designations that were made using template fitting. Masses are estimated using Equations 1 and 2 from Johnson et al. (2015) and star-formation rates estimated using the H\(\alpha\) or [OIII] (372 Å) luminosities. These measurements are detailed in Paper 1. The objects marked with triangles are upper limits where no clear emission line is detected. Note that the small number of non-star-forming galaxies with apparent extremely high SFRs are due to fringing effects in the VIMOS data (see T14 Section 3.1) being fit as emission lines. These erroneous measurements do not affect our results. Figure 3: Details of the absorption and galaxy environment around group G-238. The upper two rows of panels illustrate the galaxies and the models used, as follows. **Upper-Left:** HST image of the galaxy, with projected major axis shown in red. **Upper-Middle:** Velocity map based on emission lines detected in MUSE, scaled and rotated to match the HST image. 
Only spaxels with a 3\(\sigma\) line detection are shown. (Note that both galaxies in this group lie outside the MUSE fields.) **Upper-Right:** Wide view showing the location of the three QSOs (cyan, grey and orange connected dots representing QSOs A, B and C). The locations of other galaxies at this redshift are indicated by grey crosses; the estimated galaxy virial radius by a dashed black circle; and schematics of models proposed to fit the absorption by red and blue ellipses for outflows and disks respectively. **Central Panel:** The galaxies at this redshift, with the galaxies shown in the upper panels bolded, and others faded. Each galaxy is shown three times, showing the impact parameter between that galaxy and each of the three QSOs. The horizontal bar denotes the velocity uncertainty derived from the galaxy redshift measurement. **Bottom panel:** The transmission of the quasar spectra at the wavelength of Ly\(\alpha\) at the group redshift. The observed QSO spectra are shown as solid lines, with dashed lines giving the synthetic spectra produced by our models. Vertical lines passing through both panels show detected H i absorption components. The model shown in the lower panel is a disk with rotation velocity \(\sim 100\) km s\({}^{-1}\) around galaxy 29214, and an outflow with opening angle 20\({}^{\circ}\) and velocity 120 km s\({}^{-1}\). Additional absorbers identified by red ticks are mostly molecular lines at the redshift of the sub-DLA at z \(\approx 0.56\), except for the weak absorption in LOS-B (5), which is Ly\(\beta\) from z \(\approx 0.47\). Figure 4: Details of the absorption and galaxy environment around group G-399. The layout is identical to that shown in Figure 3, with kinematics measured from the H\(\alpha\) emission line seen in the MUSE data, and the model shown in the lower panel combines an outflow with 25\({}^{\circ}\) half-opening angle and 160 \(\rm km\,s^{-1}\) velocity with a disk with 150 \(\rm km\,s^{-1}\) rotation and 50 \(\rm km\,s^{-1}\) infall, both around galaxy A-69. All five absorption components labelled with red ticks are molecular lines associated with the sub-DLA at z\(\sim\)0.56. Note that galaxy A-69 lies at the edge of the MUSE field, so the velocity map is truncated approximately along the dashed grey line.

### G-876

A-32 is an \(\approx 0.3L_{*}\) galaxy at z \(\sim 0.88\), inclined at \(\sim 30^{\circ}\). It is paired with A-38, a galaxy of similar magnitude that is less than 30 kpc away. Neither galaxy shows any signs of morphological distortion in the HST image or any kinematic signatures of interaction between the two galaxies; both galaxies show a velocity gradient along their major axis that is likely due to rotation (with velocity \(\approx 100\) km s\({}^{-1}\)). This system is detailed in Table 6 and Figure 5. A-38 is redshifted by \(\sim\)50 km s\({}^{-1}\) relative to A-32, and is therefore at the same redshift as the absorption in sightline A, whilst sightline B is blueshifted by 100 km s\({}^{-1}\) relative to this galaxy. This also exhibits a \(\sim 100\) km s\({}^{-1}\) velocity gradient. Sightline A lies at a distance of \(\sim\)100 kpc along the major axis of A-32 and 75 kpc at \(\sim 20^{\circ}\) to the major axis of A-38, with sightline B at \(\sim\)600 kpc along the minor axis. Both show absorption, with H i column densities \(\sim 10^{15.8}\) and \(10^{15.4}\) cm\({}^{-2}\) and Doppler parameters 30 and 20 km s\({}^{-1}\) respectively.
Sightline A features O iii and O iv absorption at this redshift, whilst sightline B does not show any metal absorption. However, the detection limit for these ions allows for the gas seen in LOS-B to have similar O iii and O iv column densities to that in A. Note that as both Ly\(\alpha\) and Ly\(\beta\) lie in the FOS gratings at this redshift, the Doppler parameters are not resolved; higher-order lines appearing in COS constrain the Doppler widths but reveal no additional structure. These galaxies lie beyond the redshift of QSO-C, so no absorption can be detected in the third sightline. The relatively similar column densities alongside a large difference in impact parameter prevent any of our models from simultaneously matching both absorbers. That the absorption in A is close to the major axis of both galaxies, whilst B is near the minor axis of both galaxies, would suggest that these could be a disk and outflow respectively. If the absorption in sightline A is associated with A-32, it is co-rotating and could be part of an extended disk with rotation velocity \(\approx\)100 km s\({}^{-1}\) (depending on any infall component). If associated with A-38, the absorption in A must have comparable rotation and infall velocities, in order to produce absorption with no clear line-of-sight velocity offset. Alternatively, a power-law halo around A-38 could reproduce this lack of velocity offset. An outflow from either galaxy can also reproduce the absorption in B. In order to reproduce the velocity offset, these putative outflows would require velocities of \(\approx\)140 km s\({}^{-1}\) (from A-32) or \(\approx\)210 km s\({}^{-1}\) (from A-38). We note that A-38 cannot produce both a disk matching A and an outflow matching B, as the need for the disk velocity offsets to 'cancel' fixes the galaxy orientation, whilst an outflow matching B requires the opposite orientation. This still leaves several possible combinations of models that can reproduce the observations. A reasonable fit is shown in Figure 5, and combines an outflow with velocity 140 km s\({}^{-1}\) and opening angle 40\({}^{\circ}\) and a disk with rotation velocity 100 km s\({}^{-1}\), both originating from A-32. We also note that galaxy 26501 is substantially brighter than either A-32 or A-38, so may be contributing to the absorption in LOS-B. Furthermore, the velocity difference between the two galaxies and the location of each galaxy near the major axis of the other suggest orbital angular momentum with similar alignment to the rotation of both galaxies. Therefore the absorption that could be identified as a disk around one galaxy may also be tidal material resulting from their interaction, or larger-scale accretion into the group, rather than associated with one of the galaxies.

## 5 Discussion

The model fits shown above can now be discussed in the context of our previous works, especially the sample of isolated galaxies considered in Paper 2, as well as results from the literature. We summarize the best-fitting results for each galaxy group in Table 7.
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline z & Galaxy & Lum (\(L_{\star}\)) & Inc & LOS & Imp (kpc) & Azimuth & log(N H i) & b ( km s\({}^{-1}\)) & \(\Delta\)v ( km s\({}^{-1}\)) & Other ions \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline 0.399 & A-56 & 0.01 & \(35^{\circ}\pm 35^{\circ}\) & A & 122 & \(76^{\circ}\pm 45^{\circ}\) & 13.15 \(\pm\) 0.09 & 15 \(\pm\) 6 & -100 \(\pm\) 40 & C iii, O vi \\ & & & & A & 122 & \(76^{\circ}\pm 45^{\circ}\) & 14.14 \(\pm\) 0.03 & 35 \(\pm\) 2 & -10 \(\pm\) 40 & C iii, O vi \\ & & & & B & 325 & \(38^{\circ}\pm 45^{\circ}\) & 16.77 \(\pm\) 0.02 & 24.9 \(\pm\) 0.5 & -90 \(\pm\) 40 & C\({}_{1}\), N iii, Si iii, O iii, O vi \\ & & & & C & 921 & \(11^{\circ}\pm 45^{\circ}\) & (None, limit \(\approx\)13.0) & & & \\ A-69 & 0.19 & \(77^{\circ}\pm 2^{\circ}\) & A & 186 & \(82^{\circ}\pm 2^{\circ}\) & & & (-40) & \\ & & & & B & 254 & \(74^{\circ}\pm 2^{\circ}\) & & & & \\ & & & & C & 857 & \(44^{\circ}\pm 2^{\circ}\) & & & \\ B-7 & 0.36 & \(82^{\circ}\pm 1^{\circ}\) & A & 285 & \(70^{\circ}\pm 1^{\circ}\) & & (-150) & \\ & & & & B & 205 & \(12^{\circ}\pm 1^{\circ}\) & & & \\ & & & & C & 846 & \(1^{\circ}\pm 1^{\circ}\) & & & \\ (26721) & 0.2 & & & A & 937 & & & (+80) & \\ & & & & B & 1318 & & & & \\ & & & & C & 1626 & & & & \\ (34572) & 1.2 & & & A & 1251 & & & (+210) & \\ & & & & B & 889 & & & & \\ & & & & C & 280 & & & \\ \hline \end{tabular} \end{table} Table 5: Summary of galaxy–absorber group at z \(\sim\) 0.399. Any additional galaxies and metal absorbers have velocities shown relative to the first galaxy (A-56). Columns are identical to those in Table 4. ### Model success Firstly, our disk and outflow models provide a plausible fit for 21 of the 28 detected H i components within 500 \(\,\mathrm{km\,s^{-1}}\) and 500 kpc of our sample galaxies. This 75% success rate is somewhat higher than the \(\approx\)50-60% found for the isolated galaxy sample in Paper 2 (13 of 26 within 500 \(\,\mathrm{km\,s^{-1}}\) and 12 of 20 within 300 \(\,\mathrm{km\,s^{-1}}\)), a \(\approx\)2\(\sigma\) difference under binomial statistics. We note that our'success rate' represents an upper limit on the fraction of detected H i absorbers that originate in the disk, halo and outflow structures described by our models. A successful fit to our models only tentatively identifies an absorber as a possible disk/halo/outflow. Several alternative origins for apparently successful fits are discussed below. It may be expected that for group galaxies with gravitational interactions, such forces would distort any structure in the CGM and reduce the rate at which these models can reproduce the absorption at large scales. This is seen in, for example, the M81 group, where 21 cm emission is seen tracing both a disk-like and biconical outflow structure around M82 on small scales, but distorted in the direction of M81 on larger scales (e.g. Sorgho et al., 2019). Such distortions would be expected to reduce the success rate of our models in Figure 5: Details of the absorption and galaxy environment around group G-876. The layout is identical to that shown in Figure 3, with kinematics measured from the [O ii] emission line seen in the MUSE data, and the model shown in the lower panel combines an outflow with velocity 160 \(\,\mathrm{km\,s^{-1}}\)and opening angle 40\({}^{\circ}\) and a disk with rotation velocity 100 \(\,\mathrm{km\,s^{-1}}\). 
galaxy groups, although in some circumstances our disk and outflow models could appear successful even with these distortions present. Our higher success rate for group galaxies over isolated galaxies could suggest that galaxy interactions are not having a significant impact on our sample. However, there are several effects that could counter any impact of group interactions. Firstly, the larger number of galaxies near to each absorber increases the number of free parameters available for our models, and therefore the likelihood of obtaining a reasonable fit even if the models do not reflect the true physical state of the gas (or reflect the state of the gas poorly due to effects such as intermittent outflows and changes in ionization state, as well as distortions due to group interactions). However, the groups for which some H i absorption cannot be fit by our models appear to be those with the largest number of galaxies, whereas for those with 2-4 galaxies we have been able to find a plausible fit to all identified absorbers (a difference of \(\approx 2\sigma\) in mean N\({}_{gals}\)). This suggests that the increased number of free parameters, largest for groups with many galaxies, is not the primary reason for our high success rate. We also search for other differences between the properties of the groups for which all absorbers were fit, and those with some absorption that could not be fit. No significant differences were found in their redshifts, total stellar masses, impact parameters to the nearest or most massive galaxy, ratios of stellar masses or luminosities of the brightest two galaxies, or the projected separations of the tightest pair of galaxies in each group. These tests therefore provide little indication of the likely origins of unexplained absorbers. We do find that individual absorbers that were not consistent with any of our models have higher H i column densities than those we could successfully fit, and discuss these in more detail in Sections 5.4 and 5.5. Second, the larger number of galaxies, and resulting smaller impact parameters between the lines-of-sight and the nearest galaxy, may also contribute to the success of our models. With the known correlation between impact parameter and column density (e.g. Werk et al., 2014; Wilde et al., 2021), this contributes to generally higher H i column densities in our groups (median H i column densities of \(10^{14.5}\) and \(10^{13.8}\)cm\({}^{-2}\) in absorbers associated with this work and our Paper 2 sample respectively). These higher column densities also make successful fits more likely, as weak absorption produced by the toy models can more easily be 'hidden' under other absorbers, and column densities are less constrained once absorption begins to saturate. The smaller impact parameters also increase the likelihood of probing any inflowing or outflowing structures closer to the galaxy than any distortions caused by interactions within the group. Third, it is possible that some absorption resulting from group interactions could be fit by our models regardless. This could include material along the plane of a galaxy interaction masquerading as an accreting disk, and stripped material near the galaxy minor axis appearing consistent with an outflow. These false fits are unlikely to occur where multiple absorbers are reproduced by a single model, due to the extra constraints, but may contribute to some of our model fits based on a single absorber. 
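As a point of reference for the success-rate comparison above (21 of 28 components fit here versus 13 of 26 in Paper 2), a simple two-proportion z-test reproduces a difference of roughly 2\(\sigma\); this is an illustrative calculation under an assumed test choice, not necessarily the exact binomial treatment used.

```python
import numpy as np

def two_proportion_z(k1, n1, k2, n2):
    """Significance (in sigma) of the difference between two fitted fractions."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 21 of 28 components fit in groups vs 13 of 26 for the Paper 2 isolated sample
print(round(two_proportion_z(21, 28, 13, 26), 1))   # ~1.9 sigma
```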
Studies of cool CGM gas using Mg ii absorption have come to differing conclusions on the origin of the more extended absorption in galaxy groups. Bordoloi et al. (2011) suggest that a superposition of CGM absorption from the constituent galaxies can reproduce the observed results, whereas Nielsen et al. (2018) prefer a model in which absorbers are generally associated with the intra-group medium rather than any individual galaxy. Using samples extending to slightly higher redshifts, Fossati et al. (2019) suggest that this is material originating primarily from tidal interactions between group galaxies, whilst Dutta et al. (2020) support a model with contributions from individual halos and tidal interactions. Our models cannot directly test for tidal material (although we do discuss this briefly in Section 5.4), but do produce a superposition of possible structures sometimes found in the CGM of isolated galaxies. However, this differs from the superposition model tested in Bordoloi et al. (2011), Nielsen et al. (2018) and Dutta et al. (2020) in that the covering fraction of these CGM structures in groups is not constrained to be the same as that around isolated galaxies. Our high model success rate provides some support for a superposition model for groups using our definition and sample selection, but does not rule out a contribution from intra-group and tidal material. The apparent difference between this conclusion and that of Nielsen et al. (2018) may in part result from our larger linking lengths, so our group sample contains some galaxy pairs with larger separations, which are less likely to be affected by group interactions. Our sample also features galaxies at large impact parameters that may explain weak absorbers seen in group environments, which are not satisfactorily explained under their superposition model. Dutta et al. (2020) found a similar result, ruling out superposition as the origin for their strongest and weakest absorbers. Nielsen et al. (2018) found that combining the expected galaxy-absorber velocity offsets (as observed around isolated galaxies) with the galaxy-galaxy \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline z & Galaxy & Lum (\(L_{\star}\)) & Inc & LOS & Imp (kpc) & Azimuth & log(N H i) & b (km s\({}^{-1}\)) & \(\Delta\nu\) (km s\({}^{-1}\)) & Other ions \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline 0.876 & A-32 & 0.30 & \(55^{\circ}\pm 3^{\circ}\) & A & 97 & \(19^{\circ}\pm 4^{\circ}\) & \(15.78\pm 0.30\) & \(30\pm 5\) & \(40\pm 20\) & O iv (0), O iii (-10) \\ & & & & B & 597 & \(79^{\circ}\pm 4^{\circ}\) & \(15.45\pm 0.47\) & \(22\pm 3\) & -50 \(\pm 20\) & \\ & & & & B & 588 & \(16^{\circ}\pm 5^{\circ}\) & & & (+50) & \\ & & & & & & & & & \\ & (26501) & 0.80 & & A & 879 & & & & (+30) & \\ & & & & B & 1062 & & & & \\ & (29799) & 0.38 & & A & 1197 & & & & (+180) & \\ & & & & B & 755 & & & & \\ \hline \end{tabular} \end{table} Table 6: Summary of galaxy–absorber group G-876. Any additional galaxies and metal absorbers have velocities shown relative to the first galaxy (A-32). Note that this group is beyond the redshift of QSO-C, so no absorption could be detected. Columns are identical to those in Table 4. offsets seen in groups produced a model absorber-absorber velocity distribution significantly wider than observed. However, possibly due to including faint MUSE galaxies near the lines-of-sight, our groups exhibit smaller galaxy-galaxy velocity differences than their sample, reducing this inconsistency. 
We also note that our sample includes galaxies fainter than those included in the Bordoloi et al. (2011) sample. Their lack of faint galaxies likely leads to some interacting galaxies being classified as isolated, therefore making the absorption profiles of group and isolated galaxies more similar. Our high success rate at reproducing absorption in our sample of galaxy groups suggests that this superposition of disk and outflow structures, similar to those sometimes found around isolated galaxies, may explain a substantial fraction of absorption found around these groups. However, the increase in parameter space due to the larger number of galaxies, effects due to the higher column densities and smaller impact parameters, and absorption from other sources (e.g. intra-group and tidal material) mis-identified as disk/outflow material, are all likely to contribute to the higher success rate found \begin{table} \begin{tabular}{c c c c c c} \hline Group & Galaxy & Lum (\(L_{\star}\)) & SF-Class & Kinematics & Model(s) \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline G-202 & B-22 & 0.02 & SF & Poor & (4/5 absorbers) B-22 outflow, \(\theta\)=15\({}^{\circ}\), v=250 km s\({}^{-1}\), extent \(>\) 100 kpc \\ & 25962 & 0.07 & non-SF & No & AND 31787 outflow, \(\theta\)=20\({}^{\circ}\), v=50 km s\({}^{-1}\), extent \(>\) 400 kpc \\ & 31704 & 0.03 & SF & No & AND 32778 disk, height ratio \(\approx\)10, v\({}_{\phi}\)= 120 km s\({}^{-1}\), v\({}_{\nu}\)= 0 km s\({}^{-1}\), extent \(>\) 400 kpc \\ & 31787 & 0.08 & SF & No & (other options possible, e.g. 25962 disk, but less likely) \\ & 32497 & 1.1 & non-SF & No & \\ & 32778 & 0.4 & SF & No & \\ G-238 & 26677 & 0.2 & SF & No & (2/2 absorbers) 26677 outflow, \(\theta\)=20\({}^{\circ}\), v=120 km s\({}^{-1}\), extent \(>\) 260 kpc \\ & 29214 & 0.1 & SF & No & AND 29214 disk, height ratio \(\approx\)40, v\({}_{\phi}\)= 100 km s\({}^{-1}\), v\({}_{\nu}\)= 30 km s\({}^{-1}\), extent \(>\) 240 kpc \\ & & & & (29214 disk and halo possible instead, with similar disk parameters) \\ G-383 & A-48 & 0.03 & non-SF & Poor & (3/3 absorbers) A-48 outflow, \(\theta\)=10\({}^{\circ}\), v=600 km s\({}^{-1}\), extent \(>\) 40 kpc \\ & A-49 & 0.01 & SF & Poor & AND A-49 disk, height ratio \(\approx\)100, v\({}_{\phi}\)= 150 km s\({}^{-1}\), v\({}_{\nu}\)= 40 km s\({}^{-1}\), extent \(>\) 400 kpc \\ & & & & (A-48 disk with \(v_{\phi}\)= 320 km s\({}^{-1}\), v\({}_{\nu}\)= 0 km s\({}^{-1}\)also possible) \\ & & & & (some parameters very uncertain due to edge-on galaxy pair) \\ G-399 & A-56 & 0.01 & SF & No & (2/3 absorbers) A-69 outflow, \(\theta\)=25\({}^{\circ}\), v=160 km s\({}^{-1}\), extent \(>\) 180 kpc \\ & A-69 & 0.19 & SF & Poor & AND A-69 disk, height ratio \(\approx\)100, v\({}_{\phi}\)= 150 km s\({}^{-1}\), v\({}_{\nu}\)= 50 km s\({}^{-1}\), extent \(>\) 200 kpc \\ & B-7 & 0.36 & non-SF & No & (strongest absorber not fit by any toy model) \\ G-517 & B-34 & 0.12 & SF & No & (No detected absorption) \\ & B-43 & 0.6 & SF & Yes & (Upper limit logN =13.3) \\ G-536 & A-36 & 0.8 & SF & Yes & (4/7 absorbers) A-36 outflow, \(\theta\)=55\({}^{\circ}\), v=140 km s\({}^{-1}\), extent \(>\) 120 kpc \\ & A-37 & 0.3 & SF & Poor & AND A-40 disk, height ratio \(\approx\)100, v\({}_{\phi}\)= 110 km s\({}^{-1}\), v\({}_{\nu}\)= 60 km s\({}^{-1}\), extent \(>\) 400 kpc \\ & A-40 & 0.02 & SF & No & (other absorption may be due to stripping in galaxy interactions, A-37 disk also possible) \\ G-558 & A-72 & 0.10 & SF & Yes & (2/3 absorbers) A-72 outflow, \(\theta\)=40\({}^{\circ}\), v=150 
km s\({}^{-1}\), extent \(>\) 500 kpc \\ & A-75 & 0.03 & SF & Yes & OR A-75 outflow, \(\theta\)=50\({}^{\circ}\), v=70 km s\({}^{-1}\), extent \(>\) 350 kpc \\ & A-77 & 0.04 & SF & No & AND \\ & & & & & A-75 disk, height ratio \(\approx\)70, v\({}_{\phi}\)= 90 km s\({}^{-1}\), v\({}_{\nu}\)= 50 km s\({}^{-1}\), extent \(>\) 200 kpc \\ & & & & & OR A-75 halo, \(v_{\phi}\) = 0, \(\alpha\)\(\geq\) 2 \\ & & & & (sub-DLA not fit, absorber is likely due to a galaxy that is not detected) \\ G-876 & A-32 & 0.30 & SF & Yes & (2/2 absorbers) A-32 disk, height ratio \(\approx\)20, v\({}_{\phi}\)= 100 km s\({}^{-1}\), v\({}_{\nu}\)= 0 km s\({}^{-1}\), extent \(>\) 100 kpc \\ & A-38 & 0.24 & SF & Yes & OR A-38 disk, height ratio \(\approx\)10, v\({}_{\phi}\)= 100 km s\({}^{-1}\), v\({}_{\nu}\)= 100 km s\({}^{-1}\), extent \(>\) 80 kpc \\ & & & & & AND \\ & & & & & & \\ & & & & & & \\ & & & & & & \\ & & & & & & \\ & & & & & & \\ \end{tabular} \end{table} Table 7: Summary of galaxies in groups and the best-fitting toy models (ordered by redshift). Only galaxies with estimated position angles are shown in this table. Columns are as follows: (1) Group identifier; (2)-(5) Galaxy IDs, luminosities, star-formation class derived from template fitting and presence of kinematics, as given in Table 3; (6) Brief description of model combinations found to produce a reasonable fit. fitting these models to gas around groups than around isolated galaxies. ### Model parameters We briefly discuss the model parameters that produce the best fit for the absorption near these galaxy groups. In most cases these are similar to those found near isolated galaxies in Paper 2, where we discuss the parameters in more detail, but some differences are highlighted here. #### 5.2.1 Outflows Table 8 lists the possible outflows that can reproduce some of the Ly\(\alpha\) absorption seen near these groups of galaxies. The possible outflows have model parameters that are generally consistent with our results from Paper 2, with a range of half-opening angles extending to \(\approx\) 60deg and velocities mostly between 50 and 250 \(\,\mathrm{km\,s^{-1}}\) (excepting the proposed A-48 outflow, for which our best estimate is 600 \(\,\mathrm{km\,s^{-1}}\) but much lower velocities of \(\sim\)200 \(\,\mathrm{km\,s^{-1}}\) lie within the 1\(\sigma\) range, as the galaxy is very close to edge-on). The larger number of possible outflows over our Paper 2 result is due to a larger number of galaxies in the vicinity of each absorber, so each absorber is more likely to be consistent with at least one possible outflow model. There is also very little difference in the extents of these outflows (median 200 kpc in groups, 180 kpc isolated; mean 280 and 270 kpc), whether or not these are normalized to the virial radius (mean extent \(\approx\)1.9 \(r_{vir}\) and median \(\approx\)1.5 \(r_{vir}\) for both samples). The median impact parameter to our putative outflows is similar to the median impact parameter between detected H i and the nearest galaxy (190 kpc). This supports the existence of outflows extending to at least the virial radius from isolated and group galaxies, more similar to those in the FIRE simulations with cosmic rays included (e.g. Hopkins et al., 2021), than other models suggesting outflows rarely extend past the virial radius. Using our best-fit results, seven of the eleven possible outflows would exceed escape velocity were the galaxy isolated, comparable to the two of four outflows exceeding escape velocity in Paper 2. 
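The escape-velocity comparison above can be illustrated with a simple point-mass estimate evaluated at the radius probed by the sightline; the form \(v_{esc}=\sqrt{2GM_{h}/r}\), and the choice of radius and halo mass in the example below (taken from Table 8 for galaxy A-69), are assumptions of this sketch rather than the exact definition adopted in our modelling.

```python
import numpy as np

G_KPC = 4.301e-6   # gravitational constant [kpc (km/s)^2 / M_sun]

def v_escape_kms(m_halo_msun, r_kpc):
    """Point-mass escape velocity at radius r: v_esc = sqrt(2 G M / r)."""
    return np.sqrt(2.0 * G_KPC * m_halo_msun / r_kpc)

# e.g. A-69 (log M_h ~ 11.6, Table 8), probed at ~200 kpc, model outflow 160 km/s
print(round(v_escape_kms(10 ** 11.6, 200.0)))   # ~131 km/s, so the model outflow exceeds it
```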
Although the very small sample in Paper 2 provided some suggestion of a correlation between sSFR and \(v/v_{esc}\), we do not find a significant correlation using the larger sample here (\(\approx\) 1\(\sigma\)). Schrotter et al. (2019) note that the outflows they find using Mg ii absorption tend to only exceed escape velocity from low-mass galaxies (\(\mathrm{M_{\bullet}/M_{\odot}}\lesssim 10^{9.6}\)). Whilst our two model outflows from galaxies with smaller masses than this do indeed appear to exceed escape velocity, several of our higher-mass galaxies also exhibit outflows exceeding their escape velocities. These outflows are probed at scales larger than the virial radius, whereas our two outflows around large galaxies probed on scales \(<r_{vir}\) do not achieve escape velocity. This appears to support the continued acceleration at large radii seen in the simulations by Hopkins et al. (2021) with cosmic rays included, although, as they discuss, only the material near or above the escape velocity would be capable of reaching these scales in the absence of additional acceleration. In Paper 2, we noted that the model outflows in Schrotter et al. (2019) did not feature opening angles as wide as the 45-60deg found in our isolated sample, possibly due to their stellar masses tending to be larger and resulting in an increased collimation effect. Our group sample includes a larger number of high-mass galaxies, especially at high redshifts, and our model half-opening angles remain large for these galaxies. However, our models suggest narrower outflows provide a better fit to the galaxies at lower redshift, which also tend towards lower masses, opposite to the trend expected due to collimation. This trend may be weaker than it appears, as some of the higher-redshift objects (in the A-16 and A-32 groups) are unresolved and the absorber width does not help to constrain the outflow opening angle. If those relying on unresolved absorption are removed, our total (group and isolated) sample of outflows has opening angles much more similar to those seen in Schrotter et al. (2019), but still shows a trend of wider outflows at higher masses and redshifts. It is unclear whether the redshift or mass difference is contributing to the difference in opening angle, but there are observations suggesting that outflows at higher redshifts are closer to isotropic with respect to the galaxy (e.g. Chen et al., 2021). #### 5.2.2 Disks We show the best-fitting parameters for absorption fit by our disk model in Table 9. The successful disk-like structures also appear similar to our results from Paper 2, with scale height ratios from 10 to \(\sim\) 300, and extents ranging from less than 100 kpc to more than 1 Mpc. The average extents (\(\approx\) 300 kpc) are also similar. However, we find substantially smaller circular velocities than around isolated galaxies (by \(\approx\) 270 to 120 \(\,\mathrm{km\,s^{-1}}\)median, or 3 \(v_{vir}\) to 1), and also produce a better fit using smaller infall velocities for our group models. These infall velocities are often better constrained than for isolated galaxies (for which we generally assumed an infall velocity of 0.6\(v_{vir}\)), as with more high-column density absorption in groups it is more likely that multiple absorbers can be fit by a single model. 
Given the small sample size, especially around isolated galaxies, this \(\approx\) 1.5\(\sigma\) difference in circular velocity may simply be due to the galaxy properties and sightline configurations that feature in the sample (although the extents and impact parameters are similar). The assumed infall velocity for isolated galaxies, and its effect on the rotational velocities required to fit the observed absorption, may also contribute, but changing this assumption will increase the rotation velocity for some models and decrease it for others. A real difference between circular velocities of accreting material onto isolated and group galaxies may be detected here, but it is difficult to demonstrate when using absorption at only one or two locations around each galaxy. For example it is possible that, rather than seeing absorption due to gas falling onto one of the individual galaxies within our groups, we are seeing material falling into the group halo, which would likely extend further than the halo of an isolated galaxy of the same stellar mass as one of the group galaxies. Interaction between infalling material and group gas could slow its apparent velocity and contribute to the observed difference (as a similar interaction between the CGM and galaxy ISM appears to cause velocity lag in the ISM, Bizyaev et al., 2022). Alignment between the group galaxies and large-scale structure (e.g. Tempel and Libeskind, 2013; Zhang et al., 2015; Hirv et al., 2017) could lead to accretion at larger scales looking similar to accretion onto an individual galaxy (possibly supported by several of our groups featuring multiple galaxies with similar position angles, most noticeably G-383 and G-876). Accretion from large-scale structure into a galaxy group may also have been observed by Bielby et al. (2017). #### 5.2.3 Azimuthal angle dependency We also briefly note that attempting to identify disks and outflows purely by cutting in position angle and inclination (as in e.g. Bordoloi et al., 2011; Zabl et al., 2019; Schroetter et al., 2019) produced results inconsistent with our models when applied to our Paper 2 sample of isolated galaxies, although our models were more likely to support disks and outflows near the major and minor axes respectively. We apply this same cut here, and show the results in Figure 6. All of our model outflows lie in the region that would be identified as such using these geometric cuts, whilst our model disks spread over both regions. These results reinforce our conclusions from Paper 2, showing that these geometric cuts are useful but are unlikely to produce pure disk or outflow samples. ### Correlations between group size and absorption Whether the gas around galaxy groups results primarily from a superposition of the CGM of the individual galaxies, or from stripped material, increasing the number of galaxies in a group is expected to increase the column densities of gas seen around the group. It is therefore unsurprising that we find a correlation between the number of galaxies in a group, and the number of H i absorption features identified within a 500 \(\,\mathrm{km\,s^{-1}}\) window. We also find a strong correlation between the column density of an absorber, and the number or total mass of nearby galaxies (using any combination of a 500 or 300 \(\,\mathrm{km\,s^{-1}}\) window in redshift and 300, 500 or 1000 kpc in impact parameter), with Spearman co-efficients suggesting > 95% confidence in all of these cases. 
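A minimal sketch of the rank-correlation test quoted here, assuming arrays of H i column densities and counts of nearby galaxies as inputs (the function and variable names are illustrative, not those of our analysis code):

```python
import numpy as np
from scipy.stats import spearmanr

def group_size_correlation(log_n_hi, n_nearby_galaxies):
    """Spearman rank correlation between H I column density and the number of
    galaxies within the chosen velocity / impact-parameter window."""
    rho, p_value = spearmanr(log_n_hi, n_nearby_galaxies)
    return rho, p_value   # p_value < 0.05 corresponds to > 95% confidence
```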
These correlations persist if the sample is split to only absorbers in the COS spectra (z \(\lesssim\) 0.45) and those in the FOS spectra. This relationship between group size and absorption is therefore present across our 0 \(<\) z \(<\) 1 redshift range and is robust to the changing galaxy and absorber detection limits across this range. The total stellar mass of nearby galaxies to each H i absorber (using a 500 \(\,\mathrm{km\,s^{-1}}\) velocity window and 1000 kpc impact parameter cut) is shown in Figure 7. There is no significant difference between the number or total mass of nearby galaxies surrounding absorbers identified as disks and those identified as outflows using \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline Galaxy & z & M\({}_{\star}\) & M\({}_{\mathrm{H}}\) & SFR & sSFR & b\(max\) & Extent & r\({}_{vir}\) & \(\theta_{out}\) & \(\nu_{ext}\) \\ & & \(\log_{10}(\mathrm{M_{\odot}})\) & \(\log_{10}(\mathrm{M_{\odot}})\) & (\(\mathrm{M_{\odot}/yr}\)) & Gyr\({}^{-1}\) & (kpc) & (kpc) & (\(\mathrm{kpc}\)) & (\(\mathrm{kpc}\)) & (\(\mathrm{km\,s^{-1}}\)) & (\(\mathrm{km\,s^{-1}}\)) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) \\ \hline B-22 & 0.202 & 8.4 \(\pm\) 0.2 & 10.9 \(\pm\) 0.3 & 0.06 \(\pm\) 0.04 & 0.22\({}^{+0.19}_{-0.10}\) & 100 & 110 \(\pm\) 10 & 90 \(\pm\) 20 & 15 & 250 & 110 \(\pm\) 30 \\ 31787 & 0.202 & 9.7 \(\pm\) 0.1 & 11.5 \(\pm\) 0.2 & 0.5 \(\pm\) 0.2 & 0.08 \(\pm\) 0.04 & 390 & 460 \(\pm\) 10 & 130 \(\pm\) 30 & 20 & 50 & 60\({}^{\circ}\)\(\pm\) 30 \\ A-48 & 0.383 & 9.6 \(\pm\) 0.2 & 11.5 \(\pm\) 0.3 & 0.04 \(\pm\) 0.01 & 40 & 40 \(\pm\) 10 & 120 \(\pm\) 30 & 10 & 600 & 210 \(\pm\) 50 \\ A-69 & 0.399 & 9.8 \(\pm\) 0.2 & 11.6 \(\pm\) 0.3 & 0.4 \(\pm\) 0.2 & 0.06 \(\pm\) 0.03 & 190 & 200 \(\pm\) 10 & 130 \(\pm\) 30 & 25 & 160 & 130 \(\pm\) 40 \\ A-36 & 0.536 & 10.9 \(\pm\) 0.2 & 12.9 \(\pm\) 0.7 & 6 \(\pm\) 3 & 0.08 \(\pm\) 0.055 & 120 & 140 \(\pm\) 10 & 360 \(\pm\) 190 & 55 & 140 & 650 \(\pm\) 200 \\ A-72 & 0.558 & 9.7 \(\pm\) 0.5 & 11.6 \(\pm\) 0.4 & 0.7 \(\pm\) 0.2 & 0.13\({}^{+0.20}_{-0.08}\) & 430 & 530 \(\pm\) 20 & 120 \(\pm\) 60 & 40 & 150 & 60\({}^{\circ}\)\(\pm\) 30 \\ A-75 & 0.558 & 8.3 \(\pm\) 0.4 & 10.9 \(\pm\) 0.4 & 0.5 \(\pm\) 0.1 & 2.3 \({}^{+1.3}_{-0.7}\) & 410 \(\pm\) 20 & 80 \(\pm\) 30 & 50 & 70 & 30\({}^{\circ}\)\(\pm\) 10 \\ A-32 & 0.876 & 10.6 \(\pm\) 0.2 & 12.4 \(\pm\) 0.6 & 7 \(\pm\) 2 & 0.16 \(\pm\) 0.11 & 600 & 740 \(\pm\) 30 & 210 \(\pm\) 100 & 40 & 160 & 50 \(\pm\) 30 \\ A-38 & 0.876 & 10.8 \(\pm\) 0.3 & 12.6 \(\pm\) 0.8 & 4.7 \(\pm\) 1.1 & 0.08 \(\pm\) 0.055 & 590 & 730 \(\pm\) 40 & 250 \(\pm\) 160 & 35 & 210 & 160 \(\pm\) 100 \\ B-19 & 0.907 & 10.4 \(\pm\) 0.5 & 12.2 \(\pm\) 0.8 & 1.1 \(\pm\) 0.3 & 0.05\({}^{+0.10}_{-0.02}\) & 170 & 240 \(\pm\) 20 & 170 \(\pm\) 110 & 60 & 80 & 250 \(\pm\) 160 \\ A-16 & 0.907 & 10.4 \(\pm\) 0.5 & 12.2 \(\pm\) 0.9 & 2.3 \(\pm\) 0.6 & 0.09\({}^{+0.10}_{-0.05}\) & 180 & 230 \(\pm\) 30 & 170 \(\pm\) 120 & 60 & 150 & 250 \(\pm\) 170 \\ \hline \end{tabular} \end{table} Table 8: Model outflow properties around galaxies for which outflows can reproduce some of the observed absorption components. Column descriptions: (1)-(4) are from Table 3; (5) model scale height ratio (i.e. 
relative disk thickness); (6) maximum impact parameter at which this structure is detected (7) maximum observed disk extent (the 3-d distance from galaxy to the point on the line-of-sight where it intersects the disk plane, for the sightline with largest impact parameter with detected absorption); (8) virial radius; (9) virial velocity; (10) model circular velocity; (11) model radial/infall velocity. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline Galaxy & z & M\({}_{\star}\) & M\({}_{\mathrm{H}}\) & b\({}_{\mathrm{r}}/\)h\({}_{\mathrm{z}}\) & b\({}_{\mathrm{max}}\) & Extent & r\({}_{vir}\) & v\({}_{vir}\) & v\({}_{\phi}\) & v\({}_{r}\) \\ & & \(\log_{10}(\mathrm{M_{\odot}})\) & \(\log_{10}(\mathrm{M_{\odot}})\) & (kpc) & (kpc) & (kpc) & (kpc) & (\(\mathrm{km\,s^{-1}}\)) & (\(\mathrm{km\,s^{-1}}\)) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline 32778 & 0.202 & 10.2 \(\pm\) 0.1 & 11.8 \(\pm\) 0.3 & 20 & 410 & 420 \(\pm\) 10 & 170 \(\pm\) 60 & 130 \(\pm\) 50 & 150 & \(<\) 20 \\ 29214 & 0.238 & 8.9 \(\pm\) 0.1 & 11.1 \(\pm\) 0.3 & 40 & 240 & 240 \(\pm\) 10 & 100 \(\pm\) 20 & 70 \(\pm\) 20 & 100 & 30 \\ A-49 & 0.383 & 8.7 \(\pm\) 0.6 & 11.1 \(\pm\) 0.4 & 200 & 410 & 410 \(\pm\) 10 & 90 \(\pm\) 50 & 80 \(\pm\) 40 & 170 & \(<\) 20 \\ A-69 & our toy models (using any of the above cuts). As in Paper 2, we find no significant difference between the column densities of disk and outflow absorption. We do find a small difference in Doppler widths (outflows with median \(\approx\)50 km s\({}^{-1}\), disks \(\approx\)40 km s\({}^{-1}\)), but large uncertainties due to unresolved absorption mean that this is also not significant. The absorbers within these groups that could not be fit using either model have higher H i column densities and exist in more massive groups, although the significance is again limited by a small sample size. If these consist of material that has been ejected from galaxies and forms an intra-group medium, this would likely be more prevalent in denser, more massive groups with more interactions between galaxies, as is found here. ### Galaxy interactions Tidal interactions between galaxies in groups are expected to strip gas out of the galaxies, which should be detectable (e.g. Morris and van den Bergh, 1994). Streams of stars and gas attributed to tidal effects are seen around nearby galaxies (e.g. Majewski et al., 1996; Ibata et al., 2001; Foster et al., 2014; Sorgho et al., 2019), and there are suggestions of observed tidal material around more distant galaxies using emission lines (e.g. Epinat et al., 2018; Johnson et al., 2018). However, identifying this material in absorption is challenging. By considering absorption that could not be reproduced using our toy models, and by looking at close pairs of galaxies most likely to be interacting, we attempt to identify tidal material in our observations. Although not included in the main sample due to the difficulty of fitting position angle and inclination measurements, the Q0107 field features a clear ongoing galaxy merger, denoted B-39 and illustrated in Figure 8. We do not attempt to fit our disk/outflow models to this merger. Additionally, the absorption at this redshift is blended with Ly\(\beta\) from z\(\approx\)0.72, making it difficult to identify associated Ly\(\alpha\) components. 
The VPFIT results suggest two weak components in sightline A, one close to the merger systemic redshift and another \(\approx\)120 km s\({}^{-1}\) redward, and one stronger component in sightline C, with a small blueshift relative to the systemic redshift. Based on the directions and velocities of the stellar arms/streams seen in the Figure, absorption from an extension of these streams would have the opposite velocity offset to that observed in both of these lines-of-sight. However, given the lack of other nearby galaxies, it seems likely that at least some of the absorption observed can be attributed to the interaction between these galaxies, possibly on a previous pass between these galaxies such that their orbits have changed the line-of-sight velocity relative to the large-scale absorption. Simulations suggest stripped gas may be found opposing the direction of motion, depending on the orbital configuration (Rodriguez et al., 2022), so tidal material cannot be ruled out. Several absorbers around the galaxy groups we consider in this work can not be reasonably approximated by our disk/outflow/halo toy models. If a large proportion of these absorbers are due to tidal material, these would likely be found more frequently near to close galaxy pairs. However, we do not find any significant difference between the closest pair separations of groups with unexplained absorption, and those for which all absorbers can be fit by our models. Despite the lack of evidence for absorption due to interactions between galaxies across the sample as a whole, we can still look individually at these unexplained absorbers and consider whether tidally-stripped material can provide a reasonable explanation. The unexplained absorption in G-202 has a velocity offset close to our cut-off of 500 km s\({}^{-1}\), and is therefore likely associated with other galaxies (its high column density making a physical association with at least one galaxy likely, see e.g. T14 and Paper 1). Figure 6: Azimuthal angle against inclination for galaxy-absorber pairs in our sample. Absorbers identified as probing a possible outflow are shown in red, with disks in blue. We also shade the regions used to identify the ‘primary’ disk and outflow subsamples in MEGAFLOW in blue and red respectively (Zabl et al., 2019; Schroetter et al., 2019). Figure 7: Column density against total galaxy stellar mass found within 500 km s\({}^{-1}\)and 1 Mpc of H i absorbers. Faded points show our full sample of detected H i absorption, whilst bold points show absorbers within 500 km s\({}^{-1}\)of the galaxy groups considered in this work. The bold points are also coloured by the best-fitting model type, with bold grey points showing absorption that could not reasonably be fit by our models. Faded lines show the median for each model type in both measurements. However, B-22 does appear distorted, with a tail of extended emission visible in the direction of LOS-B. Tidal material is therefore a possible explanation for some of the absorption in this sightline, but this absorption is reasonably approximated by our toy models. The sub-DLA near G-558 could not be fit using our models, and likely has an undetected host galaxy much closer to the line-of-sight than the detected group of galaxies. Strong, metal-enriched absorption in G-399 lies at a velocity between the two large galaxies in this group. As discussed in Anshul et al. (2021), this contains a warm component as well as a cooler gas phase. 
This is consistent with intra-group material, lying within \(r_{vir}\) of the largest group galaxy, and could be tidal in origin. The galaxies A-36 and A-37 are close together (< 10 kpc, with A-40 a more distant companion), and appear distorted in the HST image, suggesting that they are interacting. The unexplained absorption components all lie blueward of all three galaxies, but have a much larger velocity offset than the velocity difference between the galaxies. This would suggest that this absorption does not identify material ejected by the interactions between these two galaxies. This absorption may be associated with the more massive, but distant, galaxies with a smaller velocity offset. Therefore, in the four galaxy groups with unexplained absorption, stripped material appears a reasonable hypothesis for two, only one of which contains a nearby pair of galaxies that is likely interacting. Stronger evidence for tidal material could come from absorption in the direction of visible stellar streams, or material that lies between the interacting galaxies in velocity and in projection on the sky. This is not the case for the groups in this study, so whilst tidally-stripped gas may be the most likely explanation for a small fraction of absorbers, we do not have good evidence of its detection in this work. ### Metal absorption Around the twelve isolated galaxies in Paper 2, only two of the H i absorbers also featured metal absorption around the same galaxy. However, many of the absorbers detected here in galaxy groups do feature metal lines, most commonly C iii and O vi, despite similar coverage and signal-to-noise ratio. (No other ions are detected more than three times near our group redshifts, so we primarily discuss C iii and O vi here.) This is consistent with our results from Paper 1, in which O vi detections only appear at impact parameters < 350 kpc from non-group galaxies1, but are often more extended in group environments (e.g. Johnson et al., 2015; Tchernyshyov et al., 2022). Footnote 1: Note that this uses the friends-of-friends algorithm to discriminate between group and non-group galaxies as described in Paper 1. This differs from the definitions of ‘isolated’ and ‘group’ galaxies used in Paper 2 and this work in that the linking lengths used in Paper 1 vary with redshift in order to correct for the changing detection limit with redshift, and exclude MUSE galaxies in order to achieve consistent detection limits across the field. The average linking lengths are comparable to the 500 km s\({}^{-1}\) and 500 kpc used here. Figure 9 shows the column densities of C iii and O vi associated with H i absorbers around our galaxy groups, coloured by the type of model that provides the best fit to that absorber. In both cases there is no significant difference in either the H i or metal column densities between the two models. This could suggest that, if our simplified models are indeed indicative of outflow cones and accreting disks around these galaxies, the accreting material does not have significantly different metal content than the outflowing material. However, it is possible that both the metal content and ionization state differ substantially between disks and outflows, such that the C iii and O vi column densities appear similar. The similar H i Doppler parameters of the disk and outflow absorbers suggest similar gas temperatures, but the metal ions likely trace different phases. 
We would therefore need a more detailed analysis of more metal species in order to confidently determine the metal content of these structures. With its lower ionization stage, it is unsurprising that C iii correlates more strongly with H i than O vi. We find with > 99% confidence (using a Spearman rank with non-detections included or excluded) that the column densities of H i and C iii are correlated, although we note that this correlation is not significant if absorbers that could not be fit by our toy models are excluded. There is not a significant correlation with O vi column density. The CGM/IGM is known to be multi-phase (e.g. Tripp et al., 2008; Werk et al., 2014; Concas et al., 2019; Anshul et al., 2021; Nateghi et al., 2021), and hence O vi likely probes a warmer phase of material than H i and C iii, with warm outflows and halos often proposed (e.g Werk et al., 2016; Oppenheimer et al., 2016; Ng et al., 2019). In isolated systems, accreting material would be expected to have a lower metallicity than that in outflows, due to enrichment of outflows through stellar evolution within the galaxy (e.g. Peroux and Howk, 2020). However, observations have often failed to detect a relationship between azimuthal angle and metallicity (e.g. Peroux et al., 2016; Kacprzak et al., 2019), possibly in part due to the mixing by azimuthal angle discussed in Paper 2 (and briefly in Section 5.2.3 of this work). Our sample from Paper 2 has too few metal detections to test this effect, but the numbers of O vi absorbers found along the major and minor axes using our full galaxy sample (Section 4.2 of Paper 1) do suggest higher metal content for minor-axis absorbers. It seems possible that, due to interactions between galaxies, metal enriched gas is more common around group galaxies, and therefore accreting material in group environments has metal content more similar to outflowing material than gas accreting onto isolated galaxies. However, other studies have found suggestions that low-metallicity gas can be masked by the presence of higher-metallicity gas in the line-of-sight (e.g. Pointon et al., 2019), which would allow our H i to be dominated by low-metallicity accretion despite strong metal detections with a different origin (e.g. warm halo, stripped material, intra-group gas). We also note several high-column-density, metal-enriched absorbers that are not identified as disks or outflows. However, one of these is the sub-DLA in LOS-C, which likely has an undetected host galaxy and may not be strongly affected by the larger-scale environment. Another is close to 500 km s\({}^{-1}\) offset from the galaxy group considered, and is therefore more likely to be associated with other galaxies at smaller velocity offsets. The third lies within the velocity ranged spanned by the nearby group, and could plausibly form part of an intra-group medium. ## 6 Summary & Conclusions In this study we use multiple lines-of-sight to constrain the origins of H i absorption seen around 9 pairs and groups of galaxies at z\(<\)1. By fitting simple disk, halo and outflow models around these galaxies we attempt to determine to what extent the absorption features seen in galaxy groups could be attributed to a superposition of the CGM around the individual galaxies, an extended intra-group medium, or material stripped from the CGM of the galaxies by interactions Figure 8: Details of the absorption and galaxy environment around the galaxy merger B-39 at z \(\sim\) 0.45. 
The layout is identical to that shown in Figure 3, but no model is shown as the position angle and inclination of these galaxies could not be measured. The galaxy kinematics are measured from the [O iii] emission line seen in the MUSE data. Tick marks (1) and (2) show the Lyman n6 transition from z\(\approx\)0.88; whilst (3)-(6) show Ly\(\beta\) from z\(\approx\)0.72. Figure 9: Column densities of H i, C iii and O vi absorption found within 500 \(\,\mathrm{km\,s^{-1}}\)of our group galaxies. Only metals within 30 \(\,\mathrm{km\,s^{-1}}\)of detected H i are included. Upper limits where metals are not detected are shown by downward-pointing triangles. Points are coloured by best-fitting model type (disk, outflow or none). The median column densities of absorbers with detected metals are shown along each axis. within the group. We model the absorption around nine galaxy groups in the Q0107 field, probed by three lines-of-sight, and find that: 1. Our disk and outflow models reproduce a slightly larger fraction of the identified H i absorbers where multiple galaxies are present (\(\approx\)75%) than around the isolated galaxies we consider in Paper 2 (\(\approx\)60%). This supports a model for which a superposition of absorption from the group galaxies results in the observed spectra. Sample variance and a larger parameter space (due to multiple nearby galaxies) appear to be plausible contributors to this higher success rate. 2. Our 'group' sample includes systems with larger separations and lower masses than many previous works considering absorption around group and isolated galaxies. The larger number of galaxies also leads to a smaller impact parameter between absorption and the closest galaxy. Both effects may also contribute to our high success rate and resulting preference towards the superposition model. 3. The best-fitting parameters for our model outflows are generally similar to those seen for isolated galaxies in our previous work (Paper 2, submitted), including several that appear to extend beyond the virial radius. However, a larger number of putative outflows have smaller opening angles; these galaxies preferentially have lower masses and redshifts than those with apparent wider outflows. This means our overall sample has parameters more similar to Schroetter et al. (2019), who use similar models for Mg ii absorption. 4. Our model disks have smaller rotation velocities than those around isolated galaxies (120 to 270 \(\,\mathrm{km\,s^{-1}}\), \(\approx 1.5\sigma\)), despite similar impact parameters. This may be due to interactions between infalling material and the extended group halo, but could also be due to the galaxy/sightline configurations sampled in this field. In order to constrain the rotation velocities, we have also had to assume an infall velocity in some cases, which may be distorting the results. 5. We do not detect any difference in the column densities of absorbers identified as likely disks and outflows, and their Doppler widths do not differ significantly. This matches our results from isolated galaxies in Paper 2. There does not appear to be a difference in the number or total stellar mass of galaxies in groups hosting these possible disks and outflows, although high-column-density absorbers do tend to be seen in denser galaxy environments (both in galaxy number and mass). 6. 
Absorption that cannot be fit by our models does not occur more frequently in groups with a close pair of galaxies, and therefore does not provide evidence of material stripped by galaxy interactions. Tidally-stripped material is a plausible explanation for two of the four groups hosting these unexplained absorbers, but is not supported by additional evidence such as the absorption matching the velocity offsets between galaxy pairs or lying in the directions of possible tidal stellar features seen in the HST images. 7. There is no significant difference between the C iii or O vi contents of absorbers identified as disks and outflows. Whilst outflowing material is generally expected to be warmer and more metal-enriched, material accreting onto a group galaxy may have been recently stripped or ejected from another nearby galaxy, and thus be more enriched than accretion from the IGM onto an isolated galaxy. We do not find a significant correlation between H i and O vi column densities, which fits expectations of these ions tracing different gas phases. This paper, alongside our previous works using the galaxy and QSO data covering the Q0107 field, demonstrates some advantages of using multiple lines-of-sight probing the gas around galaxies. In addition to being a highly-efficient way to build large samples of galaxy-absorber pairs, which we utilized to detect both a bimodality in the azimuthal angle of these pairs and a preference for aligned angular momentum between gas and galaxies in Paper 1, these sightlines also provide some constraints on the possible CGM structures seen in absorption, and therefore the origins of absorbing material. Through kinematic and column density considerations, we are able to rule out simple disk and outflow models for some absorption seen in the three sightlines, and tentatively identify some absorption as likely originating from disk or outflow structures. However, the large number of model parameters available when considering possible models in galaxy groups makes this identification yet more uncertain. Searching for significant correlations between model parameters and galaxy/group properties will require a larger sample of such observations, probing a wider space of sightline configurations and group properties. Constraining more complex models encompassing a wider range of physical processes (entrainment of gas, changing ionization states, etc.) and therefore with a larger range of free parameters, will require a larger number of sightlines for each galaxy or group. Whilst relatively few configurations of sightlines can be used to probe the CGM/IGM in this way using current instruments, and such sensitive Ly\(\alpha\) measurements at low redshift are only possible using the HST/COS instrument, future instruments will enable these techniques to be used across a much larger number of systems, especially at higher redshifts. CGM tomography at higher redshifts is included in the science cases for several instruments on the ELT (e.g. Maiolino et al., 2013; Evans et al., 2015; Marconi et al., 2021), which will be able to use much fainter objects as background sources at these higher redshifts, and BlueMUSE (Richard, 2019) will provide efficient observations of galaxies surrounding such sightlines whilst extending Ly\(\alpha\) coverage to lower redshifts than most other ground-based observations. 
Our work studying the Q0107 system therefore serves as a useful test case for techniques that can be applied to much larger samples (especially at higher redshifts) in the near future. ## Acknowledgements Firstly, we thank the anonymous referee for their insightful comments that have improved the quality of this paper. This work is based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. We also make use of observations collected at the European Southern Observatory under ESO programmes 086.A-0970, 087.A-0857 and 094.A-0131; at the W.M. Keck Observatory under programme A290D; and at the Gemini Observatory under programme GS-2008B-Q-50. We thank Matteo Fossati for providing the MARZ templates used for estimating redshifts and their uncertainties. We also thank Jill Bechtold for leading the effort to obtain the Keck/DEIMOS data. We thank the contributors to SCIPY2, MATPLOTLIB3, ASTROPY4, and the PYTHON programming language, the free and open-source community and the NASA Astrophysics Data system5 for software and services. Footnote 5: [https://ui.adsabs.harvard.edu/](https://ui.adsabs.harvard.edu/) Footnote 6: [http://www.dirac.ac.uk/](http://www.dirac.ac.uk/) Footnote 7: [http://archive.stsci.edu/](http://archive.stsci.edu/) Footnote 8: [https://koa.ipac.caltech.edu/cgi-bin/KOA/mph-KOAlogin](https://koa.ipac.caltech.edu/cgi-bin/KOA/mph-KOAlogin) This work also made use of the DiRAC system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility9. Footnote 9: [http://archive.eso.org/eso/eso_archive_main.html](http://archive.eso.org/eso/eso_archive_main.html) AB acknowledges the support of a UK Science and Technology Facilities Council (STFC) PhD studentship through grant ST/S505365/1. SLM acknowledges the support of STFC grant ST/T000244/1. SC gratefully acknowledges support from Swiss National Science Foundation grants PP00P2_163824 and PP00P2_190092, and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement No 864361. MF also acknowledges funding from the ERC under the Horizon 2020 programme (grant agreement No 757535). This work has been supported by Fondazione Cariplo, grant No 2018-2329. ## Data Availability The raw data from the Hubble Space Telescope may be accessed from the MAST archive10, and that from the WM Keck Observatory from the Keck Observatory Archive11. That from the European Southern Observatory may be accessed from the ESO Archive12, and that from the Gemini Observatory may be accessed from the Gemini Science Archive13. The relevant program IDs are given in Section 2 of this paper. The derived data generated in this research will be shared on reasonable request to the corresponding author. Footnote 10: [http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/gsa/](http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/gsa/) Footnote 10: [http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/gsa/](http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/gsa/)
2304.04834
Raman-probing the local ultrastrong coupling of vibrational plasmon-polaritons on metallic gratings
Strong coupling of molecular vibrations with light creates polariton states, enabling control over many optical and chemical properties. However, the near-field signatures of strong coupling are difficult to map as most cavities are closed systems. Surface-enhanced Raman microscopy of open metallic gratings under vibrational strong coupling enables the observation of spatial polariton localization in the grating near-field, without the need for scanning probe microscopies. The lower polariton is localized at the grating slots, displays a strongly asymmetric lineshape, and gives greater plasmon-vibration coupling strength than measured in the far-field. Within these slots, the local field strength pushes the system into the ultrastrong coupling regime. Models of strong coupling which explicitly include the spatial distribution of emitters can account for these effects. Such gratings form a new system for exploring the rich physics of polaritons and the interplay between their near- and far-field properties through polariton-enhanced Raman scattering (PERS).
Rakesh Arul, Kishan Menghrajani, Marie S. Rider, Rohit Chikkaraddy, William L. Barnes, Jeremy J. Baumberg
2023-04-10T19:43:09Z
http://arxiv.org/abs/2304.04834v1
# Raman-probing the local ultrastrong coupling of vibrational plasmon-polaritons on metallic gratings ###### Abstract Strong coupling of molecular vibrations with light creates polariton states, enabling control over many optical and chemical properties. However, the near-field signatures of strong coupling are difficult to map as most cavities are closed systems. Surface-enhanced Raman microscopy of open metallic gratings under vibrational strong coupling enables the observation of spatial polariton localization in the grating near-field, without the need for scanning probe microscopies. The lower polariton is localized at the grating slots, displays a strongly asymmetric lineshape, and gives greater plasmon-vibration coupling strength than measured in the far-field. Within these slots, the local field strength pushes the system into the ultrastrong coupling regime. Models of strong coupling which explicitly include the spatial distribution of emitters can account for these effects. Such gratings form a new system for exploring the rich physics of polaritons and the interplay between their near- and far-field properties through polariton-enhanced Raman scattering (PERS). polariton, plasmonic grating, vibrational strong coupling, Raman spectroscopy ## 1 Introduction The interaction of light with ensembles of resonant two-level systems within a cavity of sufficient finesse can reach the strong coupling regime[1]. This manifests as a Rabi splitting of the original resonance into upper and lower polariton modes, besides uncoupled 'dark' states. Under strong coupling, many properties of the coupled emitter-photon system change, including chemical reactivities[2] and single photon nonlinearities[3]. Strong coupling has previously been achieved within single antenna-single emitter structures[4, 5, 6], high-finesse optical[3, 7] and microwave resonators[8], and more recently in the mid-infrared (IR) regime with vibrational strong coupling enabled by plasmonic metasurfaces[9, 10, 11, 12]. The IR regime holds several advantages for strong coupling including large dipole oscillator strengths, easier cavity fabrication, but especially for direct access to molecular vibrations relevant to material properties. Typically, the energy and momentum of polaritons is probed through far-field scattering and emission spectroscopies. However, the requirement for high-finesse cavities implies that little light escapes outside the cavity resonances, limiting optical interrogation of near-field variations in polariton modes, which is expected to show rich physics. Probing the near-field of systems under strong coupling, which previously required challenging scanning near-field optical microscopies[13], constitutes an open problem in understanding light-matter interactions on sub-wavelength scales[14]. Surface plasmon polaritons (SPPs) at a metal-dielectric interface lie outside the light-line and cannot be excited by incident far-field radiation due to the momentum-mismatch. By structuring a metallic surface with a periodic grating, the optical wavevector acquires multiples of \(k_{g}=2\pi m_{m}/\Lambda\), for grating period \(\Lambda\) and refractive index \(n_{m}\) of the medium around the grating. At a metal-dielectric interface SPPs can thus be multiply scattered and form Bloch waves. Far-field radiation may then couple to the plasmonic grating-modes, and in the near-field can couple to molecular transitions (\(\omega_{0}\)) such as vibrational IR absorptions. 
When an IR-active vibration is simultaneously Raman-active, polariton modes are also expected to appear in the Raman spectra in the ultrastrong coupling regime [15, 16, 17] (Fig. 1c). Despite such predictions, dark modes dominate the Raman scattering, preventing conclusive demonstrations of vibro-polaritonic Raman in microcavities [18, 19, 20] or open metasurfaces [12, 21], although phonon-polaritons can be seen in bulk crystal Restrahlen bands [22, 23]. Here, we show that surface-enhanced Raman (SERS) microscopy of micron-scale open gratings experiencing infrared vibrational strong coupling, which we term polariton-enhanced Raman spectroscopy (PERS), enables us to detect the localization and energy of polaritons in the near-field of the grating. This also eliminates the need for perturbing tip-based scanning probe microscopies. PERS spectra of gratings under vibrational strong coupling are measured using a confocal Raman microscope (Fig. 1a) with a tightly focused (100x objective, 0.9 NA) 532 nm excitation laser (see SI Methods). The IR transmission spectra of the gratings display a clear anti-crossing of the polariton modes at normal incidence, when the \(\Lambda=4.7\)\(\upmu\)m grating mode strongly couples with the C=O stretch of PMMA (\(\omega_{0}=1732\) cm\({}^{-1}\)) at normal incidence (Fig. 1b). This matches a rigorous coupled wave analysis (RCWA) model (Fig. 1b). Fitting the Hopfield model yields a Rabi splitting \(\Omega_{\text{Rabi}}\)= 95 cm\({}^{-1}\), which exceeds the linewidths of the original plasmonic grating mode (45 cm\({}^{-1}\)) and absorption at \(\omega_{0}\) (SI Section S2). Figure 1: **Strong coupling in open gratings**. (**a**) Raman microscopy (pump 532 nm) of grating (period \(\Lambda\)=4.7 \(\upmu\)m, slot width 1 \(\upmu\)m, height \(h\)=0.1 \(\upmu\)m, with 1 \(\upmu\)m thick PMMA layer). (**b**) Grating IR reflectance spectra in experiment (expt, left) and theory (RCWA, right). (**c**) Raman scattering from ground state \(|g\rangle\) to excited state \(|e\rangle\) or PERS to lower \(|LP\rangle\) and upper \(|UP\rangle\) polariton states. (**d**) PERS spectra of slot and mesa in grating compared to flat regions. Left: Optical image of grating. Middle: Background-corrected Raman spectra. Right: Raman spectra of slot and mesa after subtracting flat spectrum. Open gratings allow Raman spectral mapping as a function of lateral position and height. This reveals three distinct Raman signatures: from the flat (reference) region, from the middle of the grating mesa, and from the grating slot (Fig. 1d). The flat region shows expected PMMA peaks, which are enhanced on the gold-coated mesa due to increased reflectivity from the gold. In the slots however, there is a clear extra contribution in the 1000-1600 cm-1 region. Removing the background contribution reveals the spectral signature of this extra contribution (Fig. 1d right, brown), which is attributed to the lower polariton \(|LP\rangle\) state (see Fig. 1c). The extracted \(|LP\rangle\) spectrum is localized within the slot and displays an asymmetric shape peaked at \(\sim\)1600 cm-1 (\(\omega_{LP}\)). The possibility of molecular damage, as observed elsewhere[18], is discounted through intensity-dependent measurements that reveal different peaks associated with molecular damage while the original PMMA peaks remain unchanged (Sl Section S4). No signal from the upper polariton mode \(|UP\rangle\) is observed however. 
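The anti-crossing and Hopfield fit described above can be illustrated with a minimal two-coupled-oscillator model; in the sketch below the grating-mode dispersion is a hypothetical quadratic in \(k_{y}\), and the coupling is set so that the zero-detuning splitting matches \(\Omega_{\text{Rabi}}=2g\approx\) 95 cm\({}^{-1}\).

```python
import numpy as np

def polariton_branches(E_cav, E_vib, g):
    """Upper/lower polariton energies and the vibrational (Hopfield) fraction
    of the lower branch, from the 2x2 coupled-oscillator Hamiltonian."""
    mean = 0.5 * (E_cav + E_vib)
    delta = 0.5 * (E_cav - E_vib)            # detuning
    root = np.sqrt(g**2 + delta**2)
    E_up, E_lp = mean + root, mean - root
    frac_vib_lp = 0.5 * (1.0 + delta / root)  # |b_LP|^2, molecular fraction of LP
    return E_up, E_lp, frac_vib_lp

omega_0 = 1732.0                     # C=O stretch of PMMA [cm^-1]
g = 47.5                             # coupling strength so that 2g ~ 95 cm^-1
k_y = np.linspace(-0.3, 0.3, 7)      # hypothetical in-plane momentum [um^-1]
E_cav = omega_0 + 800.0 * k_y**2     # hypothetical grating-mode dispersion [cm^-1]

E_up, E_lp, frac = polariton_branches(E_cav, omega_0, g)
print("Rabi splitting at zero detuning: %.1f cm^-1" % (E_up[3] - E_lp[3]))
for ky, up, lp, f in zip(k_y, E_up, E_lp, frac):
    print(f"k_y={ky:+.2f}: UP={up:7.1f}  LP={lp:7.1f}  vib fraction(LP)={f:.2f}")
```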
A broad electronic Raman scattering (ERS) or photoluminescence background is also present in the slot PERS spectra (Sl Fig. S3c) due to excitation of edge plasmon modes. ### Spatial Raman mapping of LP The observed \(|LP\rangle\) signature is enhanced and localized in the slot region of the \(\Lambda\) = 4.7 \(\upmu\)m grating, while the PMMA molecular band \(\omega_{0}\) is only weakly enhanced (Fig. 2b,c,d, and Sl Section S3). The \(|LP\rangle\) signature is seen to be localized within the slots when focussing \(z\)=1 \(\upmu\)m above the grating surface and resolved to be at the slot edges when focussing exactly on the surface (\(z\)=0 \(\upmu\)m). This is more clearly visible in the laterally-averaged plots (Fig. 2e,g). The extracted energy separation between the peak of the \(|LP\rangle\) and \(\omega_{0}\) at each location in the slots reveals an average near-field coupling strength \(g\) of \(174\pm 30\) cm-1, much larger than measured in the far field through IR transmission (47.5 cm-1). To further confirm this assignment of \(|LP\rangle\) polariton-enhanced Raman scattering (PERS), the dependence on grating mode detuning from \(\omega_{0}\) is examined (Fig. 3). Changing the periodicity of the grating from 4 to 14 \(\upmu\)m shifts this detuning (SI Section S5), as the position of the anti-crossing moves to higher wavevector, away from normal incidence (SI Fig. S10). This manifests as a reduced spectral weight and asymmetry in the \(|LP\rangle\) PERS spectrum for \(\Lambda\) = 6 \(\upmu\)m (Fig. 3a). For \(\Lambda\) = 13.8 \(\upmu\)m, it is the second-order grating mode which now couples to \(\omega_{0}\), however no clear PERS signal is observed, perhaps due to the greater sensitivity of the higher-order mode to fabrication imperfections. Only for anti-crossings at near normal-incidence is the strong \(|LP\rangle\) PERS seen. Averaging the spectral area of the \(|LP\rangle\) mode while scanning across the grating (Fig. 3b) confirms that it is always localized at the slots. The dependence of the intensity of \(|LP\rangle\) PERS signals can be attributed to the different local field enhancements of the Raman excitation laser within the slots depending on the grating period (SI Section S6). Figure 2: **Lower polariton enhanced Raman and localization in grating slots.****(a)** Raman maps acquired at two different heights above the grating surface. **(b)** Raman map at PMMA surface (\(z\)=1 \(\upmu\)m) showing integrated peak area of lower polariton (\(A_{LP}\)). **(c,d)** Raman maps at grating surface (\(z\)=0 \(\upmu\)m) for **(c)** LP (\(A_{LP}\)) and **(d)** PMMA (\(A_{PMMA}\)) modes. White scale bars are 2\(\upmu\)m. **(e-h)** Laterally averaged \(A_{LP}\) and \(\omega_{LP}\)-\(\omega_{0}\) across the grating for **(e,f)**\(z\)=1 \(\upmu\)m and **(g,h)**\(z\)=0 \(\upmu\)m. ### Understanding localized polariton Raman scattering To explain the localization of the \(\left|{LP}\right\rangle\) signal within the slots, the \(\left|{LP}\right\rangle\) asymmetric PERS spectrum, the lack of an upper polariton signal, and the larger near-field Rabi splitting, we perform near-field electromagnetic simulations and develop a simplified theory for Raman scattering of polaritons within gratings. The \(\left|{LP}\right\rangle\) PERS intensity depends on the localization of the infrared polariton (\(\sim\)1681cm-1) and the visible Raman pump (532 nm). 
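The map quantities shown in Figure 2 can be extracted with a simple subtraction-and-integration step, sketched below (Python; the spectra, line shapes, and integration window are hypothetical stand-ins for the measured flat, mesa, and slot spectra).

```python
import numpy as np

def lp_area(wavenumber, spectrum, reference, window=(1450.0, 1680.0)):
    """A_LP: integrate the (spectrum - flat reference) residual over an LP window."""
    residual = spectrum - reference
    sel = (wavenumber >= window[0]) & (wavenumber <= window[1])
    step = wavenumber[1] - wavenumber[0]
    return residual[sel].sum() * step

# Hypothetical spectra on a common wavenumber axis [cm^-1].
wn = np.linspace(1000.0, 1900.0, 901)
flat = np.exp(-0.5 * ((wn - 1732.0) / 10.0) ** 2)                 # C=O band only
slot = flat + 0.6 * np.exp(-0.5 * ((wn - 1600.0) / 40.0) ** 2)    # + LP feature

print(f"A_LP(slot) = {lp_area(wn, slot, flat):.1f}")   # the flat region gives ~0
```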
Finite-difference time-domain (FDTD) simulations show the \(\left|{LP}\right\rangle\) optical near-field (\(E_{x}\)) is indeed tightly localized at the slot edges on the grating surface at z=0 \(\upmu\)m (Fig. 4a). The 532 nm laser field is uniformly enhanced on the facet and at the slot edges (Sl Fig. S14c). Hence, their combined impact is to out-couple the \(\left|{LP}\right\rangle\) PERS signals more effectively from the slot edges. Figure 3: **Detuning of grating modes**. (**a**) PERS spectra of lower polariton mode for different grating periods (\(\Lambda\)). (**b**) Laterally-averaged lower polariton mode (integrated area) _vs y_-position, which is localised at the grating slots (vertical bar is 100 cts-cm-1mW-1s-1). Horizontal gray bars indicate slot positions. The asymmetry of the \(|LP\rangle\) PERS signal is likely due to angle averaging inherent in confocal Raman microscopy with a high numerical-aperture objective to get spatial profiles. The visible-frequency Raman dipoles sample the \(|LP\rangle\) across a range of momentum \(k_{y}\) in the infrared (Fig. 1b), which broadens the range of frequencies measured. The fraction of molecular oscillator in each hybrid polariton state changes with momentum \(k_{y}\), and is quantified by the Hopfield coefficient \(b_{LP,UP}\) as a function of angle for the \(|LP\rangle\) and \(|UP\rangle\) (Fig. 4b). The molecular fraction contributes to PERS, so using density-of-states estimates[18], we weight each simulated reflectance spectrum (SI Section S3) with its molecular fraction and evaluate the sum up to the numerical aperture of our objective lens. The resulting spectrum is asymmetric (Fig. 4c) with a significant low-frequency tail as observed in experiments (Fig. 1d). We note this approximation only for the \(|LP\rangle\) PERS signal. Figure 4: **Polariton-enhanced Raman scattering in gratings.** (a) Field distribution (\(E_{x}/E_{0}\)) for the lower polariton under near-normal incidence (\(\theta\)=0.1\({}^{\circ}\)) transverse-magnetic (TM) excitation perpendicular to the grating grooves (dashed). (b) Coupled oscillator fit to RCWA simulations of grating scattering for upper and lower polariton modes vs momentum (\(k_{y}\)). Colours show vibrational Hopfield coefficient fraction (SI Section S2). (c) Molecular density-of-states (corrected and angle-averaged) showing asymmetric broadening of LP peak (grey bar indicating extinction of 0.1). (d) Polariton states showing scattering from ground state to bright (strongly coupled) and dark states, labelling Raman scattering (\(\omega\)) and infrared absorption (\(\nu\)). (e) Plasmon-vibration coupling strength \(g\) vs normalised position \(y/\Lambda\), with red points from Fig. 2. (f) Map of spatial plasmon-vibration coupling strength \(g\) along the \(\Lambda\)=4.7 \(\upmu\)m grating (modelled surface profile in white). considers the \(k_{y}\) momentum component and not the full parabolic dispersion including \(k_{x}\). By contrast, the \(|UP\rangle\) PERS signal is suppressed due to fast non-radiative decay of the upper polaritons into the reservoir of dark vibrational states present in the system, and the lower molecular fraction of \(|UP\rangle\) states at small \(k_{y}\) (Fig. 4b). This is analogous to electronic strong coupling, where only emission from the lower polariton branch is detected at room temperature [24]. 
Finally, the experimentally observed enhanced near-field Rabi splitting (distinct from far-field) is recovered in our theoretical model showing spatial variation in molecule-plasmon coupling constant \(g\) due to spatial variation in the optical field strength. The dominant contribution to the lower and upper polariton optical fields comes from the coupled first-order (\(\pm 2\pi/\Lambda\)) grating-scattered branches of the surface plasmon [25]. Using the Chandezon method to calculate the contribution to the optical field from the lower grating-scattered branch at \(k_{y}=0\) from a rectangular grating (SI Section S7), we find the spatially dependent light-molecule coupling from, \(g_{i}(y,z)\approx-\mu_{i}\cdot\mathbf{E}(y,z)\), where \(g_{i}\) is the plasmon-vibration coupling strength for molecule \(i\) in position \(y\) on the grating at height \(z\) from the gold surface, \(\mathbf{E}\) is the optical field associated with the lower grating-scattered branch at \(k_{y}=0\), and \(\mu_{i}\) is the vibration IR transition dipole moment. While the effective coupling strength [26]\(\langle g\rangle=\sqrt{\sum_{i}g_{i}^{2}}\) is measured in far-field extinction spectra, in the near-field molecules experience a spatially-varying \(g_{\text{NF}}\) which is maximum near slot edges and greatly reduced on the mesa (Fig. 4e-f), directly seen in the PERS spectra of the \(|LP\rangle\) state, and can greatly exceed the far-field mean \(\langle g\rangle\). The value of \(g_{\text{NR}}\) has an upper bound given by the bulk optical parameters of PMMA [27]. We find that this maximum value \(\sim\) 140 cm-1 (SI Section S8) is consistent with the maximum near-field value we find here of 174 \(\pm\) 30 cm-1. Thus, any near-field enhancement of the strong coupling effect can at most place the LP at \(\sim\)1590 cm-4, broadly consistent with our observations. We note that this bulk limit may help resolve the thus-far puzzling observations of Shalabney _et al._[20] and Menghrajani _et al._[9], both of whom found Raman features at \(\sim\)1590 cm-1; both might be due to Raman signals associated with localised modes. This highlights the role of (i) the visible wavelength SERS effect in amplifying signals from a small number of molecules at edges, and (ii) localized hot-spots in the infrared that couple to the dispersive grating mode giving rise to the large coupling strength \(g\) measured in PERS. The large near-field \(g_{\text{NF}}\) = 174 \(\pm\) 30 cm-1 pushes the system into the ultrastrong coupling regime (\(\omega_{\text{Rabi}}\)/ \(\omega_{0}\)=0.2\(>\)0.1), with clear effects on Raman spectra as predicted [16, 17]. Open grating systems thus allow the direct observation of Raman signatures from lower polaritons of systems under vibrational strong coupling. Grating systems uniquely allow a visible laser probe to map out the local density-of-polaritonic-states via Raman scattering. These signatures are not observed for systems out of strong coupling, nor for different molecular vibrations which are uncoupled to the grating cavity modes. As the collective Rabi splitting (\(\Omega\)) depends on the number of molecules (\(N\)) as \(\Omega\propto N^{1/2}\), the individual splitting experienced by a molecule depends on \(N^{-1/2}\). Hence, a small mode volume in the IR (encompassing small \(N\)) is critical to observe splitting effects in polariton-enhanced Raman scattering. 
For previous measurements in microcavities and plasmonic surface lattice resonators [9, 10, 11], \(N\) was much larger since mode volumes were \(\sim\)\(\lambda^{3}\). Within gratings, the number of molecules involved is much reduced due to the tight localization of the field at the grating slot edges, whose signals are effectively enhanced by SERS. The degree of local field confinement is such that the molecules at the edges are in the ultrastrong coupling regime (\(\omega_{\mathrm{Rabi}}\)/ \(\omega_{0}\)=0.2). ## Associated Content **Supporting Information**. ## Author Information **Corresponding Author** * Prof Jeremy J Baumberg, [email protected]_ * Prof William L Barnes, [email protected]_ ## Acknowledgment We acknowledge support from European Research Council (ERC) under Horizon 2020 research and innovation programme THOR (Grant Agreement No. 829067) and PICFORCE (Grant Agreement No. 883703). R.A. acknowledges support from the Rutherford Foundation of the Royal Society Te Aparangi of New Zealand, and the Winton Programme for the Physics of Sustainability. R.C. and R.A. acknowledge support from Trinity College, University of Cambridge. K.S.M. acknowledges financial support from the Leverhulme Trust research grant 'Synthetic biological control of quantum optics'.
2306.04095
PANE-GNN: Unifying Positive and Negative Edges in Graph Neural Networks for Recommendation
Recommender systems play a crucial role in addressing the issue of information overload by delivering personalized recommendations to users. In recent years, there has been a growing interest in leveraging graph neural networks (GNNs) for recommender systems, capitalizing on advancements in graph representation learning. These GNN-based models primarily focus on analyzing users' positive feedback while overlooking the valuable insights provided by their negative feedback. In this paper, we propose PANE-GNN, an innovative recommendation model that unifies Positive And Negative Edges in Graph Neural Networks for recommendation. By incorporating user preferences and dispreferences, our approach enhances the capability of recommender systems to offer personalized suggestions. PANE-GNN first partitions the raw rating graph into two distinct bipartite graphs based on positive and negative feedback. Subsequently, we employ two separate embeddings, the interest embedding and the disinterest embedding, to capture users' likes and dislikes, respectively. To facilitate effective information propagation, we design distinct message-passing mechanisms for positive and negative feedback. Furthermore, we introduce a distortion to the negative graph, which exclusively consists of negative feedback edges, for contrastive training. This distortion plays a crucial role in effectively denoising the negative feedback. The experimental results provide compelling evidence that PANE-GNN surpasses the existing state-of-the-art benchmark methods across four real-world datasets. These datasets include three commonly used recommender system datasets and one open-source short video recommendation dataset.
Ziyang Liu, Chaokun Wang, Jingcao Xu, Cheng Wu, Kai Zheng, Yang Song, Na Mou, Kun Gai
2023-06-07T01:31:12Z
http://arxiv.org/abs/2306.04095v2
# (Technical Report) ###### Abstract. Recommender systems play a crucial role in addressing the issue of information overload by delivering personalized recommendations to users. In recent years, there has been a growing interest in leveraging graph neural networks (GNNs) for recommender systems, capitalizing on advancements in graph representation learning. These GNN-based models primarily focus on analyzing users' positive feedback while overlooking the valuable insights provided by their negative feedback. In this paper, we propose PANE-GNN, an innovative recommendation model that unifies **P**ositive **A**nd **N**egative **E**dges in **G**raph **N**eural **N**etworks for recommendation. By incorporating user preferences and dispreferences, our approach enhances the capability of recommender systems to offer personalized suggestions. PANE-GNN first partitions the raw rating graph into two distinct bipartite graphs based on positive and negative feedback. Subsequently, we employ two separate embeddings, the interest embedding and the disinterest embedding, to capture users' likes and dislikes, respectively. To facilitate effective information propagation, we design distinct message-passing mechanisms for positive and negative feedback. Furthermore, we introduce a distortion to the negative graph, which exclusively consists of negative feedback edges, for contrastive training. This distortion plays a crucial role in effectively denoising the negative feedback. The experimental results provide compelling evidence that PANE-GNN surpasses the existing state-of-the-art benchmark methods across four real-world datasets. These datasets include three commonly used recommender system datasets and one open-source short video recommendation dataset. Recommender system; Negative feedback; Graph neural networks + Footnote †: [leftmargin=*] organization in Figure 2) reveal a decrease in performance compared to NGCF that does not utilize negative feedback. It suggests that directly incorporating negative feedback may not always yield benefits. **Challenges**. The aforementioned observations underscore the challenge of developing effective algorithms that can effectively incorporate negative feedback into recommender systems. The under-utilization of negative feedback in current approaches motivates us to explore the usage of negative feedback through GNNs in order to enhance the quality of recommendations. However, learning high-order structural information from a signed bipartite graph faces difficulties due to the limitations of the _network homophily assumption_ and the _balance theory assumption_. The network homophily assumption posits that similar nodes are more likely to connect to each other than dissimilar nodes. Many GNN models (Beng et al., 2015; Wang et al., 2016; Wang et al., 2017) adopt a message-passing mechanism that aggregates information from local neighbors to update the embedding of the anchor node based on this assumption. However, homophily is not applicable in signed graphs where dissimilar nodes are connected by negative edges. The balance theory assumption implies that "the friend of my friend is my friend", "the enemy of my friend is my enemy", and "the enemy of my enemy is my friend". Existing methods for signed unipartite graphs (Beng et al., 2015; Wang et al., 2016; Wang et al., 2017) leverage this assumption to aggregate and propagate information across layers. 
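As a toy illustration of the balance-theory rule these methods build on (not code from any of the cited models), the sign attributed to a multi-hop neighbour is simply the product of the edge signs along the path:

```python
import numpy as np

def balanced_sign(path_signs):
    """Balance theory: a neighbour reached through edges with the given signs
    is treated as a 'friend' (+1) if the product of the signs is positive."""
    return int(np.prod(path_signs))

print(balanced_sign([+1, +1]))  # friend of a friend  -> +1
print(balanced_sign([-1, +1]))  # enemy of a friend   -> -1
print(balanced_sign([-1, -1]))  # enemy of an enemy   -> +1
```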
However, the balance theory assumption does not match with the signed bipartite graph in recommender systems (Han et al., 2016; Wang et al., 2017; Wang et al., 2017). In real-world recommendation scenarios, users typically possess diverse interests rather than unique interests. Consequently, the fundamental idea of "the enemy of my enemy is my friend" (i.e., "two items disliked by the same user are similar") in the balance theory assumption does not accurately capture the complexity of real-world situations. These limitations necessitate the development of novel approaches to effectively leverage negative feedback in recommender systems, accounting for the unique characteristics of signed bipartite graphs and the diverse interests of users in real-world settings. **Our idea**. The key idea revolves around utilizing high-order structural information from both the positive graph (i.e., user-item interaction graph containing only positive feedback edges) and the negative graph (i.e., user-item interaction graph containing only negative feedback edges) simultaneously. To enhance recommendations by incorporating negative feedback, this paper presents a novel recommendation model called PANE-GNN (unifying Positive And Negative Edges in Graph Neural Networks for recommendation). In this model, each user or item is assigned two embeddings, i.e., interest embedding and disinterest embedding, to capture the user's interests and disinterests, respectively. Taking into account the network homophily assumption, we devise two message-passing mechanisms for the positive graph and the negative graph. On the positive graph, interest embeddings are propagated and updated, capturing the user's interests. On the other hand, on the negative graph, disinterest embeddings are propagated and updated, capturing the user's disinterests or items they explicitly dislike. Furthermore, to generate robust embeddings that remain invariant to graph perturbations, we utilize graph contrastive learning on the negative graph and its perturbed version. This approach enhances the model's ability to capture relevant patterns in the presence of graph noise. The main three contributions of this work are as follows: * We propose a novel GNN-based recommendation model called PANE-GNN. The model performs message passing on both the positive graph and the negative graph to effectively incorporate positive and negative feedback (Section 3.2.1). * We design contrastive learning on the negative graph (Section 3.2.2), a new ranking method with a disinterest-score filter (Section 3.2.3), and a dual feedback-aware Bayesian personalized ranking loss (Section 3.3), all of which improve recommendation accuracy through the integration of positive and negative feedback signals. * The proposed PANE-GNN is extensively evaluated on four real-world datasets (Section 4). The experimental results demonstrate that PANE-GNN outperforms state-of-the-art GNN-based recommendation methods. ## 2. Related Work We provide a review of existing work about 1) recommender systems based on GNNs, and 2) graph neural networks on signed graphs. ### Recommender Systems based on GNNs Recently, GNNs have become the new state-of-the-art approach in many recommendation problems (Han et al., 2016; Wang et al., 2017). The main advantage of using GNNs for recommender systems is that it can capture higher-order structural information in the observed data. Based on the message-passing architecture of GNNs, NGCF (Wang et al., 2017) adopts the Figure 1. 
An example of video recommendation from YouTube. The integration of positive and negative feedback plays a pivotal role in achieving accurate recommendation outcomes. In this example, the user prefers team sports while showing no interest in single-player sports. Figure 2. Comparison of single-relational (NGCF) and multi-relational (GHCF) recommendation models on the ML-1M dataset. Hadamard product between user embedding and item embedding to promote passing more messages from similar items to users. Considering that nonlinear activation contributes little to the recommendation performance, LR-GCCF (Beng et al., 2015) removes non-linearities from the original graph convolutional network (GCN) model (Zhou et al., 2017) and adds a residual network structure on it to alleviate the over-smoothing problem in the graph convolution aggregation. Likewise, LightGCN (Liu et al., 2017) removes both feature transformation and nonlinear activation and only retains neighborhood aggregation for collaborative filtering. The simplified model has higher computational efficiency and is much easier to implement and train. Our proposed method differs from the above methods in that we consider the negative feedback information in the observed data and devise a novel message-passing process that takes into account both positive and negative feedback. ### Graph Neural Networks on Signed Graphs Most of the previous work focus on building GNNs for unsigned graphs where there are only positive edges. Currently, signed graphs where each edge has a positive or negative sign, have become increasingly ubiquitous in the real world. For example, the users in a social network may hold common or opposite political views. Since the network homophily assumption is the theoretical basis of the message-passing mechanism in GNNs, those unsigned GNNs cannot be applied to signed graphs directly. As a pioneering work of signed GNNs, SGCN (Garshan et al., 2016) assigns balanced embedding and unbalanced embedding for each node and propagates the two embeddings in the signed graph based on balance theory. Furtherly, SNEA (Shi et al., 2017) optimizes the message-passing process in SGCN by assigning different importance coefficients to each node pair connected with different edges. Inspired by adversarial learning, ASiNe (Shi et al., 2017) plays a minimax game in the positive graph and negative graph by leveraging a generator and a discriminator for positive edges and negative edges in a signed graph, respectively. SiReN (Shi et al., 2018) generates positive embeddings and negative embeddings for each node in a signed graph via a GNN model and a multilayer perceptron (MLP) model, respectively. Then SiReN adopts an attention layer to integrate the two embeddings into the final embeddings. Unlike the existing methods based on the balance theory assumption, which may not be directly applicable to the signed bipartite graph in recommender systems, the proposed method in this work takes a different approach. It splits the raw rating graph into two distinct graphs and emphasizes the propagation of information within each graph based on the type of edges. ## 3. Method In this section, we introduce the notations used in the paper, present the architecture of PANE-GNN, and describe its optimization objective. 
### Notations In the given raw rating graph \(\mathcal{G}=(\mathcal{U},\mathcal{I},\mathcal{E})\), where \(\mathcal{U}\) represents the set of users, \(\mathcal{I}\) represents the set of items, and \(\mathcal{E}\) represents the set of edges, we split the graph into two edge-disjoint graphs: the positive graph \(\mathcal{G}p=(\mathcal{U},\mathcal{I},\mathcal{E}p)\) and the negative graph \(\mathcal{G}n=(\mathcal{U},\mathcal{I},\mathcal{E}n)\). Here, \(\mathcal{E}p\) represents the edges corresponding to positive ratings, and \(\mathcal{E}n\) represents the edges corresponding to negative ratings. The union of \(\mathcal{E}p\) and \(\mathcal{E}n\) gives the set of all edges \(\mathcal{E}\). In the positive graph \(\mathcal{G}p\), we aim to learn the interest embeddings for users and items, denoted as \(\mathbf{z}_{u}\) and \(\mathbf{z}_{i}\), respectively. These embeddings capture the relationship between liking and being liked. In contrast, in the negative graph \(\mathcal{G}n\), we focus on learning the disinterest embeddings for users and items, represented as \(\mathbf{v}_{u}\) and \(\mathbf{v}_{i}\), respectively. These embeddings capture the relationship between disliking and being disliked. For a comprehensive overview of the notations used in this paper, please refer to Table 1. ### Model architecture The architecture of the PANE-GNN model is depicted in Figure 3. It consists of three key technical designs: message passing on the positive graph \(\mathcal{G}p\) and the negative graph \(\mathcal{G}n\), contrastive learning on the negative graph \(\mathcal{G}n\), and ranking with a disinterest-score filter. In the message passing stage, information propagation takes place on both \(\mathcal{G}p\) and \(\mathcal{G}n\). This solution allows the model to leverage the structural information present in both graphs to enhance the representation learning process. The contrastive learning stage focuses on the negative graph \(\mathcal{G}n\). By employing contrastive learning, the model denoises the negative feedback and generates robust embeddings that remain invariant to graph perturbations. Finally, the ranking method with a disinterest-score filter is applied to generate the final recommendations. This method incorporates the learned embeddings from both the positive and negative graphs to rank the items and filter out items that do not align with the user's interests. \begin{table} \begin{tabular}{c c} \hline \hline **Notation** & **Description** \\ \hline \(\mathcal{U}\) & Set of users. \\ \(\mathcal{I}\) & Set of items. \\ \(\mathcal{E}_{p}\) & Set of positive edges. \\ \(\mathcal{E}_{n}\) & Set of negative edges. \\ \(\mathcal{E}=\mathcal{E}_{p}\cup\mathcal{E}_{n}\) & Set of all edges. \\ \(\mathcal{G}=(\mathcal{U},\mathcal{I},\mathcal{E})\) & Raw rating graph. \\ \(\mathcal{G}_{p}=(\mathcal{U},\mathcal{I},\mathcal{E}_{p})\) & Positive graph. \\ \(\mathcal{G}_{n}=(\mathcal{U},\mathcal{I},\mathcal{E}_{n})\) & Negative graph. \\ \(\mathcal{G}_{d}=(\mathcal{U},\mathcal{I},\mathcal{E}_{d})\) & Distorted graph from \(\mathcal{G}_{n}\). \\ \(N=\mathcal{I}\{U,\mathcal{I}\}\) & Number of all nodes in \(\mathcal{G}\). \\ \(\mathbf{A}_{p},\mathbf{A}_{n},\mathbf{A}_{d}\in\mathbb{R}^{N\times N}\) & Adjacency matrices of \(\mathcal{G}_{p}\), \(\mathcal{G}_{n}\), \(\mathcal{K}\)\(\mathcal{G}_{d}\). \\ \(\mathcal{N}_{p}(u)\), \(\mathcal{N}_{n}(u)\), \(\mathcal{N}_{d}(u)\) & Neighbor sets of user \(u\) in \(\mathcal{G}_{p}\), \(\mathcal{G}_{n}\), \(\&\&\)\(\)\(\mathcal{G}_{d}\). 
\\ \(\mathcal{N}_{p}(i)\), \(\mathcal{N}_{n}(i)\), \(\mathcal{N}_{d}(i)\) & Neighbor sets of item \(i\) in \(\mathcal{G}_{p}\), \(\mathcal{G}_{n}\), \(\&\)\(\&\)\(\mathcal{G}_{d}\). \\ \(\text{Z}\in\mathbb{R}^{N\times H}\) & Interest embedding matrix. \\ \(\mathbf{v}_{i}\in\mathbb{R}^{N\times H}\) & Disinterest embedding matrix. \\ \(\mathbf{z}_{u},\mathbf{v}_{i}\in\mathbb{R}^{H}\) & Interest embeddings on \(\mathcal{G}_{p}\). \\ \(\mathbf{v}_{u},\mathbf{v}_{i}\in\mathbb{R}^{H}\) & Disinterested embeddings on \(\mathcal{G}_{n}\). \\ \(\hat{\mathbf{v}}_{u},\hat{\mathbf{v}}_{i}\in\mathbb{R}^{H}\) & Disinterested embeddings on \(\mathcal{G}_{d}\). \\ \hline \(H\) & Embedding size. \\ \(K\) & Layer number of graph neural networks. \\ \(p\) & Probability of edge removing. \\ \(b\) & Feedback-aware coefficient. \\ \(\delta\) & Filtering threshold. \\ \(\lambda_{1}\) & Contrastive learning coefficient. \\ \(\lambda_{2}\) & L2 regularization coefficient. \\ \(\tau\) & Temperature coefficient. \\ \hline \hline \end{tabular} \end{table} Table 1. Frequently used notations in this paper. #### 3.2.1. Message passing on \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\) In contrast to prior work that primarily focuses on message passing on the positive graph \(\mathcal{G}\mathcal{p}\), PANE-GNN takes into account the high-order structural information in the negative graph \(\mathcal{G}n\) as well. In PANE-GNN, we introduce two types of embeddings: interest embeddings and disinterest embeddings. These embeddings capture the relationships between liking and being liked, as well as disliking and being disliked, respectively, for each user or item. To effectively aggregate and propagate these embeddings, PANE-GNN utilizes a technique called light graph convolution (LGC) (Gardner et al., 2017), which allows the embeddings to be updated and combined within the respective graph structures. In the message passing process on the positive graph \(\mathcal{G}_{p}\), the interest embeddings \(\mathbf{z}_{u}^{(k+1)}\) and \(\mathbf{z}_{i}^{(k+1)}\) at the (\(k\)+1)-th layer are updated by summing the normalized interest embeddings at the \(k\)-th layer: \[\mathbf{z}_{u}^{(k+1)}=\sum_{i\in N_{\mathbf{p}}(u)}\frac{1}{ \sqrt{|N_{\mathcal{D}}(u)|}\sqrt{|N_{\mathcal{D}}(i)|}}\mathbf{z}_{i}^{(k)},\] \[\mathbf{z}_{i}^{(k+1)}=\sum_{u\in N_{\mathcal{D}}(i)}\frac{1}{ \sqrt{|N_{\mathcal{D}}(i)|}\sqrt{|N_{\mathcal{D}}(u)|}}\mathbf{z}_{u}^{(k)}. \tag{1}\] The final interest embeddings \(\mathbf{z}_{u}\) and \(\mathbf{z}_{i}\) can be obtained by averaging the interest embeddings from all layers: \[\mathbf{z}_{u}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{z}_{u}^{(k)},\quad\mathbf{ z}_{i}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{z}_{i}^{(k)}, \tag{2}\] where \(K\) is the total number of layers. In Eq. (2), \(\mathbf{z}_{u}^{(0)}\) and \(\mathbf{z}_{i}^{(0)}\) are trainable parameters that represent the initial embeddings for user \(u\) and item \(i\), respectively. These embeddings are randomly initialized before the model training process begins. 
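To make the propagation rule concrete, the following is a minimal PyTorch-style sketch of the light graph convolution in Eqs. (1)-(2). It assumes the symmetrically normalized adjacency matrix of the bipartite graph has already been built as a sparse tensor; all function and variable names are illustrative and not taken from the paper's released implementation.

```python
import torch

def lgc_propagate(adj_norm: torch.Tensor, emb_0: torch.Tensor, num_layers: int) -> torch.Tensor:
    """Light graph convolution of Eqs. (1)-(2).

    adj_norm   : sparse (N x N) normalized adjacency D^{-1/2} A D^{-1/2},
                 with users and items stacked into a single node set.
    emb_0      : (N x H) trainable 0-th layer embeddings (e.g., Z^{(0)}).
    num_layers : K, the number of propagation layers.
    """
    layer_embs = [emb_0]
    emb = emb_0
    for _ in range(num_layers):
        # One propagation step: normalized sum over neighbors, with no
        # feature transformation and no nonlinear activation.
        emb = torch.sparse.mm(adj_norm, emb)
        layer_embs.append(emb)
    # Average the embeddings of layers k = 0..K (Eq. (2)).
    return torch.stack(layer_embs, dim=0).mean(dim=0)
```

The same routine can be reused unchanged on the negative graph to obtain the layer-averaged disinterest embeddings described next.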
Figure 3. The architecture of PANE-GNN. In model training, PANE-GNN performs message passing on both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\) and contrastive learning on \(\mathcal{G}_{n}\) to generate the interest embedding \(\mathbf{Z}\) and the disinterest embedding \(\mathbf{V}\). In model prediction, PANE-GNN recommends a sequence of items to each user based on a ranking method with a disinterest-score filter.

For the message passing process on the negative graph \(\mathcal{G}_{n}\), the disinterest embeddings \(\mathbf{v}_{u}^{(k+1)}\) and \(\mathbf{v}_{i}^{(k+1)}\) at the (\(k\)+1)-th layer are updated according to the following equations:
\[\mathbf{v}_{u}^{(k+1)}=\sum_{i\in\mathcal{N}_{n}(u)}\frac{1}{\sqrt{|\mathcal{N}_{n}(u)|}\sqrt{|\mathcal{N}_{n}(i)|}}\mathbf{v}_{i}^{(k)},\quad\mathbf{v}_{i}^{(k+1)}=\sum_{u\in\mathcal{N}_{n}(i)}\frac{1}{\sqrt{|\mathcal{N}_{n}(i)|}\sqrt{|\mathcal{N}_{n}(u)|}}\mathbf{v}_{u}^{(k)}. \tag{3}\]
The final disinterest embeddings \(\mathbf{v}_{u}\) and \(\mathbf{v}_{i}\) are calculated by averaging the disinterest embeddings of all layers:
\[\mathbf{v}_{u}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{v}_{u}^{(k)},\quad\mathbf{v}_{i}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{v}_{i}^{(k)}, \tag{4}\]
where \(\mathbf{v}_{u}^{(0)}\) and \(\mathbf{v}_{i}^{(0)}\) are trainable parameters that are randomly initialized, similar to the initialization of the interest embeddings \(\mathbf{z}_{u}^{(0)}\) and \(\mathbf{z}_{i}^{(0)}\). Correspondingly, the matrix forms of the above message-passing processes are as follows:
\[\mathbf{Z}^{\prime}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{Z}^{(k)},\quad\mathbf{Z}^{(k+1)}=(\mathbf{D}_{p}^{-\frac{1}{2}}\mathbf{A}_{p}\mathbf{D}_{p}^{-\frac{1}{2}})\mathbf{Z}^{(k)}, \tag{5}\]
\[\mathbf{V}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{V}^{(k)},\quad\mathbf{V}^{(k+1)}=(\mathbf{D}_{n}^{-\frac{1}{2}}\mathbf{A}_{n}\mathbf{D}_{n}^{-\frac{1}{2}})\mathbf{V}^{(k)}, \tag{6}\]
where \(\mathbf{D}_{p}=\text{diag}(\mathbf{A}_{p}\mathbf{1}_{N\times N})\) and \(\mathbf{D}_{n}=\text{diag}(\mathbf{A}_{n}\mathbf{1}_{N\times N})\) are the degree matrices of \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\), respectively. Here \(N=|\mathcal{U}\cup\mathcal{I}|\) is the number of all nodes in \(\mathcal{G}\) and \(\mathbf{1}_{N\times N}\in\mathbb{R}^{N\times N}\) is a square matrix of ones. To incorporate dense non-graph information into the model, we use a two-layer MLP model to transform the initial interest embeddings \(\mathbf{Z}^{(0)}\) into a more expressive embedding \(\mathbf{Z}^{\prime\prime}\):
\[\mathbf{Z}^{\prime\prime}=\text{ReLU}(\text{ReLU}(\mathbf{Z}^{(0)}\mathbf{W}_{\text{MLP}}^{(1)})\mathbf{W}_{\text{MLP}}^{(2)}), \tag{7}\]
where \(\mathbf{W}_{\text{MLP}}^{(1)},\mathbf{W}_{\text{MLP}}^{(2)}\in\mathbb{R}^{H\times H}\) are two trainable weight matrices that perform feature transformation. Next, to determine the importance of the \(\mathbf{Z}^{\prime}\) and \(\mathbf{Z}^{\prime\prime}\) embeddings in generating the final interest embedding, we employ an attention mechanism.
We introduce an attention layer that learns two importance scores \(\alpha_{1},\alpha_{2}\in\mathbb{R}^{+}\) and yields the final interest embedding \(\mathbf{Z}\):
\[\begin{split}\mathbf{Z}=(\alpha_{1}\mathbf{1}_{N\times H})\odot\mathbf{Z}^{\prime}+(\alpha_{2}\mathbf{1}_{N\times H})\odot\mathbf{Z}^{\prime\prime},\\ (\alpha_{1},\alpha_{2})=\text{Softmax}(\text{Tanh}(\mathbf{Z}^{\prime}\mathbf{W}_{\text{Att}}^{(1)})\mathbf{W}_{\text{Att}}^{(2)},\text{Tanh}(\mathbf{Z}^{\prime\prime}\mathbf{W}_{\text{Att}}^{(1)})\mathbf{W}_{\text{Att}}^{(2)}),\end{split} \tag{8}\]
where \(\mathbf{W}_{\text{Att}}^{(1)}\in\mathbb{R}^{H\times H}\) and \(\mathbf{W}_{\text{Att}}^{(2)}\in\mathbb{R}^{H\times 1}\) are two trainable weight matrices and \(\odot\) denotes the Hadamard product.

#### 3.2.2. Contrastive learning on \(\mathcal{G}_{n}\)

Positive feedback serves as a reliable indicator of users' interests, while negative feedback is more susceptible to timeliness and contains more noise compared to positive feedback (Kumar et al., 2017). To address this issue, we propose a denoising approach in PANE-GNN that distorts the raw negative graph \(\mathcal{G}_{n}\) into a new graph \(\mathcal{G}_{d}\) and applies contrastive learning between the two graphs. This is accomplished by applying edge removing, a widely used data augmentation strategy in graph contrastive learning, to the adjacency matrix \(\mathbf{A}_{n}\) of the negative graph \(\mathcal{G}_{n}\), resulting in the modified adjacency matrix \(\mathbf{A}_{d}\):
\[\mathbf{A}_{d}=\mathbf{A}_{n}\odot\mathbf{P},\quad\mathbf{P}\sim\mathcal{B}(1-p), \tag{9}\]
where \(\mathbf{P}\) is a random masking matrix whose entries are drawn from a Bernoulli distribution with parameter \(1-p\), i.e., each negative edge is removed with probability \(p\). Then, for the message passing process on \(\mathcal{G}_{d}\), the disinterest embeddings \(\tilde{\mathbf{v}}_{u}^{(k+1)}\) and \(\tilde{\mathbf{v}}_{i}^{(k+1)}\) at the (\(k\)+1)-th layer are updated using the following equations:
\[\tilde{\mathbf{v}}_{u}^{(k+1)}=\sum_{i\in\mathcal{N}_{d}(u)}\frac{1}{\sqrt{|\mathcal{N}_{d}(u)|}\sqrt{|\mathcal{N}_{d}(i)|}}\tilde{\mathbf{v}}_{i}^{(k)},\quad\tilde{\mathbf{v}}_{i}^{(k+1)}=\sum_{u\in\mathcal{N}_{d}(i)}\frac{1}{\sqrt{|\mathcal{N}_{d}(i)|}\sqrt{|\mathcal{N}_{d}(u)|}}\tilde{\mathbf{v}}_{u}^{(k)}, \tag{10}\]
where \(\mathcal{N}_{d}(u)\subset\mathcal{N}_{n}(u)\) and \(\mathcal{N}_{d}(i)\subset\mathcal{N}_{n}(i)\) are the neighbor sets of user \(u\) and item \(i\) in \(\mathcal{G}_{d}\), respectively. The final disinterest embeddings \(\tilde{\mathbf{v}}_{u}\) and \(\tilde{\mathbf{v}}_{i}\) in \(\mathcal{G}_{d}\) are calculated by averaging the disinterest embeddings of all layers:
\[\tilde{\mathbf{v}}_{u}=\frac{1}{K+1}\sum_{k=0}^{K}\tilde{\mathbf{v}}_{u}^{(k)},\quad\tilde{\mathbf{v}}_{i}=\frac{1}{K+1}\sum_{k=0}^{K}\tilde{\mathbf{v}}_{i}^{(k)}, \tag{11}\]
where \(\tilde{\mathbf{v}}_{u}^{(0)}=\mathbf{v}_{u}^{(0)}\) and \(\tilde{\mathbf{v}}_{i}^{(0)}=\mathbf{v}_{i}^{(0)}\). Correspondingly, the matrix form of the message-passing process on \(\mathcal{G}_{d}\) is as follows:
\[\tilde{\mathbf{V}}=\frac{1}{K+1}\sum_{k=0}^{K}\tilde{\mathbf{V}}^{(k)},\quad\tilde{\mathbf{V}}^{(k+1)}=(\mathbf{D}_{d}^{-\frac{1}{2}}\mathbf{A}_{d}\mathbf{D}_{d}^{-\frac{1}{2}})\tilde{\mathbf{V}}^{(k)}, \tag{12}\]
where \(\mathbf{D}_{d}=\text{diag}(\mathbf{A}_{d}\mathbf{1}_{N\times N})\) is the degree matrix of \(\mathcal{G}_{d}\).
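As an illustration of the augmentation step, the sketch below drops edges from the negative graph with probability \(p\), which is one straightforward way to realize Eq. (9) when the graph is stored as an edge list rather than a dense adjacency matrix. The names are illustrative and not from the paper's code.

```python
import torch

def drop_edges(edge_index: torch.Tensor, p: float) -> torch.Tensor:
    """Edge removing (Eq. (9)): independently keep each negative edge with
    probability 1 - p to build the distorted graph G_d from G_n.

    edge_index : (2, |E_n|) tensor of (user, item) edges of the negative graph.
    p          : probability of removing an edge.
    """
    keep_prob = torch.full((edge_index.size(1),), 1.0 - p)
    keep_mask = torch.bernoulli(keep_prob).bool()  # one realization of the mask P
    return edge_index[:, keep_mask]
```

The kept edges are renormalized and pushed through the same light graph convolution as before, and the resulting embeddings \(\mathbf{V}\) and \(\tilde{\mathbf{V}}\) are pulled together per node by the InfoNCE objective defined later in Eq. (18).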
``` 0: Positive graph \(\mathcal{G}_{p}\), negative graph \(\mathcal{G}_{n}\), trainable parameters \(\Theta_{\text{Enh}}{=}\left\{\mathbf{Z}^{(0)},\mathbf{V}^{(0)}\right\}\) and \(\Theta_{\text{NN}}{=}\left[\mathbf{W}_{\text{MLP}}^{(1)}\mathbf{W}_{\text{MLP}}^{(2)}, \mathbf{W}_{\text{Att}}^{(1)},\mathbf{W}_{\text{Att}}^{(2)}\right]\), embedding size \(H\), GNNs layer number \(K\), hyperparameters \(p,b,\delta,\lambda_{1},\lambda_{2},\tau\). 0: Interest embedding matrix \(\mathbf{Z}\), disinterest embedding matrix \(\mathbf{V}\). 1: Initialize \(\Theta_{\text{Enh}}{=}\) and \(\Theta_{\text{NN}}{\text{ via the Glorot method}}\); 2: Initialize embedding matrices: \(\mathbf{Z}\leftarrow\mathbf{Z}^{(0)},\mathbf{V}\leftarrow\mathbf{V}^{(0)}, \tilde{\mathbf{V}}\leftarrow\mathbf{V}^{(0)}\); 3: Distort \(\mathcal{G}_{n}\) into \(\mathcal{G}_{d}\) according to Eq. (9); 4:while not converged do 5: Generate training set \(\mathcal{D}_{p}\) from \(\mathcal{G}_{p}\) based on Eq. (14); 6: Generate training set \(\mathcal{D}_{n}\) from \(\mathcal{G}_{n}\) based on Eq. (15); 7:for each mini-batch \(\mathcal{B}_{p}{\subset}\mathcal{D}_{p}\)do 8: Calculate \(\mathbf{Z}^{\prime}\) according to Eq. (5); 9: Calculate \(\mathbf{Z}^{\prime\prime}\) according to Eq. (7); 10: Update \(\mathbf{Z}\) according to Eq. (8); 11:endfor 12:for each mini-batch \(\mathcal{B}_{n}{\subset}\mathcal{D}_{n}\)do 13: Update \(\mathbf{V}\) according to Eq. (6); 14: Update \(\mathbf{V}\) according to Eq. (12); 15:endfor 16: Calculate \(\mathcal{L}_{\text{DB}}\) according to Eq. (17); 17: Calculate \(\mathcal{L}_{\text{CL}}\) according to Eq. (18); 18:\(\mathcal{L}_{\text{Reg}}\leftarrow\|\Theta_{\text{Enh}}\|^{2}\); 19:\(\mathcal{L}\leftarrow\mathcal{L}_{\text{DB}}+\lambda_{1}\cdot\mathcal{L}_{ \text{CL}}+\lambda_{2}\cdot\mathcal{L}_{\text{Reg}}\) Update \(\Theta_{\text{Enh}}\) and \(\Theta_{\text{NN}}\) by taking one step of gradient descent on \(\mathcal{L}\); 20:endwhile 21:return\(\mathbf{Z},\mathbf{V}\). ``` **Algorithm 1** PANE-GNN #### 3.2.3. Ranking with a disinterest-score filter To calculate interest scores, we utilize the matrix multiplication between the user embedding \(\mathbf{z}_{u}\) and the item embedding \(\mathbf{z}_{i}\), denoted as \(S_{\text{it}}=\mathbf{z}_{u}\mathbf{z}_{i}^{\text{T}}\). This score represents the affinity between user \(u\) and item \(i\) based on their respective interest embeddings. Similarly, the disinterest score is calculated as \(S_{\text{it}}=\mathbf{v}_{u}\mathbf{v}_{i}^{\text{T}}\). This score captures the disinterest or negative affinity between user \(u\) and item \(i\) based on their respective disinterest embeddings. Furthermore, we leverage mini-batch learning to train PANE-GNN, then each mini-batch on \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\) are denoted as \(\mathcal{B}_{p}{\subset}\mathcal{D}_{p}\) and \(\mathcal{B}_{n}{\subset}\mathcal{D}_{n}\), respectively. The trainable parameter group of PANE-GNN consists of two parts: the embeddings \(\Theta_{\text{Emb}}{=}\left\{\mathbf{Z}^{(0)},\mathbf{V}^{(0)}\right\}\) of the 0-th layer, and the neural network parameters \(\Theta_{\text{NN}}{=}\left\{\mathbf{W}_{\text{MLP}}^{(1)},\mathbf{W}_{\text{MLP}}^{(2)}, \mathbf{W}_{\text{Att}}^{(1)},\mathbf{W}_{\text{Att}}^{(2)}\right\}\), which include the weight matrices for the MLP layers and attention layers. 
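For prediction, the ranking step of Section 3.2.3 can be sketched as follows. Whether the threshold \(\delta\) is compared against the raw inner product or a sigmoid-normalized disinterest score is an implementation detail not fixed by the text above, so the sigmoid used here is an assumption, and all names are illustrative.

```python
import torch

def recommend_top_k(z_user, item_z, v_user, item_v, delta: float, k: int):
    """Ranking with a disinterest-score filter (Section 3.2.3).

    z_user : (H,) interest embedding of the target user.
    item_z : (num_items, H) interest embeddings of all candidate items.
    v_user : (H,) disinterest embedding of the target user.
    item_v : (num_items, H) disinterest embeddings of all candidate items.
    """
    interest = item_z @ z_user                    # interest scores z_u z_i^T
    disinterest = torch.sigmoid(item_v @ v_user)  # assumed normalization of v_u v_i^T
    # Filter: drop items whose disinterest score exceeds the threshold delta,
    # then rank the remaining items by interest score and return the top-k.
    interest = interest.masked_fill(disinterest > delta, float("-inf"))
    return torch.topk(interest, k).indices
```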
The overall loss function \(\mathcal{L}\) is defined as follows: \[\mathcal{L}=\mathcal{L}_{\text{DB}}+\lambda_{1}{\cdot}\mathcal{L}_{\text{CL} }+\lambda_{2}{\cdot}\mathcal{L}_{\text{Reg}}, \tag{16}\] where \(\mathcal{L}_{\text{Reg}}{=}\left\|\Theta_{\text{Emb}}\right\|^{2}\) denotes the L2 regularization term of the 0-th layer embeddings. \(\lambda_{1}\) and \(\lambda_{2}\) are two hyperparameters that control the strength of contrastive learning and L2 regularization, respectively. In order to incorporate the feedback information from both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\), we propose a dual feedback-aware BPR loss \(\mathcal{L}_{\text{DB}}\) inspired by the Bayesian personalized ranking (BPR) loss (Srivastava et al., 2015): \[\mathcal{L}_{\text{DB}}=-\sum_{(u,i,j)\in\mathcal{B}_{p}}\ln\sigma(\hat{y}_{u,i}{-}\hat{y}_{u,j})-\sum_{(u,i,j)\in\mathcal{B}_{n}}\ln\sigma(\hat{y}_{u,j}{ -}\hat{y}_{u,i}), \tag{17}\] where \(\sigma(x){=}\frac{1}{1+\exp(-x)}\) is the sigmoid function and \(b{>}1\) is a feedback-aware coefficient. The presence of \(b\) ensures the following priority order: positive feedback \(>\) negative feedback \(>\) no feedback. This priority implies that positive feedback is given higher importance than negative feedback, and both positive and negative feedback are considered more valuable than no feedback. In addition, we design the contrastive objective \(\mathcal{L}_{\text{CL}}\) on \(\mathcal{G}_{n}\) via the InfoNCE loss (Kumar et al., 2017): \[\mathcal{L}_{\text{CL}}{=}-\sum_{u\in\mathcal{U}}\ln\frac{\exp(\frac{\mathbf{ v}_{u}\hat{v}_{u}^{\text{T}}}{\tau})}{\sum_{u^{\prime}\in\mathcal{U}}\exp(\frac{ \mathbf{v}_{u}\hat{v}_{u}^{\text{T}}}{\tau})}-\sum_{i\in I}\ln\frac{\exp(\frac {\mathbf{v}_{i}\hat{v}_{i}^{\text{T}}}{\tau})}{\sum_{i^{\prime}\in I}\exp( \frac{\mathbf{v}_{i}\hat{v}_{i}^{\text{T}}}{\tau})}, \tag{18}\] where \(\tau\) is a temperature coefficient. This objective allows us to leverage the contrastive learning framework to enhance the robustness and discriminative power of disinterest embeddings in the recommendation process. The complete procedure of PANE-GNN is summarized in Algorithm 1. ## 4. Experiment In this section, we provide descriptions of the four real-world datasets (Section 4.1) and five baselines (Section 4.2) used in our experiments. We also introduce the metrics (Section 4.3) and hyperparameter setups (Section 4.4). Furthermore, we compare the performance of different methods and conduct a comprehensive evaluation of the performance of PANE-GNN (Section 4.5). ### Datasets We evaluate our approach on four real-world datasets: MovieLens-1M (ML-1M), Amazon-Book, Yelp, and KuaiRec. * **ML-1M** ([http://q6e9.cn/VMQw](http://q6e9.cn/VMQw)): This widely-used movie review dataset consists of approximately 6,000 users and 4,000 movies. Users rate movies on a 5-star scale, and each user has provided at least 20 ratings. * **Amazon-Book** ([https://61a.life/K7oer](https://61a.life/K7oer)): We selected the Amazon-Book dataset from a large crawl of product reviews on Amazon. The dataset comprises around 35,000 users, 38,000 items, and 1.9 million 5-star ratings. Similar to previous work (Kumar et al., 2017; Wang et al., 2018), we removed users or items with fewer than 20 interactions. * **Yelp** ([https://x064.cn/jak1U](https://x064.cn/jak1U)): This dataset consists of reviews for local businesses. It includes approximately 41,000 users, 30,000 businesses, and 2.1 million 5-star ratings. 
Like the Amazon-Book dataset, we excluded users or businesses with fewer than 20 interactions. * **KuaiRec** ([https://54z.life/DuQDC](https://54z.life/DuQDC)): This real-world dataset was collected from the recommendation logs of Kuaishou, a video-sharing mobile app. It contains around 7,100 users, 10,000 short videos (each with multiple tags), and a user-video interaction matrix. For ML-1M, Amazon-Book, and Yelp, we use the threshold of 3.5 to split the original ratings as binary signals. For KuaiRec, as suggested by the authors in (Kumar et al., 2017), we use the rule of "whether the video watch ratio is higher than 2.0" to achieve binary signals. The detailed statistics of the above four datasets are shown in Table 2. In the training set of KuaiRec, the number of negative ratings is far higher than that of positive ratings, which provides a more realistic and biased training environment compared to the other three datasets. ### Baselines We compare PANE-GNN with five state-of-the-art GNN-based recommendation models. * **NGCF**(Xu et al., 2017): NGCF is a GNN-based recommendation framework that explicitly incorporates high-order collaborative signals from the user-item bipartite graph through embedding propagation. * **LR-GCCF**(Xu et al., 2017): LR-GCCF incorporates the GCN model into the recommender system. Instead of employing non-linear transformations in the GCN, LR-GCCF utilizes linear embedding propagations. Additionally, it introduces a residual network structure to address the over-smoothing issue that can arise from applying multiple layers of graph convolutions. * **LightGCN**(Hu et al., 2017): LightGCN redesigns a light graph convolution structure specific to recommendations by abandoning the use of feature transformation and nonlinear activation. This approach aims to simplify the model while maintaining competitive performance. * **SGCN**(Hu et al., 2017): SGCN leverages balance theory to aggregate and propagate information in a signed graph. By considering balanced and unbalanced embeddings, SGCN effectively captures the information from both positive and negative feedback signals. * **SiReN**(Wang et al., 2018): SiReN is designed for signed bipartite graphs. It utilizes a GNN model and an MLP model to generate two sets of embeddings for the partitioned graph. Additionally, SiReN designs a sign-aware BPR loss to differentiate the effects of high-rating and low-rating items. ### Metrics We evaluate the effectiveness of PANE-GNN using three performance metrics: \(Precision@K\), \(Recall@K\), and \(nDCG@K\) (normalized discounted cumulative gain\(@K\)). These metrics provide insights into the accuracy, completeness, and ranking quality of the recommendation results. _Precision@K_ measures the proportion of relevant items among the top-\(K\) recommended results for a user: \[Precision@K=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{|GT_{u}\cap R_{u}( K)|}{|GT_{u}|}, \tag{19}\] where \(GT_{u}\) denotes the ground truth item set liked by user \(u\) in the test set and \(R_{u}(K)\) denotes the recommended top-\(K\) items for user \(u\). _Recall@K_ quantifies the proportion of relevant items among all correct results for a user: \[Recall@K=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{|GT_{u}\cap R_{u}( K)|}{|GT_{u}|}. 
\tag{20}\] _nDCG@K_ is a ranking quality measurement that assigns higher values to relevant items appearing at higher ranks: \[nDCG@K =\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{DCG_{u}@K}{ IDCG_{u}@K},\] \[DCG_{u}@K =\sum_{i=1}^{K}\frac{G_{u}(i)}{log_{2}(i+1)}\;\;\;iDCG_{u}@K =\sum_{i=1}^{K}\frac{1}{log_{2}(i+1)}, \tag{21}\] where \(G_{u}(i)\) equals 1 if the item at rank \(i\) in the recommended list is in the ground truth item set \(GT_{u}\), and 0 otherwise. ### Hyperparameter Setups In the experiments, we set the embedding size of PANE-GNN to 64, similar to LightGCN and SiReN. The embedding parameters of PANE-GNN are initialized using the Glorot method (Kang et al., 2017). We use the Adam optimizer (King and Ba, 2015) with a default learning rate of 5e-3 to optimize PANE-GNN. The training process of PANE-GNN employs mini-batch learning, where the default batch size is set to 1,024. We train PANE-GNN for a total of 1,000 epochs for all datasets. PANE-GNN incorporates L2 regularization with a coefficient of 0.01 on KuaiRec and 0.05 on the other three datasets. Negative sampling is employed during training, and the number of negative samples is set to 1 on KuaiRec and 40 on the other three datasets. The architecture of PANE-GNN consists of 4 layers of GNNs and 2 layers of MLP in total. The temperature value used in the contrastive loss is set to 0.8. Additionally, the dropout rate for the MLP layer or attention layer is set to 0.5. The filter in PANE-GNN utilizes a disinterest score threshold of 0.5 by default. The implementation of PANE-GNN is done using PyTorch. The source code is available at [https://reurl.cc/0ELqO6](https://reurl.cc/0ELqO6). For ML-1M, Amazon-Book, and Yelp datasets, we perform 5-fold cross-validation by splitting each dataset into training and test sets. The training set contains 80% of the ratings, while the remaining 20% constitutes the test set. As for KuaiRec, following the suggestion in the original paper (Kang et al., 2017), we use the user-item interactions from the fully-observed small matrix as the test set, and the remaining interactions are used for training. ### Experimental Results We conduct experiments to answer the following four key research questions: * **RQ1:** Does PANE-GNN improve overall recommendation performance compared to other GNN-based methods (Section 4.5.1)? * **RQ2:** How do different components in PANE-GNN affect its performance (Section 4.5.2)? * **RQ3:** How robust is PANE-GNN in terms of different hyperparameters (Section 4.5.3)? * **RQ4:** What are the final recommendation results of PANE-GNN from a qualitative perspective (Section 4.5.4)? #### 4.5.1. Comparison of overall performance (RQ1) Table 3 presents a comprehensive performance comparison between PANE-GNN and state-of-the-art GNN-based methods using the evaluation metrics _Precision@K_, _Recall@K_, and _nDCG@K_ with varying values of \(K\). Across all four datasets (ML-1M, Amazon-Book, Yelp, and KuaiRec), PANE-GNN consistently outperforms the five baseline methods, demonstrating the success and effectiveness of the designed message-passing approach on both the positive and negative graphs. Notably, the performance improvement of PANE-GNN on KuaiRec is particularly significant compared to the other datasets. For instance, PANE-GNN outperforms the runner-up LightGCN by 0.85% in terms of _Recall@_05 and 2.87% in terms of _Recall@_01. 
This outcome highlights the advantage of PANE-GNN when dealing with biased datasets where the number of positive ratings is considerably lower than negative ratings. In comparison to SiReN, which utilizes an attention model to integrate embeddings from the positive and negative graphs, PANE-GNN surpasses it in empirical evaluation. It is because PANE-GNN generates the disinterest embedding \(\mathbf{V}\) from the negative graph, which provides a comprehensive user profile and enables the filtering of irrelevant items. Interestingly, SGCN, which relies on the balance theory assumption, performs poorly compared to other methods. This finding suggests that the balance theory assumption, designed for signed unipartite graphs, is not suitable for real-world recommendation scenarios where users typically have diverse interests. #### 4.5.2. Ablation studies (RQ2) The ablation studies on PANE-GNN are conducted to investigate the functions of different components. Four variants of PANE-GNN are designed and evaluated: * **Variant-A**: Using message passing on the negative graph \(\mathcal{G}_{n}\). * **Variant-B**: Using message passing on the positive graph \(\mathcal{G}_{p}\). * **Variant-C**: Using message passing on both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\). * **Variant-D**: Introducing graph contrastive learning on Variant-C. The results of the ablation studies on the ML-1M and KuaiRec datasets are presented in Table 4. The observations from the ablation studies are as follows. **Variant-A**: Variant-A, which only uses message passing on the negative graph \(\mathcal{G}_{n}\), exhibits poor performance in all metrics on both datasets. It indicates that positive feedback is crucial for recognizing users' interests, and negative feedback alone cannot replace it, although it helps recognize users' dislikes. **Variant-B** vs. **Variant-C**: Comparing Variant-B (message passing only on \(\mathcal{G}p\)) and Variant-C (message passing on both \(\mathcal{G}p\) and \(\mathcal{G}_{n}\)), it is observed that Variant-C, which integrates the structural information from the negative graph, performs better. It suggests \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **\#User** & **\#Item** & **\#Rating** & **Density (\%)** & **Ratio** \\ \hline ML-1M & 6,040 & 3,952 & 1,000,209 & 4.19 & 1:0.73 \\ Amazon-Book & 35,736 & 38,121 & 1,960,674 & 0.14 & 1:0.24 \\ Yelp & 41,772 & 30,037 & 2,116,215 & 0.16 & 1:0.47 \\ KuaiRec & 7,176 & 10,728 & 761,425 & 0.98 & 1:13.30 \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics of four real-world datasets. “Ratio” denotes the number ratio between positive and negative ratings in the training set. that incorporating the negative graph enhances the model's performance. **Variant-C** vs. **Variant-D**: Introducing the contrastive learning loss on \(\mathcal{G}_{n}\) in Variant-D further improves the model's performance. For instance, Variant-D achieves a 3.14% higher _Recall_@10 than Variant-C on the KuaiRec dataset. It demonstrates the effectiveness of contrastive learning for learning accurate disinterest embeddings from the negative graph. **Variant-D** vs. **PANE-GNN**: Comparing Variant-D and the full PANE-GNN, it is observed that leveraging the disinterest-score filter in ranking consistently improves the performance of Variant-D. It confirms the accuracy of disinterest scores and the effectiveness of the disinterest-score filter. #### 4.5.3. 
Hyperparameter sensitivity analysis (RQ3) To evaluate the sensitivity of PANE-GNN to different hyperparameters, we conduct a comprehensive hyperparameter sensitivity analysis on ML-1M and KuaiRec. We systematically vary the values of key hyperparameters and measure their impact on the model performance in terms \begin{table} \begin{tabular}{c c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Variant**} & \multirow{2}{*}{**Description**} & \multicolumn{3}{c|}{\(K=5\)} & \multicolumn{3}{c|}{\(K=10\)} & \multicolumn{3}{c}{\(K=15\)} \\ & & & \(Precision@K\) & _Recall@K_ & _nDCG@K_ & \(Precision@K\) & _Recall@K_ & _nDCG@K_ & _Precision@K_ & _Recall@K_ & _nDCG@K_ \\ \hline \multirow{6}{*}{**Variant-D**} & A & MP on \(\mathcal{G}_{n}\) & 0.64\(\pm\)0.02 & 0.13\(\pm\)0.04 & 0.68\(\pm\)0.03 & 0.62\(\pm\)0.01 & 0.26\(\pm\)0.01 & 0.67\(\pm\)0.02 & 0.61\(\pm\)0.02 & 0.43\(\pm\)0.01 & 0.70\(\pm\)0.03 \\ & B & MP on \(\mathcal{G}_{p}\) & 31.51\(\pm\)0.15 & 12.11\(\pm\)0.10 & 34.49\(\pm\)0.20 & 26.35\(\pm\)0.22 & 19.23\(\pm\)0.11 & 32.59\(\pm\)0.17 & 23.32\(\pm\)0.20 & 24.47\(\pm\)0.19 & 32.34\(\pm\)0.15 \\ & C & MP on \(\mathcal{G}_{p}\) \& \(\mathcal{G}_{n}\) & 23.65\(\pm\)0.08 & 12.87\(\pm\)0.18 & 35.92\(\pm\)0.15 & 27.49\(\pm\)0.23 & 20.35\(\pm\)0.09 & 34.13\(\pm\)0.11 & 24.17\(\pm\)0.14 & 25.67\(\pm\)0.13 & 33.79\(\pm\)0.13 \\ & D & Variant-C + GCL & 33.46\(\pm\)0.11 & 13.05\(\pm\)0.15 & 36.56\(\pm\)0.21 & 27.77\(\pm\)0.20 & 20.40\(\pm\)0.11 & 34.45\(\pm\)0.09 & 24.36\(\pm\)0.12 & 25.70\(\pm\)0.17 & 34.05\(\pm\)0.04 \\ & PANE-GNN Variant-D + Filter & 33.66\(\pm\)0.14 & 13.26\(\pm\)0.17 & 36.90\(\pm\)0.25 & 27.97\(\pm\)0.14 & 20.50\(\pm\)0.18 & 34.70\(\pm\)0.13 & 24.66\(\pm\)0.22 & 25.95\(\pm\)0.09 & 34.37\(\pm\)0.14 \\ \hline \multirow{6}{*}{**Variant-D**} & A & MP on \(\mathcal{G}_{n}\) & 5.54\(\pm\)0.00 & 5.13\(\pm\)0.01 & 6.54\(\pm\)0.01 & 5.40\(\pm\)0.01 & 10.29\(\pm\)0.00 & 8.09\(\pm\)0.02 & 5.61\(\pm\)0.01 & 15.78\(\pm\)0.01 & 10.08\(\pm\)0.00 \\ & B & MP on \(\mathcal{G}_{p}\) & 24.22\(\pm\)0.10 & 33.25\(\pm\)0.13 & 40.05\(\pm\)0.13 & 17.61\(\pm\)0.08 & 42.19\(\pm\)0.23 & 41.60\(\pm\)0.16 & 14.64\(\pm\)0.15 & 49.36\(\pm\)0.17 & 43.83\(\pm\)0.12 \\ \cline{1-1} & C & MP on \(\mathcal{G}_{p}\) \& \(\mathcal{G}_{n}\) & 24.70\(\pm\)0.14 & 32.90\(\pm\)0.09 & 40.52\(\pm\)0.20 & 17.70\(\pm\)0.22 & 42.67\(\pm\)0.27 & 41.59\(\pm\)0.09 & 14.92\(\pm\)0.14 & 50.12\(\pm\)0.18 & 44.03\(\pm\)0.13 \\ \cline{1-1} & D & Variant-C + GCL & 24.94\(\pm\)0.18 & 34.04\(\pm\)0.22 & 41.21\(\pm\)0.06 & 19.37\(\pm\)0.14 & 45.81\(\pm\)0.04 & 44.10\(\pm\)0.28 & 15.88\(\pm\)0.19 & 53.36\(\pm\)0.06 & 46.28\(\pm\)0.18 \\ \cline{1-1} & PANE-GNN Variant-D + Filter & 25.85\(\pm\)0.10 & 34.61\(\pm\)0.11 & 41.91\(\pm\)0.20 & 19.39\(\pm\)0.17 & 46.03\(\pm\)0.07 & 44.13\(\pm\)0.05 & 16.16\(\pm\)0.16 & 53.47\(\pm\)0.15 & 46.55\(\pm\)0.10 \\ \hline \hline \end{tabular} \end{table} Table 4. Results (%) of ablation studies on ML-1M and KuaiRec. Here “MP”, “GCL”, and “Filter” denote message passing, graph contrastive learning, and the disinterest-score filter, respectively. 
\begin{table} \begin{tabular}{c c c c c c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{\(K=5\)} & \multicolumn{3}{c|}{\(K=10\)} & \multicolumn{3}{c}{\(K=15\)} \\ & & & \(Precision@K\) & _Recall@K_ & _nDCG@K_ & \(Precision@K\)_ & _Recall@K_ & _nDCG@K_ & \(Precision@K\)_ & _Recall@K_ & _nDCG@K_ \\ \hline \multirow{6}{*}{**Variant-D**} & NGCF\({}^{\dagger}\) & 29.73\(\pm\)0.43 & 10.99\(\pm\)0.26 & 32.38\(\pm\)0.45 & 24.77\(\pm\)0.23 & 17.48\(\pm\)0.25 & 30.31\(\pm\)0.33 & 21.74\(\pm\)0.22 & 22.29\(\pm\)0.27 & 29.85\(\pm\)0.29 \\ & LR-GCCF\({}^{\dagger}\) & 30.52\(\pm\)0.33 & 11.40\(\pm\)0.23 & 33.30\(\pm\)0.44 & 25.39\(\pm\)0.27 & 18.02\(\pm\)0.31 & 31.17\(\pm\)0.39 & 22.20\(\pm\)0.25 & 22.92\(\pm\)0.46 & 30.66\(\pm\)0.42 \\ & LightGCN\({}^{\dagger}\) & 32.18\(\pm\)0.22 & 12.06\(\pm\)0.11 & 35.19\(\pm\)0.23 & 26.79\(\pm\)0.13 & 19.09\(\pm\)0.16 & 32.97\(\pm\)0.18 & 23.49\(\pm\)0.16 & 24.32\(\pm\)0.29 & 32.49\(\pm\)0.22 \\ & SGCN\({}^{\dagger}\) & 24.84\(\pm\)0.03 & 9.10\(\pm\)0.17 & 26.83\(\pm\)0.35 & 18.73\(\pm\)0.20 & 14.92\(\pm\)0.26 & 25.47\(\pm\)0.24 & 18.73\(\pm\)0.20 & 19.32\(\pm\)0.37 & 25.30\(\pm\)0.26 \\ & SiReN\({}^{\dagger}\) & 33.28\(\pm\)0.54 & 12.79\(\pm\)0.27 & 36.37\(\pm\)0.55 & 27.74\(\pm\)0.37 & 20.16\(\pm\)0.33 & 34.23\(\pm\)0.47 & 2 of \(Recall@10\). The results are shown in Figure 4 and the following findings were observed: * GNNs layer number \(K\): As shown in Figure 4 (b), we observed that the \(Recall@10\) metric initially increases with an increasing number of GNNs layers on the ML-1M dataset. However, beyond a certain point, the \(Recall@10\) value starts to decrease. This observation aligns with the phenomenon of over-smoothing, where an excessive number of GNNs layers can cause the aggregation of node embeddings to become too similar, resulting in the loss of discriminative information. Additionally, as the number of GNNs layers increases, the computational efficiency of the model may be negatively impacted. Considering both the risk of over-smoothing and computational efficiency, we recommend setting \(K\) as 3 or 4 to ensure good recommendation outcomes while maintaining computational efficiency. * Feedback-aware coefficient \(b\): From the analysis of Figure 4 (d), we observed that \(b=1\) resulted in inferior performance compared to other values of \(b\) on ML-1M. It indicates that discriminating between positive and negative feedback during the optimization process is crucial for achieving better results on ML-1M. The sub-optimal performance of \(b=1\) suggests that the model might not adequately capture the discriminative signals between positive and negative feedback when they are given equal weight. On the KuaiRec dataset, the stability of PANE-GNN's performance and its insensitivity to different values of \(b\) suggest that the dataset's inherent characteristics might diminish the significance of distinguishing between positive and negative feedback. Based on these observations, we recommend setting \(b\) as 2 or 3. * Regularization coefficient \(\lambda_{2}\): As shown in Figure 4 (g), \(\lambda_{2}=0.1\) performs worst compared with others on ML-1M and KuaiRec. Although the L2 regularization term in Eq. (16) can prevent over-fitting, high \(\lambda_{2}\) excessively penalizes the model's parameters, resulting in underfitting. Hence, we suggest selecting \(\lambda_{2}\) from the range of [0.01, 0.05] for PANE-GNN. 
* Others: We found that PANE-GNN demonstrates robustness to various other hyperparameters, including the edge removing ratio \(p\) and the contrastive learning coefficient \(\lambda_{1}\).

#### 4.5.4. Case study (RQ4)

In this subsection, we evaluate the recommendation quality of PANE-GNN by analyzing the tag information of videos in KuaiRec. In Figure 5 (a), we observe that the user has a preference for outdoor sports-related videos based on the tags of liked videos in the training set. Conversely, Figure 5 (b) displays the tags of disliked videos, indicating disinterest in videos related to dressing or clothing. Figure 5 (c) and Figure 5 (d) depict the tags of the recommended videos generated by PANE-GNN before and after the filtering process, respectively. Our observations reveal the following insights: In Figure 5 (c), the recommended videos generated by PANE-GNN generally align with the user's interests depicted in Figure 5 (a), except for a few specific words such as "Wearing" and "Beauty". With the disinterest-score filter (Figure 5 (d)), PANE-GNN successfully filters out less relevant recommendations while suggesting more relevant videos with tags like "Walking", "Outdoors", and "Countryside". These findings emphasize two key points: 1) PANE-GNN effectively captures both user interests and disinterests from the training data, and 2) the disinterest-score filter proves to be an effective approach for generating more relevant recommendation outcomes.

Figure 4. Results of sensitivity analysis on ML-1M and KuaiRec.

Figure 5. Tag clouds of a specific user on the KuaiRec dataset. Each figure presents the tags of the top-10 videos.

## 5. Conclusion and Future Work

In this work, we address the problem of leveraging negative feedback to improve recommender systems. Existing approaches in the literature have focused on GNN-based recommendation models that only consider message passing on the positive graph. To overcome this limitation and capture high-order structural information from both positive and negative graphs, we propose a novel GNN-based recommendation model called PANE-GNN. By aggregating and updating messages on these two graphs, we enable the model to effectively incorporate positive and negative feedback. Additionally, we employ contrastive learning on the negative graph to reduce noise and filter out items with high disinterest scores, ensuring the relevance of the recommended results. Experimental evaluations conducted on four real-world datasets demonstrate that PANE-GNN consistently outperforms state-of-the-art GNN-based recommendation methods. We also conduct an in-depth analysis of PANE-GNN to validate its effectiveness across different components and its robustness to hyperparameters. In the future, we plan to investigate the exposure bias issue in GNN-based recommendation models.
2310.17069
Electric Vehicle Aggregation Review: Benefits and Vulnerabilities of Managing a Growing EV Fleet
Electric vehicles (EVs) are becoming more popular within the United States, making up an increasingly large portion of the US's electricity consumption. Hence, much attention has been directed toward how to manage EVs within the power sector. A well-investigated strategy for managing the increase in electricity demand from EV charging is aggregation, which allows an intermediary to manage electricity flow between EV owners and their utilities. When implemented effectively, EV aggregation provides key benefits to power grids by relieving electrical loads. These benefits stem from aggregation's ability to shift EV loads for peak shaving, which often leads to lower emissions, electricity generation prices, and consumer costs, depending on the penetration levels of non-dispatchable electricity sources. This review seeks to appropriately highlight the broad vulnerabilities of EV aggregation alongside its benefits, namely those regarding battery degradation, rebound peaks, and cybersecurity. The holistic overview of EV aggregation provides comparisons that balance expectations with realistic performance.
Kelsey Nelson, Javad Mohammadi, Yu Chen, Erik Blasch, Alex Aved, David Ferris, Erika Ardiles Cruz, Philip Morrone
2023-10-26T00:18:14Z
http://arxiv.org/abs/2310.17069v2
# Electric Vehicle Aggregation Review: Benefits and Vulnerabilities of Managing a Growing EV Fleet

###### Abstract

Electric vehicles (EVs) are becoming more popular within the United States, making up an increasingly large portion of the US's electricity consumption. Hence, much attention has been directed toward how to manage EVs within the power sector. A well-investigated strategy for managing the increase in electricity demand from EV charging is aggregation, which allows an intermediary to manage electricity flow between EV owners and their utilities. When implemented effectively, EV aggregation provides key benefits to power grids by relieving electrical loads. These benefits stem from aggregation's ability to shift EV loads for peak shaving, which often leads to lower emissions, electricity generation prices, and consumer costs, depending on the penetration levels of non-dispatchable electricity sources. This review seeks to appropriately highlight the broad vulnerabilities of EV aggregation alongside its benefits, namely those regarding battery degradation, rebound peaks, and cybersecurity. The holistic overview of EV aggregation provides comparisons that balance expectations with realistic performance.

## I Background and Motivation

Electric vehicle (EV) aggregation is the management of an EV fleet by an intermediary that coordinates services between EV drivers and their electricity providers [1]. The intermediary's role can take different forms and often relies on the use of smart electric vehicle supply equipment (EVSE), which for this review will refer to EV charging infrastructure that is able to send electricity to the grid via bidirectional charging and/or receive direct communication between aggregators and EV drivers. Such communication allows aggregators to shift loads [2, 3, 4] and in some cases coordinate vehicle-to-grid (V2G) services [5] in order to benefit the grid. Because the EV load is so flexible, aggregation can allow the fleet to charge at times that are conducive to the goals being prioritized, typically (1) to provide peak shaving, (2) to lower a grid's overall emissions, and (3) to lower the price of electricity generation. Despite these advantages, EV aggregation has notable drawbacks that must be considered. Most reviews on the anticipated interactions between EVs and electric grids focus solely on the benefits of EV aggregation. While there are some reviews that do take into account potential pitfalls surrounding EVs at scale, to our knowledge these reviews could be improved through expanded considerations. The authors in [6] investigate important costs of aggregation, but their scope deals almost strictly with quantifiable financial costs, such as the costs to implement technology for physical sensing, communication avenues, enablement costs, and how these costs are passed on to ratepayers. Reference [7] discusses both the positive and negative aspects of EVs on charging systems, but not specifically the implementation of aggregation. Because [7] covers solely EV presence, it does not take into account issues unique to EV aggregation, such as price signaling's ability to cause rebound peaks, the cybersecurity issues that are inherent to the need for communication via EVSE, and V2G's impact on EV battery degradation, though it lists some of these factors as notable for future research.
Lastly, [8] is a review specifically regarding EV aggregation; however, it is primarily meant to inform aggregators and participating utilities of options for modeling methods for optimal bidding strategy and effective control methods for power systems planning. The authors of this study similarly note that future works should consider trade-offs between different objectives such as energy prices and battery degradation. The review in this paper fills a research gap that deals with the multidisciplinary aspects of EV aggregation in order to allow for the informed weighing of both positive and negative externalities of different aggregation strategies. The evaluation considerations emerge from considering a variety of literature spanning different perspectives, such as economics, power systems planning, and policy framework. Fig. 2 provides a visualization of how these perspectives come together to form the scope of this review.

Fig. 1: Aggregators facilitate bi-directional communication and electricity flow between system components

The rest of the paper begins by summarizing the known benefits of EV aggregation, most notably its potential for peak shaving, lowering grid emissions, and lowering electricity prices. Next, it examines the downsides of EV aggregation, which are (1) the increase in battery degradation from the use of bidirectional charging, (2) a rebound peak effect, where once charging is no longer discouraged during the grid's peak demand, an unwanted secondary peak occurs afterward as several EV drivers plug their vehicles in, and (3) the cybersecurity vulnerabilities that the use of smart EVSE opens users and grids up to. Section IV discusses the extent to which these externalities would impact the different stakeholders within the EV space: EV owners, utilities, aggregators, and policy makers.

## II Benefits

This section discusses the proven benefits of EV aggregation, providing evidence from real-world implementation as well as citing studies that test different strategies for maximizing aggregation benefits depending on the goal(s) being prioritized. Key to the discussion is peak shaving, which often inherently leads to lower electricity generation and charging prices, and lower grid emissions [9].

### _Peak Shaving_

One of the most commonly cited benefits of EV aggregation is its ability to shift EV loads to shave down what the grid's peak power would have been in the absence of charging management [10]. There are several peak shaving strategies that have been investigated and found to be effective. Different control algorithms for both unidirectional and bidirectional charging have been thoroughly investigated, such as those that employ model predictive control [11], optimal control [12], and Kalman filtering [13]. More recently, neural networks and predictive models have been trained and tested using previous user data. [14] uses artificial neural networks to provide state of charge (SOC) estimation for aggregators, and [15] uses a neural network trained on 2021 data and tested with 2022 data in order to use day-ahead forecasting to implement rule-based peak shaving control.

#### II-A1 Emissions

Presently, peak shaving often translates directly to a reduction in the carbon intensity of the grid. To serve unusually high levels of demand, peaking power plants must come online, which have exceptionally high carbon intensity [16].
However, many regions of the United States are rapidly shifting their energy mixes by expanding renewable energy source (RES) capacity in order to reach their decarbonization goals [17]. RES balancing means that there will soon be times when RES output may exceed peak demand, making it potentially advantageous in the future to shift the flexible EV load to these times of high demand if the renewable output is also expected to be high [18]. The electrical-temporal shift would prevent the curtailment of RES output, ensuring that as much electricity consumption is from emission-free sources as possible. In [19], the authors investigate strategies to best avoid the curtailment of solar output under a high EV adoption and solar penetration scenario. They find that workplace charging along with accelerated EV adoption will be critical to minimizing solar curtailment, with workplace charging having the ability to reduce peak demand while also cutting RES curtailment by as much as 50%. Similarly, in [20] the authors propose a two-stage peak-shaving strategy using battery energy storage systems (BESSs) in order to create an optimization model with the minimization of wind curtailment as part of its objective function. The problem is solved by a neural network algorithm using data from a real-world case study and is able to reduce wind curtailment by over 40%.

#### II-A2 Electricity Price

Limiting the curtailment of emissions-free, non-dispatchable electricity sources (most commonly wind and solar) often inherently lowers the levelized cost of energy (LCOE) for energy providers, due to the fact that these sources are cheaper on average than their fossil fuel counterparts [21]. Though there is a strong correlation between RES supply and generation price, electricity price is still influenced by other factors, such as the need to quickly ramp generation up and down to meet highly variable demand. Therefore, it is still important to consider both factors independently in EV charging models that seek to achieve these goals. For example, the previously discussed papers [19] and [20] also include cost within their optimization problems' objective functions and find that electricity cost can easily be accounted for while also successfully limiting RES curtailment.

Fig. 2: EV Aggregation benefits and vulnerabilities covered in this review

There is also extensive literature that primarily focuses on the ability of EV aggregation to lower electricity prices without taking emissions into account. For example, [22] uses an optimal hierarchical aggregation algorithm for using V2G for day-ahead charging cost minimization, [23] employs a probability prediction model for estimating driver behavior for a day-ahead charging model, and [24] uses a price-based demand response model in three separate aggregation case studies in order to stabilize the grid via load leveling while minimizing electricity generation and EV charging costs.

## III Vulnerabilities

Though the benefits from EV aggregation are numerous, they are not without negative consequences. As EV adoption rates continue to grow and light duty vehicles (LDVs) become increasingly electrified, awareness of the potential drawbacks and externalities of EV aggregation will be key to making informed decisions in order to maximize its beneficial effects. It is also key to understanding both the public's and utilities' willingness to participate, as they can be directly affected by key vulnerabilities.
The vulnerabilities that will be discussed in this section are battery degradation, a rebound peak effect, and cybersecurity issues.

### _Battery Degradation_

Currently, commercial EVs are manufactured and sold with Lithium-ion battery (LIB) packs, which degrade naturally over their lifetime. However, there is evidence that suggests that aggregation methods can accelerate this process. This directly impacts EV owners by decreasing their battery's range via capacity fade and, in some cases, necessitating a costly battery replacement. Aggregation often leads to increased battery degradation because charging patterns affect how a battery degrades. Charging patterns that lead to lower electricity generation prices are often not the same charging patterns that preserve the battery's longevity [25]. Furthermore, this degradation is compounded when bidirectional charging is implemented [26]. The findings from [27] state that degradation from V2G is negligible if V2G is used fewer than 20 times per year. However, states with extreme weather (e.g., extremely high or low temperatures which increase demand for electricity use), such as Arizona and Texas, have far more than 20 days per year of peaking events [28, 29], meaning that it may not be reasonable to assume that V2G would be used within [27]'s threshold for minimal degradation. In order to quantify this risk in scalable monetary terms, the authors assign an approximate battery degradation cost to the user of $0.38 per 2-hour event using an at-home level 1 charger and $0.82 per 2-hour event using a level 2 charger.

### _Rebound Peak_

Paradoxically, though one of the main benefits of EV aggregation is its potential to limit peak demand, attempts to suppress EV charging during times of high demand can indirectly lead to the creation of a secondary peak later in the day, which is often referred to as a "rebound peak". In [30], the likelihood of oscillatory behavior from charging with both price-based and load-based energy management systems (EMS) is showcased. The authors use real Australian household energy consumption data and weather patterns in a hypothetical microgrid with high PV penetration to obtain their results, making the study especially timely as many countries, including the United States, expand both residential and utility-scale solar [31]. The authors in [32] provide a case study of conventional overnight charging management strategies during the winter season in Quebec. It confirms the anticipated rebound effect in a local EV fleet and presents a forecasting-based framework with historical charging profiles in order to mitigate this effect. The framework was successful in lowering peak-to-off-peak ratios from a baseline case by 21% with a noted "very low" cost of implementation. Similarly, [33] finds that discouraging charging during their study's system peak hours, which occurred in early summer evenings, caused a rebound peak once charging was no longer discouraged.

### _Cybersecurity_

Because smart EVSE must be interconnected as Internet of Things (IoT) devices in order to allow for effective communication between aggregators, utilities, and other participating agents, this connectivity increases exploitable vulnerabilities, raising the risk of cyberattacks such as those involving grid disruption, impersonation, and data breaches [34]. There are many different avenues that attackers may take in order to disrupt grid operations.
In [35], several different scenarios for distributed denial-of-service (DDoS) attacks are modeled at an aggregated level in order to see the end result that coordinated malicious attacks could have on the grid. The authors' simulated results found that using DDoS for a sudden load drop yielded minimal change to the grid's frequency. Additionally, load modulation to target the grid's resonant frequencies was found to have negligible impacts on operations. Furthermore, in simulating oscillations on the distribution system, they found that there is "virtually no risk" of distributed energy resource equipment tripping from such a load modulation event. The authors also note that they simulated an "extreme case" where all smart EVSE in the region are in use simultaneously during a peaking event, and all stations are at the end of the feeder. In [36], load manipulation attacks are simulated but with the introduction of power injections via V2G. Its findings state that even at low EV penetration levels, the attacker could disrupt grid operations even if outages do not occur. Additionally, the authors find that it would be plausible to use EV attacks in order to hide transmission outages by falsifying data in order to manipulate the reported load at both ends of a disconnected line to grid operators. Lastly, [37] finds that at the device level, load modulation is able to produce harmonic distortion of over 20% and reduce the EVSE's power factor to below.8. However, at the grid level, it found that current penetration levels of EVSE are insufficient for attackers to be able to trigger outages. Outside of direct grid and device disruption, [38] models impersonation attacks on advanced, hypothetical GPS-based wireless EV charging networks that use mobile energy disseminators (MEDs). It is shown that an attacker could effectively use GPS spoofing to cut in line in a charging queue, disturbing the optimization of MED routing within an EV network and increasing average travel time [38]. Disclosure attacks, which seek to surpass authorization to obtain data, are difficult to directly model. Important software and tools used in industry to identify weak points for disclosure attacks, however, have been discussed in published literature. For example, [35] discusses STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege). Similarly, the Unified Modeling Language for Secure Systems Development was used in [37]. There have also been well-documented examples of successfully executed disclosure attacks within broad EV systems, such as Tesla's 2023 data breach [39]. Given current penetration levels within the United States for both EVs and smart EVSE, it is highly unlikely that coordinated cyberattacks involving frequency or load manipulation would have substantial, system-wide impacts on the grids that they target. Additionally, many power grid operators still prefer robustness over autonomous decision-making capability. However, disclosure attacks have already been carried out both within the industry and in controlled simulations via eavesdropping [40]. 
Additionally, with EV adoption rates on the rise, large-scale plans to expand the US public charging network already in place [41], and utility incentives for the installation of EV chargers [42] grid impacts will move more to the forefront of cybersecurity concerns within the EV space, meaning that aggregators will need to be careful that their serviced networks are following cybersecurity compliance guidelines using robust hardware and defense strategies [43]. ## IV Discussion ### _Stakeholder Trade-offs_ Each of the discussed benefits and vulnerabilities will be differently prioritized by stakeholders within the EV aggregation space. The key identified stakeholders that have been mentioned so far within this paper are EV owners, utilities, aggregators or fleet managers, and policymakers. This section will discuss their likely concern levels with the benefits and vulnerabilities of aggregation that are covered in this paper, as summarized by Fig. 3. The concern levels are determined by considering the likelihood of the stakeholder experiencing the consequences of a given externality, the severity of such consequences, their awareness of the externality, and the availability of preventative measures. #### Iv-A1 EV Owners EV owners also serve the role of electricity consumers within power systems, who are known to be largely motivated by price [44]. Though the US consumer base reports high levels of concern regarding sustainability when it comes to purchasing decisions [45], many find the price to be an inhibiting factor when it comes to prioritizing sustainability [46]. Given the prevailing preference for low prices, electricity price is categorized as a high concern with emissions only being categorized as moderate. Battery degradation is a direct and proven cost incurred by EV owners who participate in aggregation, so in a similar manner to electricity price, it is also expected to be of high concern. Regarding rebound peak, it is assumed that the average EV owner is not aware of how price signaling can cause oscillatory charging behavior that can lead to an unintentional rebound peak, making it a low concern for these stakeholders. Lastly, while the types of cyberattacks that would impact EV owners (disclosure, DDoS) are currently isolated to very rare occurrences, the issue has been gaining more attention lately in mainstream media [47, 39, 48], so it was placed in a category of low to moderate concern. #### Iv-A2 Utilities In the US, the largest portion of the average utility's customer base is residential/individual consumers, closely tying their concerns as stakeholders with those of the EV owners they service [49]. As most of the utility market is served by for-profit companies [50], however, they will largely inherently prioritize maintaining competitive electricity rates, keeping electricity cost as a high concern [51]. While utility concern for emissions has historically been low, differences in regional customer base ideology and state incentives may cause some utilities to heighten their concern with emissions, so it was categorized as being of low to moderate level. Battery degradation is something that aggregators will need to worry about incorporating into their aggregation cost models when trying to maximize EV owner participation, making it of low concern for utilities. 
Rebound peaks, however, would prove costly for utilities and greatly affect the cost of electricity, meaning they are of high concern and would be a potential deterrent for utilities to partner with EV aggregators. Lastly, though no reported cyberattacks have occurred against US grids via EVSE, such an attack would have more dramatic impacts on grid-wide operations than at the EV owner level. Therefore, it is a moderate concern for utilities to implement the available cybersecurity measures in order to prevent attacks from interfering with grid operations. #### Iv-A3 Aggregators As aggregators seek to maximize profit through trading energy, electricity price is of high concern with emissions ranging from low to moderate concern in order to account for potential monetary incentives for carbon-free electricity generation as discussed in IV-A2. Fig. 3: Stakeholders within the EV aggregation space and their anticipated concern levels with the benefits and vulnerabilities of EV aggregation As discussed previously in Section III, battery degradation for EV owners can be accelerated by excessive use of V2G or certain charging schedule alterations. Because it is in aggregators' best interest to have high participation levels from EV owners [52], it should be a noteworthy concern for aggregators to create appropriate cost models that take this into account. Aggregators should also remain highly interested in avoiding rebound peaks in order to not deter utilities from cooperating with them. They should also be mindful of the potentially drastic consequences that could occur at the grid level from a scaled cyber attack. However, such an attack as noted previously would be unlikely so there will be only moderate concern with maintaining robust security measures. #### Iv-A4 Policy Makers Emissions should be of high priority for policymakers if they wish to influence the power sector to align more with government climate initiatives [53], which if left to its own devices, will trend towards the cheapest options regardless of environmental externalities [54]. Electricity price is still listed as a moderate priority, as effective climate policy should attempt to not drastically increase electricity prices in order to be well received and sustainable [55]. Battery degradation is not easily or realistically controlled by policy, so it is of low concern for policymakers. Rebound peak is also difficult to control directly by policy, however excessive peaking events would require increased spending to maintain power systems infrastructure which may require public funding, so rebound peak is categorized as being of low to moderate concern for policymakers [56]. Lastly, though cyberattacks are currently scarce in the EV space, cyberattacks as a whole are becoming more common globally [57]. In light of this, the US is becoming increasingly concerned with cybersecurity policy [58]. Thus, it is categorized as moderate to high due to its increasing importance to policymakers. ### _Conclusions_ EV aggregation has been gaining substantial attention and praise for its proven grid benefits regarding peak shaving, emission reductions, and electricity costs. Despite the benefits of effective EV aggregation, it's critical for many stakeholders to maintain awareness of the externalities that are inherent to the coordination of such a highly variable and widely distributed energy resource. 
Aggregation vulnerabilities, such as battery degradation, the creation of unintentional rebound peaks, and cybersecurity issues have the potential to offset some of the positive aspects of large-scale coordination of EV fleets. EV owners, utilities, aggregators, and policymakers are all uniquely impacted by such vulnerabilities, and understanding how different stakeholders will be concerned with different externalities will be crucial to their cooperation. These issues are also very likely to evolve going forward as new battery technologies emerge, electricity mixes change, and EV penetration rises, meaning that it will be paramount to maintain awareness of EV aggregation's ever-changing vulnerabilities in order to maximize the benefits of such a large, flexible, and mobile ancillary resource.
2302.00704
Pathologies of Predictive Diversity in Deep Ensembles
Classic results establish that encouraging predictive diversity improves performance in ensembles of low-capacity models, e.g. through bagging or boosting. Here we demonstrate that these intuitions do not apply to high-capacity neural network ensembles (deep ensembles), and in fact the opposite is often true. In a large scale study of nearly 600 neural network classification ensembles, we examine a variety of interventions that trade off component model performance for predictive diversity. While such interventions can improve the performance of small neural network ensembles (in line with standard intuitions), they harm the performance of the large neural network ensembles most often used in practice. Surprisingly, we also find that discouraging predictive diversity is often benign in large-network ensembles, fully inverting standard intuitions. Even when diversity-promoting interventions do not sacrifice component model performance (e.g. using heterogeneous architectures and training paradigms), we observe an opportunity cost associated with pursuing increased predictive diversity. Examining over 1000 ensembles, we observe that the performance benefits of diverse architectures/training procedures are easily dwarfed by the benefits of simply using higher-capacity models, despite the fact that such higher capacity models often yield significantly less predictive diversity. Overall, our findings demonstrate that standard intuitions around predictive diversity, originally developed for low-capacity ensembles, do not directly apply to modern high-capacity deep ensembles. This work clarifies fundamental challenges to the goal of improving deep ensembles by making them more diverse, while suggesting an alternative path: simply forming ensembles from ever more powerful (and less diverse) component models.
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, John P. Cunningham
2023-02-01T19:01:18Z
http://arxiv.org/abs/2302.00704v3
# Pathologies of Predictive Diversity in Deep Ensembles ###### Abstract Classical results establish that ensembles of small models benefit when predictive diversity is encouraged, through bagging, boosting, and similar. Here we demonstrate that this intuition does not carry over to ensembles of deep neural networks used for classification, and--in fact--the opposite can be true. Unlike regression models or small (unconfident) classifiers, predictions from large (confident) neural networks concentrate in vertices of the probability simplex. Thus, decorrelating these points necessarily moves the ensemble prediction away from vertices, harming confidence and moving points across decision boundaries. Through large scale experiments, we demonstrate that diversity-encouraging regularizers hurt the performance of high-capacity deep ensembles used for classification. Even more surprisingly, discouraging predictive diversity can be beneficial. Together this work strongly suggests that the best strategy for deep ensembles is utilizing more accurate, but likely less diverse, component models. Machine Learning, ICML ## 1 Introduction Successful ensembles rely on component models that make different predictive errors (Dietterich, 2000). This idea can be formalized by factorizing ensemble performance into the average performance of its constituent models and the _predictive diversity_ across models (Sec. 2, see also Abe et al. (2022)). Fig. 1 (left) shows that achieving better ensemble performance (lower diagonal level sets) requires either increasing diversity (moving rightward) or improving single model performance (down). Historically, increasing predictive diversity has proven to be the best strategy. Boosting (Freund, 1995), bagging (Breiman, 1996), and random forests (Breiman, 2001) all yield powerful ensembles of decision trees, even though the individual component trees obtain worse performance in isolation: Fig. 1 (center left) shows this phenomenon via ensemble performance decomposition. Increasing decision tree depth (depicted by marker size) yields better ensembles (lower level sets), as increases in predictive diversity (movement rightwards) more than offset declines in individual tree performance (upwards). The best random forest has the _most_ predictive diversity, even though it has the worst-performing component models. Recently, deep ensembles--or ensembles of large-capacity neural networks--have become ubiquitous across large-scale classification tasks (e.g. Szegedy et al., 2015; Beluch et al., 2018; Tran et al., 2022). While it is often assumed that the intuitions from tree-based ensembles carry over to this setting (e.g. Mishtal and Arel, 2012; Fort et al., 2019; Ross et al., 2020; Pagliardini et al., 2022), we find that--surprisingly--deep ensemble performance is _anticorrelated_ with predictive diversity. We demonstrate this phenomenon in Fig. 1 (right), which depicts the ensemble performance decomposition for various deep ensembles trained on CIFAR10 and ImageNet (details in Appx. B). The best _deep_ ensembles have models with low error (furthest down) at the cost of predictive diversity (most left). In stark contrast to classical ensembles, the best deep ensembles in Fig. 1 exhibit the least predictive diversity. Lower predictive diversity is not necessarily troubling. For example, Abe et al. (2022) demonstrate that purported benefits of deep ensemble diversity--namely uncertainty quantification and robustness--can be replicated by single models in isolation. 
Nevertheless, it is unclear whether explicitly encouraging diversity during training will improve deep ensembles, or whether anticorrelation between ensemble diversity and performance is intrinsic to large overparameterized models. Is the success of deep ensembles _in spite of_ or _because of_ their lack of predictive diversity? Here we address this question through a large scale empirical study of nearly 600 ensembles, using interventions that directly encourage or discourage diversity during training. Previous studies of deep ensemble diversity (e.g. Nixon et al., 2020; Webb et al., 2020; Ortega et al., 2022) have focused on diversity-encouraging interventions applied to one or two architectures. We build on these prior works by: (i) considering both high- and low-capacity models, (ii) considering different tasks (classification versus regression), and (iii) considering four training interventions, used for both diversity-encouragement and diversity-discouragement. Counterintuitively, we find that encouraging predictive diversity--while beneficial for regression and low-capacity classification--is detrimental for high-capacity classification ensembles (the most common deep ensembles). Even more surprisingly, _discouraging_ predictive diversity can be beneficial. Our subsequent analysis attributes these phenomena to two observations about high-capacity classification models: (i) encouraging diversity affects all datapoints (not only errors) and (ii) a majority of predictions lie in the vertices of the probability simplex, where increased diversity necessarily degrades predictions. Altogether, our results suggest that deep ensembles perform well _because of their lack of predictive diversity_, and improving individual models (likely at the further cost of predictive diversity) is the best strategy towards further improvements. ## 2 Preliminaries **Problem Setting.** We primarily consider multiclass classification problems with inputs \(\mathbf{x}\in\mathbb{R}^{D}\) and targets \(y\in[1,\ldots,C]\), which is arguably the canonical task for deep neural networks (and thus deep ensembles) (Hui and Belkin, 2020; Stewart et al., 2022). A standard _deep ensemble_ consists of \(M\) models \(\mathbf{f}_{1}(\cdot),\ldots,\mathbf{f}_{M}(\cdot)\), where each \(\mathbf{f}_{i}\) maps input \(\mathbf{x}\) to the \(C\)-dimensional probability simplex. The _ensemble prediction_\(\bar{\mathbf{f}}(\mathbf{x})\) is the arithmetic mean of the component network outputs: \[\bar{\mathbf{f}}(\mathbf{x})\triangleq\tfrac{1}{M}\sum_{i=1}^{M}\mathbf{f}_{i}(\mathbf{x}). \tag{1}\] Typically, the component models are instantiations of the same neural network architecture. They are usually trained independently on the same dataset, which is equivalent to minimizing the following objective: \[\mathcal{L}_{\text{ens}}\triangleq\tfrac{1}{M}\sum_{i=1}^{M}\mathbb{E}_{p(\bm {x},y)}\left[\ell\left(\mathbf{f}_{i}(\mathbf{x}),y\right)\right], \tag{2}\] where \(\ell\) is the loss function for an individual model. Unless otherwise stated, we assume high capacity component models, which are more likely to yield highly confident predictions that concentrate on the vertices of the probability simplex (Guo et al., 2017). **Measuring predictive diversity via the Jensen gap.** Assume that we measure predictive performance with a strictly convex loss \(\ell\) (e.g. cross entropy, mean squared error, etc.). As a consequence of Jensen's inequality, the simple ensembling approach in Eq. 
(1) is guaranteed to improve upon the average performance of the component models: \[\overbrace{\underbrace{\frac{1}{M}\sum_{i=1}^{M}\ell\left(\mathbf{f}_{i}(\mathbf{x}),y\right)}_{\text{avg. single model loss}}-\underbrace{\ell\left(\bar{\mathbf{f}}(\mathbf{x}),y\right)}_{\text{ens. loss}}}^{\text{Jensen gap}}\geq 0. \tag{3}\] We refer to the difference in Eq. (3) as the _Jensen gap_. By strict convexity, the Jensen gap will equal \(0\) if and only if all component models make the same predictions. Conversely, the Jensen gap will increase as the \(\mathbf{f}_{i}(\mathbf{x})\) grow more dissimilar. Thus, throughout this paper we will use the Jensen gap as a measure of predictive diversity. (In Appx. D, we verify that the Jensen gap is highly correlated with other metrics Figure 1: **Deep Ensembles and tree ensembles improve performance through different strategies**. Ensemble performance measured as negative log likelihood (NLL) is decomposed into average single model NLL (y-axis) and predictive diversity (x-axis). **Left**) In isolation, improving predictive diversity (right) or average single model performance (down) will improve ensemble performance. Different strategies achieve the same level set of ensemble loss (diagonal lines): lower right is better. **Middle Left**) Random forests of larger trees (larger markers) have increasing diversity. The best performing ensemble (dotted line) has the most predictive diversity, even at the cost of single model performance. For deep ensembles **(Middle Right and Right)**, best ensemble performance is obtained by ensembles with the lowest average single model loss. Unlike random forests, the best deep ensembles have the _least_ diversity, whether component models are of the same (blue) or different (orange) architectures. of predictive diversity used in previous works). **A decomposition of the ensemble loss.** Quantifying diversity via the Jensen gap yields an interpretable decomposition of the ensemble loss. Tautologically, we have that: \[\begin{split}\underbrace{\ell\left(\bar{\mathbf{f}}(\mathbf{x}),y\right)}_{\text{ens. loss}}&=\underbrace{\frac{1}{M}\sum_{i=1}^{M}\ell\left(\mathbf{f}_{i}(\mathbf{x}),y\right)}_{\text{avg. single model loss}}\\ &\quad-\underbrace{\left[\frac{1}{M}\sum_{i=1}^{M}\ell\left(\mathbf{f}_{i}(\mathbf{x}),y\right)-\ell\left(\bar{\mathbf{f}}(\mathbf{x}),y\right)\right]}_{\text{Jensen gap (pred. diversity)}}.\end{split} \tag{4}\] Thus, we see from Eq. (4) that ensemble performance is precisely a trade-off between (1) the average performance of its component models and (2) the predictive diversity amongst models. We will use this decomposition throughout this paper to analyze the performance of ensembles (e.g. Fig. 1). ## 3 Rethinking Diversity For Deep Ensembles From Fig. 1 we see that predictive diversity plays a less significant role for deep ensembles (specifically: those formed from large classification models) than for other ensembles (e.g. formed from decision trees). What would happen if we were to encourage additional predictive diversity by modifying the ensemble training objective in Eq. (2)? Though a direct intervention may be beneficial in certain ensembling scenarios (e.g. 
Mishtal and Arel, 2012; Webb et al., 2020; Ortega et al., 2022), we do not expect the same benefits for the deep ensembles we are concerned with. This hypothesis is based on two observations about large classification models--which we empirically confirm in Sec. 5: **Observation 1: Encouraging diversity during training affects all predictions, not only errors.** In an ideal world, it would be possible to increase the predictive diversity around erroneous model predictions (i.e. decorrelating errors) while leaving otherwise correct predictions untouched. However, it is highly unlikely that any training intervention could selectively increase predictive diversity in such a way. Recent evidence suggests that both correct and erroneous predictions made by modern neural networks are largely driven by robust inductive biases. Feldman and Zhang (2020); Feldman (2020) demonstrate the role of data memorization in the performance of large scale neural networks, and Abe et al. (2022) show that the predictions of a large neural network is highly correlated with an ensemble of smaller networks with similar performance. Strikingly, Mania et al. (2019); Gairhos et al. (2020) demonstrate that neural network classifiers make correlated errors, even across different architectures. When errors are largely driven by inductive bias, diversity encouragement should be just as likely to decorrelate accurate and confident predictions (which already constitute a much larger portion of the test set) as often as they do errors. Indeed, we empirically demonstrate this finding in Sec. 5. **Observation 2: In constrained prediction spaces, diversity is impossible at extreme points.** With Observation 1 in mind, we demonstrate why increasing diversity for otherwise accurate predictions is potentially harmful. We make use of a geometrical argument that is depicted in Fig. 2 and empirically confirmed in Sec. 5. Consider a datapoint that is classified correctly by all component models. Since large classification neural networks tend to be extremely confident (Guo et al., 2017), it is likely that the component model predictions will all concentrate in the same vertex of the probability simplex (see Fig. 2, "Standard Ensembling"). In this case, it is impossible to increase predictive diversity without modifying the average prediction. Increasing diversity would necessarily require at least one component model to become less confident, which could not be counteracted by other models since their predictions already lie on an extreme point of the simplex. As a consequence, encouraging diversity would necessarily cause the average prediction to become less confident, and may potentially move across the decision boundary (see Fig. 2, "Diversity Encouraged"). We note that this is not an issue for regression--since predictions are unconstrained--nor for small-model classification--since predictions are less confident and thus lie in a portion of the simplex that can be locally approximated by an unbounded space. **Discouraging predictive diversity.** Given intuitions from the existing ensemble literature, it would be natural to assume that discouraging predictive diversity should harm Figure 2: **Intuition: for classification tasks, encouraging diversity during training worsens confident/accurate predictions.** When training ensembles with probabilistic predictions (i.e. with a softmax cross entropy loss), outputs of individual ensemble members (\(\times\)) must lie in a simplex. 
The ensemble prediction (circle) is the average of these predictions. If ensemble outputs are already highly confident and accurate (top panel), diversity encouragement (left panel) necessarily degrades the confidence of the ensemble prediction, simply due to the geometry of the simplex. Further diversity encouragement can result in errors. In contrast, diversity discouragement (right panel) need not hurt ensemble predictions. performance. Taken to an extreme, discouraging diversity would result in all component models producing the same prediction, removing any benefits of ensembling. However, our understanding from the previous observations paints a different picture. If many of the errors made by ensemble members are already highly correlated (as suggested by Observation 1), we would expect few performance deficits from further correlating these errors. Moreover, there is reason to expect that discouraging diversity could potentially be beneficial. Recent theoretical evidence suggests that increasing the capacity of overparameterized models amounts to variance reduction (Neal et al., 2018; Adlam and Pennington, 2020)--a result which is further supported empirically by Fig. 1. Discouraging diversity through the use of a regularizer is an alternative path towards variance reduction that could mimic the effects of larger models. ## 4 Experiments To test our hypotheses, we perform large-scale experiments measuring to what extent _encouraging versus discouraging_ predictive diversity affects deep ensemble performance. In particular, we aim to compare and contrast the behavior of large-model classification deep ensembles (for which we expect diversity to be harmful) versus regression and small-model deep ensembles (for which we expect diversity to be helpful). In total, we train 574 ensembles across two datasets and six architectures, using four regularizers that encourage/discourage ensemble diversity. **Methods.** We consider four methods to encourage/discourage predictive diversity. Each can be formulated as a regularized version of standard ensemble training (Eq. 2): \[\mathbb{E}_{p(\mathbf{x},y)}\left[\frac{1}{M}\sum_{i=1}^{M}\ell(\mathbf{f}_{i}(\mathbf{x}),y)\right]+\gamma\,\mathbb{E}_{p(\mathbf{x},y)}\left[\mathcal{D}\big(\mathbf{f}_{1}(\mathbf{x}),\ldots,\mathbf{f}_{M}(\mathbf{x}),y\big)\right], \tag{5}\] where \(\mathcal{D}\) measures predictive diversity and is instantiated by one of four regularizers (Jensen Gap, Variance, JSD 1 vs. All, or JSD Average), and \(\gamma\) controls the strength and sign of the regularization: \(\gamma>0\) penalizes (discourages) predictive diversity, while \(\gamma<0\) rewards (encourages) it. We train ensembles on CIFAR10 and CIFAR100 with architectures ranging from LeNet (LeCun et al., 1998) to ResNet, VGG, WideResNet, and DenseNet. See Appx. B for training and implementation details. **Classes of ensembles.** We contrast the behavior of _(high-capacity classification) deep ensembles_ with regression deep ensembles and low-capacity deep ensembles. 
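To make the decomposition in Eq. (4) and the regularized objective in Eq. (5) concrete, the following NumPy sketch computes the average single-model cross-entropy, the ensemble cross-entropy, and the Jensen gap for a toy batch of softmax outputs, and combines them into a diversity-regularized loss. The array shapes, the random toy predictions, and the function names are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

def nll(probs, y):
    # per-example cross-entropy of probabilistic predictions probs (N, C) against labels y (N,)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

def loss_decomposition(member_probs, y):
    """member_probs: (M, N, C) softmax outputs of the M ensemble members."""
    avg_single = np.stack([nll(p, y) for p in member_probs]).mean(axis=0)  # avg. single model loss
    ens_loss = nll(member_probs.mean(axis=0), y)                           # loss of the Eq. (1) average
    jensen_gap = avg_single - ens_loss                                     # Eq. (3), >= 0 pointwise
    return avg_single.mean(), ens_loss.mean(), jensen_gap.mean()

def regularized_loss(member_probs, y, gamma):
    # Sketch of Eq. (5) with the Jensen Gap regularizer:
    # gamma > 0 penalizes (discourages) diversity, gamma < 0 rewards (encourages) it.
    avg_single, _, gap = loss_decomposition(member_probs, y)
    return avg_single + gamma * gap

# toy example (illustrative shapes and values): M=3 members, N=5 examples, C=10 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5, 10))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
y = rng.integers(0, 10, size=5)
print(loss_decomposition(probs, y))
print(regularized_loss(probs, y, gamma=0.5))
```

Because \(-\log\) is strictly convex, the Jensen gap computed this way is non-negative for every datapoint, matching Eq. (3).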
_Ensembles of regression neural networks._ We apply regression ensembles to CIFAR10/CIFAR100 by framing classification as regression (Hui and Belkin, 2020; Demirkaya et al., 2020). In this setting the model outputs are not required to add up to 1 (i.e. the prediction space is unconstrained), so Observation 2 does not apply to these ensembles. Consequently, we hypothesize that diversity encouragement will not harm performance. For these models, \(\ell\) is mean squared error and we study only the Jensen Gap regularizer, which corresponds to regularizing variance (see Appx. A). In this setting, the other regularizers above become redundant with Jensen Gap, or do not apply, as they are specific to classification settings. _Ensembles of low-capacity classification neural networks._ Finally, we study ensembles of small classification models. To the best of our knowledge, low-capacity models are the only setting where previous literature has demonstrated the benefits of diversity encouragement (Mishtal and Arel, 2012; Webb et al., 2020; Ortega et al., 2022). These models make predictions with lower confidence than modern deep ensembles (Guo et al., 2017), so we hypothesize that most predictions will not approach the boundaries of the prediction space (see Observation 2), thus avoiding the pathological behavior we hypothesize for large-model classification ensembles. For these ensembles--as well as the high-capacity ensembles--\(\ell\) is the cross entropy loss. ### Diversity Regularization Results Across different architectures, datasets, and diversity regularizations, we find large classification ensembles exhibit drastically different behavior from other ensemble types. **Encouraging diversity harms test performance in large scale models.** In Fig. 3, we illustrate the performance of Figure 4: **Measuring predictive diversity in diversity regularized ensembles. Each marker represents a ResNet18 ensemble from Fig. 3 trained on CIFAR10 (see Appx. C for CIFAR100). Warmer colors correspond to positive \(\gamma\) values (discouraging predictive diversity), while the cooler colors correspond to negative \(\gamma\) values (encouraging predictive diversity). The level set of standard deep ensemble performance (\(\gamma=0\)) is denoted by \(\times\) and the dotted diagonal line. Ensembles where diversity was encouraged (\(\gamma<0\); blue markers) express more predictive diversity than standard ensembles, but at the cost of average single model performance. Corresponding ensemble performance is _worse_ (above dotted line) than standard ensembling, as measured by NLL. In contrast, discouraging predictive diversity (\(\gamma>0\); red markers) generates ensembles that can outperform standard training (below the dotted line). The ensemble with the best test NLL is one where diversity was discouraged.** Figure 3: **Ensemble performance improves as we penalize predictive diversity. We train diversity regularized deep ensembles using Eq. (5) on CIFAR10. Each column corresponds to a different regularizer in Sec. 2 applied to ResNet18 ensembles (for other architectures and CIFAR100, see Appx. C). We compare normal ensemble training loss (dotted line) to regularization that encourages predictive diversity (left of dotted line). For all regularizers we use to control predictive diversity (Jensen Gap, Variance, JSD 1 vs. 
All, JSD Average), test accuracy suffers with increasing diversity encouragement, dropping out of the range of standard ensemble performance (blue bands), and sometimes below that of single model performance (black bands), whereas diversity discouragement can be benign.** diversity regularized ensembles of ResNet18 networks for CIFAR10. (See Appx. C for CIFAR100, as well as VGG, WideResNet, and DenseNet ensembles.) In Fig. 3, the blue line and bands depict standard (unregularized) ensemble performance (mean \(\pm\) two standard error). The black line and bands depict the error of single models. Against these baselines, each panel of Fig. 3 shows the performance of individual ensembles trained with different regularizers and values of \(\gamma\). Across all diversity regularizers and datasets, ensembles with a downweighted diversity penalty (\(\gamma<0\)) have _much worse performance_ than the ensemble trained with unmodified objective (\(\gamma=0\)). In some cases, the performance undershoots that of a single network. In Fig. 4, we decompose the performance of these ensembles into predictive diversity (x axis), and average single model performance (y axis) (see Eq. 4). Relative to standard ensemble training (indicated with \(\times\)), we see that ensembles trained with diversity-encouragement (blue dots) indeed express more predictive diversity. However, we see that this increase comes at the cost of single model performance, mirroring the anticorrelation between predictive diversity and single model performance observed in Fig. 1. The overall effect of encouraging predictive diversity is higher values of the ensemble loss (level sets further up and to the left). Discouraging predictive diversity can be beneficial to predictive performance.In contrast, Fig. 3 shows that further penalizing diversity (i.e. \(\gamma>0\) in the Jensen Gap plots) also behaves unexpectedly. (Again, see Appx. C for CIFAR100 results, as well as VGG, Wide ResNet, and DenseNet ensembles.) In most cases diversity discouragement leaves predictive performance intact, across the range of studied regularization strength. Overall in \(56\%\) of the 149 experiments where diversity was discouraged, performance remains intact, while it decreases in \(28\%\), and even improves for \(20\%\) of the ensembles (Table 2). In Fig. 4, we once again use the decomposition offered by Eq. (4) to analyze the effect of diversity-discouragement. We note that in many cases, discouraging diversity leads to ensembles with improved average single model performance. In all cases, the best performing ensembles (lowest level set) are trained with diversity discouragement. Encouraging diversity does not hurt regression ensembles.Fig. 5 demonstrates that diversity encouragement exhibits markedly different trends for regression ensembles. We plot the predictive performance of a WideResNet28-10 ensemble (trained with the MSE loss rather than cross-entropy) trained with diversity encouragement/discouragement. (See Appx. C for DenseNet ensembles.) Unlike in Fig. 3, the majority of ensembles trained with diversity encouragement (12/16) improve upon baseline performance, with improvements proportional to regularization strength. In contrast, the effect of discouraging predictive diversity appears to be dataset specific, leading to uniformly worse performance on CIFAR10 data, but leading to consistent improvements on CIFAR100, as in the case with large-model classification ensembles. Encouraging diversity does not hurt low capacity models.Finally, Fig. 
6 demonstrates that low-capacity classification ensembles exhibit the _opposite_ behavior of their large-capacity counterparts. We repeat the experiment in Fig. 3 for the smaller ResNet 8 architecture on CIFAR10. (See Appx. C for CIFAR100 and LeNet ensembles.) Relative to the standard ensemble baseline (blue band), each regularizer has at least one range of values where encouraging diversity leads to performance improvements ensembles, and discouraging diversity appears to yield consistently worse performance. In total, diversity encouragement improves the performance of \(73\%\) of 86 low capacity ensembles. Here we note that the distinction between low-capacity and high-capacity models also appears to govern the trade-off between predictive diversity and average single model performance in models trained without diversity regularization. Returning to Fig. 1 we see that the anticorrelation between average single model performance and diversity only appears when we consider models that achieve a certain level of performance, and are thus presumably of large capacity. Tables 1 and 2 in Appx. C summarize our results. Additional results.In Appx. D we apply the ensembles from Fig. 3 to CIFAR10.1 (Recht et al., 2018), an out of distribution (OOD) test dataset for CIFAR10. Our results show that encouraging diversity during training poses no benefits over standard deep ensembles. In Appx. D.2, we examine the effects of diversity encouragement/discouragement on ensembles trained with a label smoothing loss (Szegedy et al., 2016). The targets for label smoothing are located away from the probability simplex vertices, which makes it possible to maintain accuracy while Figure 5: **With MSE loss, encouraging predictive diversity does not hurt performance.** We also train deep regression ensembles using Eq. (2) with to MSE loss, on CIFAR10 and CIFAR100 (coarse tables). As predicted by our hypothesis, encouraging diversity does not hurt predictive performance in regression. achieving some level of predictive diversity. Our results suggest label smoothing could ameliorate the harmful effects of diversity encouragement, but it does not change the overall trend in Fig. 3. ## 5 Analysis The results in the previous section empirically demonstrate that encouraging predictive diversity--while beneficial for regression and small-model classification--is detrimental for large-model classification. In Sec. 3, we presented two observations about large-scale classification models that could generate this discrepancy. In this section, we offer empirical evidence to confirm these observations. **Observation 1: Diversity encouragement impacts all ensemble predictions, not only the incorrect ones.** Ideally, diversity-encouraging regularizers would only decorrelate what would otherwise be erroneous predictions, without modifying correct predictions. However, as suggested in Sec. 3, it is more likely that diversity regularizers affect all predictions--whether erroneous or correct. To test this hypothesis for a given ensemble, we bin all datapoints based on their predictive diversity, and then we consider the _counterfactual ensemble accuracy_ of datapoints in each bin--i.e. whether a given ensemble with any amount of regularization would have made the correct prediction absent that regularization. We perform this analysis for ResNet-18 CIFAR10 ensembles trained with diversity encouragement. Fig. 
7 plots histograms of each diversity bin (top row) as well as the counterfactual accuracy of each diversity bin (bottom row), where all plots are smoothed with a Gaussian kernel. From these plots we can observe numerous trends that hold across all regularizers. Without any diversity-encouraging regularization (black curves, \(\gamma=0\)) a majority of predictions incur little diversity (CE Jensen Gap \(<0.1\)), and counterfactual predictive accuracy decays slowly as a function of (per-datapoint) predictive diversity. If diversity encouragement only decorrelated errors, we would expect this relationship to sharpen: _higher_ values of counterfactual ensemble accuracy at low values of predictive diversity, and _lower_ values on data that expresses more diversity. Instead, as we apply diversity-encouraging regularizers (\(\gamma<0\)), we observe that a _greater_ portion of diverse predictions are counterfactually correct, implying that most datapoints with increased diversity were classified correctly to begin with. As a dramatic example, with a \(\gamma\!=\!-9\) variance-regularized ensemble, most predictions yield a CE Jensen Gap of \(\approx 0.5\). However, \(95\%\) of the data with this amount predictive diversity would have been correctly classified without diversity interventions. Indeed, for all regularized ensembles, the counterfactual accuracy around diversity mode is always around \(94\%\), the overall (unconditional) accuracy of the unregularized ensemble. These results demonstrate that the training interventions we study indiscriminately increase predictive diversity of all datapoints, regardless of whether these predictions would have have otherwise been correct. **Observation 2: In constrained prediction spaces, diversity is impossible at extreme points.** Finally, we demonstrate that--for nearly perfect predictions--increasing diversity yields worse ensemble performance on a per-datapoint level. In Fig. 7 (right), we plot how a single perfect ensemble prediction would respond to increased diversity across component models. We consider three potential mechanisms that could result in increased diversity: a geometric scaling of predictions, drawing predictions from a simple distribution, or adding noise to the underlying set of prediction logits (see Appx. F for details). As predicted in Sec. 3, all mechanisms result in a worse ensemble prediction (higher diagonal level sets) as the diversity across models is increased. We also note that Fig. 7 (right) also offers a plausible explanation for the anticorrelation between predictive diversity and ensemble performance in Fig. 1: better individual models (which yield better ensembles) are more likely to make predictions in the extreme points of the simplex (Guo et al., 2017). Figure 6: **Diversity encouragement helps performance for low-capacity models.** Low capacity ensembles are trained on CIFAR10 using the loss in Eq. (5) (for CIFAR100 and another architecture, see Appx. C). For low-capacity models, diversity encouragement with the same regularizers and datasets shown in Fig. 3 can produce improved performance, and discouraging diversity hurts performance. This result is consistent with our hypothesis that the effect of diversity encouragement depends on the magnitude of ensemble predictions (and thus the confidence of ensemble predictions). **Taken together,** these observations suggest why predictive diversity is only harmful for large-capacity classification ensembles. 
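The geometry behind Observation 2 can also be checked numerically. In the sketch below, all members start at the simplex vertex of the correct class, and we mix in an increasing amount of random Dirichlet noise, loosely mimicking the "sampling from a Dirichlet distribution" mechanism in Fig. 7 (right); the perturbation scheme and constants are illustrative assumptions. As the spread grows, predictive diversity (the Jensen gap) increases, but the ensemble NLL on this single datapoint necessarily increases as well.

```python
import numpy as np

rng = np.random.default_rng(1)
C, M, y = 4, 5, 0                                  # 4 classes, 5 ensemble members, true class 0
vertex = np.eye(C)[y]                              # a perfectly confident, correct prediction

def ensemble_nll(member_probs):
    return -np.log(member_probs.mean(axis=0)[y] + 1e-12)

def jensen_gap(member_probs):
    avg_single = -np.log(member_probs[:, y] + 1e-12).mean()
    return avg_single - ensemble_nll(member_probs)

for spread in [0.0, 0.1, 0.3, 0.6]:
    # illustrative perturbation: each member is a convex mix of the vertex and a random simplex point
    noise = rng.dirichlet(np.ones(C), size=M)
    members = (1 - spread) * vertex + spread * noise
    print(f"spread={spread:.1f}  diversity={jensen_gap(members):.3f}  "
          f"ensemble NLL={ensemble_nll(members):.3f}")
```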
We note that Observation 1 is only harmful in settings where Observation 2 applies. Thus, while regression and low-capacity classification models are also subject to Observation 1 (indiscriminate predictive diversity increases), the effect is not inherently harmful since its predictions do not lie on boundary points. ## 6 Related Work **Traditional ensembles.** Historically, ensembling has been applied to less powerful base models such as decision trees (e.g. Freund, 1995; Breiman, 1996; Dietterich, 1998; Breiman, 2001; Cutler and Zhao, 2001), a notably different setting than modern deep networks. Breiman (2001) identifies the performance of base learners and diversity between ensemble members as the key factors that bound the ensemble loss. More generally, it has proven difficult to identify an explicit relationship between diversity encouraging metrics and resulting ensemble performance (Kuncheva and Whitaker, 2003). More recent work (Abe et al., 2022b; Wood et al., 2023) propose that diversity metrics can be derived directly from the methods used to combine or evaluate ensemble member predictions. Thus, there has been considerable literature on diversity encouragement, but none has considered the potentially negative consequences of diversity in modern deep networks. **Deep ensembles.** Ensembles of neural networks have long been considered (e.g. Hansen and Salamon, 1990; Perrone and Cooper, 1992), but _deep ensembles_ generally refer to ensembles formed over the random initialization of modern deep networks. Deep ensembles are most commonly deployed for classification, at times even reformulating regression as classification (Stewart et al., 2022). (As part of a larger analysis, Gupta et al. (2022) demonstrate differences between regression and classification ensembles). Simple ensembling over random initialization is notably effective (e.g. Lee et al., 2015; Lakshminarayanan et al., 2017; Fort et al., 2019) and remains the benchmark approach. Although it has been shown that models with increased diversity can boost performance (Gontijo-Lopes et al., 2022), there has been no consistently demonstrated benefit to encouraging diversity during training, despite many proposals (e.g. Mishtal and Arel, 2012; Lee et al., 2015; Ross et al., 2020; Thakur et al., 2020; Webb et al., 2020; Wen et al., 2020; Gong et al., 2022; Ortega et al., 2022; Pagliardini et al., 2022; Teney et al., 2022; Zhao et al., 2022). Our results show that we might abandon this effort, as diversity encouragement in deep ensembles should in fact harm performance. ## 7 Discussion We demonstrate that encouraging predictive diversity is unexpectedly harmful to the performance of deep ensembles (specifically, large-capacity classifiers), and--conversely--that discouraging diversity is often beneficial. We show that these counterintuitive effects of predictive diversity wash away when ensembles are composed of regression models or low-capacity (low-confidence) classification models, and is thus entirely consistent with classical ensemble literature. Overall, our findings highlight a key difference between Figure 7: **Left: Regularization increases the diversity of all predictions, not just errors. (Top) histograms of per-datapoint predictive diversity (as measured by the cross-entropy Jensen gap) for ensembles with various regularizers. 
(Bottom) dividing all predictions into bins based on their predictive diversity, we compute the per-bin _counterfactual ensemble accuracy_—the likelihood that an ensemble would have made a correct prediction absent regularization for datapoints in a given bin. (Bins are smoothed with a Gaussian kernel.) (Main takeaway) while regularization increases predictive diversity on a per-datapoint level, regularized predictions with high diversity are counterfactually more likely to have been correct. Right: Regardless of encouragement strategy, diversity is impossible at extreme points. Under 3 different methods of adding diversity to a perfect ensemble prediction (geometric scaling, sampling in logit space, or sampling from a dirichlet distribution), _any_ of these strategies will degrade average single model loss more rapidly than increases in predictive diversity, leading to worse ensemble performance overall.** deep ensembles and classical ensembles, demonstrating how valuable intuitions from traditional settings break down when applied to large-capacity deep ensembles. As the field continues to improve the performance of base models, ensembles with diversity encouragement will have ever diminishing value, as doing so will encourage ensemble members to make errors they would otherwise avoid. ## Acknowledgements We thank John Miller for sharing models trained on CIFAR10, and Taori et al. (2020) for making their trained ImageNet models and code open sourced and easy to use. We would also like to thank Rich Zemel for his thoughtful suggestions, as well as Yixin Wang, Yaniv Yacoby, Zelda Mariet and Jeremy Bernstein for their insightful comments. TA is supported by NIH training grant 2T32NS064929-11. EKB is supported by NIH 5T32NS064929-13. GP and JPC are supported by the Simons Foundation, McKnight Foundation, and Grossman Center for the Statistics of Mind. TA, EKB, GP and JPC are supported by NSF 1707398 Gatsby Charitable Trust GAT3708.
2308.15434
Random feature approximation for general spectral methods
Random feature approximation is arguably one of the most popular techniques to speed up kernel methods in large scale algorithms and provides a theoretical approach to the analysis of deep neural networks. We analyze generalization properties for a large class of spectral regularization methods combined with random features, containing kernel methods with implicit regularization such as gradient descent or explicit methods like Tikhonov regularization. For our estimators we obtain optimal learning rates over regularity classes (even for classes that are not included in the reproducing kernel Hilbert space), which are defined through appropriate source conditions. This improves or completes previous results obtained in related settings for specific kernel algorithms.
Mike Nguyen, Nicole Mücke
2023-08-29T16:56:03Z
http://arxiv.org/abs/2308.15434v1
# Random feature approximation for general spectral methods ###### Abstract Random feature approximation is arguably one of the most popular techniques to speed up kernel methods in large scale algorithms and provides a theoretical approach to the analysis of deep neural networks. We analyze generalization properties for a large class of spectral regularization methods combined with random features, containing kernel methods with implicit regularization such as gradient descent or explicit methods like Tikhonov regularization. For our estimators we obtain optimal learning rates over regularity classes (even for classes that are not included in the reproducing kernel Hilbert space), which are defined through appropriate source conditions. This improves or completes previous results obtained in related settings for specific kernel algorithms. ## 1 Introduction The rapid technological progress has led to the accumulation of vast amounts of high-dimensional data in recent years. Consequently, to analyze such amounts of data it is no longer sufficient to create algorithms that solely aim for the best possible predictive accuracy. Instead, there is a pressing need to design algorithms that can efficiently process large datasets while minimizing computational overhead. In light of these challenges, two fundamental algorithmic tools have emerged: fast gradient methods and sketching techniques. Iterative gradient methods such as acceleration methods [14] or stochastic gradient methods [10] lead to favorable convergence rates while reducing computational complexity during learning. On the other hand, sketching techniques enable the reduction of the data dimension, thereby decreasing memory requirements through random projections. The allure of combining both methodologies has garnered significant attention from researchers and practitioners alike. Especially for kernel-based algorithms, various sketching tools have gained a lot of attention in recent years. Among non-parametric statistical approaches, kernel methods are still state of the art in many applications and provide an elegant and effective framework to develop theoretically optimal learning bounds [1, 12, 13]. However, these benefits come with a computational cost that makes such methods infeasible when dealing with large datasets. In fact, traditional kernelized learning algorithms require storing the kernel Gram matrix \(\mathbf{K}_{i,j}=K(x_{i},x_{j})\), where \(K(.,.)\) denotes the kernel function and \(x_{i},x_{j}\) the data points. This results in a memory cost of at least \(O(n^{2})\) and a time cost of up to \(O(n^{3})\), where \(n\) denotes the data set size [15]. The most popular sketching tools to overcome these issues are Nyström approximations [14] and random feature approximation (RFA) [13, 15]. In this paper, we investigate algorithms that combine fast learning methods with RFA and analyze the generalization performance of such algorithms. Related work was contributed by [13] and [14], who obtained optimal rates for Kernel Ridge Regression (KRR) and Stochastic Gradient Descent, respectively, with both algorithms combined with RFA. Using a general spectral filtering framework [12], we prove fast rates for all kinds of learning methods with implicit or explicit regularization, for example gradient descent and acceleration methods, and we also cover the results of [13] for KRR. Moreover, we overcome the saturation effect appearing in [13] and [14] by providing fast rates of convergence for objectives with any degree of smoothness. 
The rest of the paper is organized as follows. In Section 2, we present our setting and review relevant results on learning with kernels, and learning with random features. In Section 3, we present and discuss our main results, while proofs are deferred to the appendix. Finally, numerical experiments are presented in Section 4. **Notation.** By \(\mathcal{L}(\mathcal{H}_{1},\mathcal{H}_{2})\) we denote the space of bounded linear operators between real Hilbert spaces \(\mathcal{H}_{1}\), \(\mathcal{H}_{2}\). We write \(\mathcal{L}(\mathcal{H},\mathcal{H})=\mathcal{L}(\mathcal{H})\). For \(\Gamma\in\mathcal{L}(\mathcal{H})\) we denote by \(\Gamma^{T}\) the adjoint operator and for compact \(\Gamma\) by \((\lambda_{j}(\Gamma))_{j}\) the sequence of eigenvalues. If \(\theta\in\mathcal{H}\) we write \(\theta\otimes\theta:=\langle\cdot,\theta\rangle\theta\). We let \([n]=\{1,...,n\}\). For two positive sequences \((a_{n})_{n}\), \((b_{n})_{n}\) we write \(a_{n}\lesssim b_{n}\) if \(a_{n}\leq cb_{n}\) for some \(c>0\) and \(a_{n}\simeq b_{n}\) if both \(a_{n}\lesssim b_{n}\) and \(b_{n}\lesssim a_{n}\). ## 2 Setup We let \(\mathcal{X}\subset\mathbb{R}^{d}\) be the input space and \(\mathcal{Y}\subset\mathbb{R}\) be the output space. The unknown data distribution on the data space \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) is denoted by \(\rho\) while the marginal distribution on \(\mathcal{X}\) is denoted as \(\rho_{X}\) and the regular conditional distribution on \(\mathcal{Y}\) given \(x\in\mathcal{X}\) is denoted by \(\rho(\cdot|x)\), see e.g. [16]. Given a measurable function \(g:\mathcal{X}\to\mathbb{R}\) we further define the expected risk as \[\mathcal{E}(g):=\mathbb{E}[\ell(g(X),Y)]\, \tag{2.1}\] where the expectation is taken w.r.t. the distribution \(\rho\) and \(\ell:\mathbb{R}\times\mathcal{Y}\to\mathbb{R}_{+}\) is the least-square loss \(\ell(t,y)=\frac{1}{2}(t-y)^{2}\). It is known that the global minimizer of \(\mathcal{E}\) over the set of all measurable functions is given by the regression function \(g_{\rho}(x)=\int_{\mathcal{Y}}y\rho(dy|x)\). ### Motivation of Kernel Methods with RFA Kernel methods are nonparametric approaches defined by a kernel \(K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), that is a symmetric and positive definite function, and a so called regularisation function \(\phi_{\lambda}\). The estimator then has the form \[f_{\lambda}:=\phi_{\lambda}\Big{(}\widehat{\Sigma}\Big{)} \widehat{\mathcal{S}}^{*}\mathbf{y}, \tag{2.2}\] where \(\widehat{\mathcal{S}}^{*}\mathbf{y}:=\frac{1}{n}\sum_{i=1}^{n}y_{i}K_{x_{i}}\), \(\widehat{\Sigma}:=\frac{1}{n}\sum_{j=1}^{n}\bigl{\langle}\cdot,K_{x_{j}} \bigr{\rangle}_{\mathcal{H}}K_{x_{j}}\), \(K_{x}:=K(x,.)\) and \(\mathcal{H}\) denotes the reproducing kernel Hilbert space (RKHS) of \(K\). [15] established optimal rates for kernel methods of the above form. The idea of this estimator is, when the sample size \(n\) is large, the function \(\widehat{\mathcal{S}}^{*}\mathbf{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i}K_{x_{i}}\in \mathcal{H}\) is a good approximation of its mean \(\Sigma g_{\rho}=\int_{\mathcal{X}}g_{\rho}(x)K_{x}d\rho_{X}\). Hence the spectral algorithm (2.2) produces a good estimator \(f_{\lambda}\), if \(\phi_{\lambda}\Big{(}\widehat{\Sigma}\Big{)}\) is an approximate inverse of \(\Sigma\). To motivate RFA we now consider the following examples. 
The probably most common example for explicit regularisation is KRR: \[f_{\lambda}(x)=\sum_{i=1}^{n}\alpha_{i}K(x_{i},x),\quad\alpha=(\mathbf{K}+ \lambda nI)^{-1}y, \tag{2.3}\] where \(\mathbf{K}\) denotes the kernel gram matrix \(\mathbf{K}_{i,j}=K(x_{i},x_{j})\). Note that this estimator can be obtained from (2.2) by choosing \(\phi_{\lambda}(t)=\frac{1}{t+\lambda}\)[10]. In the above formula (2.3) the estimator has computational costs of order \(O(n^{3})\) since we need to calculate the inverse of an \(n\) by \(n\) matrix. However, if we assume to have a inner product kernel \(K_{M}(x,x^{\prime})=\Phi_{M}(x)^{\top}\Phi_{M}(x^{\prime})\), where \(\Phi_{M}\) is a feature map of dimension \(M\), the computational costs can be reduced to \(O(nM^{2}+M^{3})\)[11]. To also give an example of implicit regularization we here analyse an acceleration method, namely the Heavyball method which can also be derived from (2.2) [11] and is closely related to the normal gradient descent algorithm but has an additional momentum term: \[f_{t+1}=f_{t}-\frac{\alpha}{n}\sum_{j=1}^{n}(f_{t}(x_{j})-y_{j})K(x_{j},\cdot )+\beta(f_{t}-f_{t-1})\;, \tag{2.4}\] where \(\alpha>0,\beta\geq 0\) describe the step-sizes. So in each iteration we have to update our estimator \(f_{t}(x_{j})\) for all data points. This results in a computational cost of order \(O(tn^{2})\). However if we again assume to have a inner product kernel \(K_{M}(x,x^{\prime})=\Phi_{M}(x)^{\top}\Phi_{M}(x^{\prime})\) we can use theory of RKHS. Recall that the RKHS of \(K_{M}\) can be expressed as \[\mathcal{H}_{M}=\{h:\mathcal{X}\rightarrow\mathbb{R}|\;\exists\,\theta\in \mathbb{R}^{M}\;\;s.t.\;\;h(x)=\Phi_{M}(x)^{\top}\theta\}\] (see for example [10]). Since \(K_{M}\in\mathcal{H}_{M}\) and therefore all iterations \(f_{t}\in\mathcal{H}_{M}\), there exists some \(\theta_{t}\in\mathbb{R}^{M}\) such that \(f_{t}(x)=\Phi_{M}(x)^{\top}\theta_{t}\). This implies that instead of running (2.4) it is enough to update only the parameter vector: \[\theta_{t+1}=\theta_{t}-\frac{\alpha}{n}\sum_{j=1}^{n}(\Phi_{M}(x_{i})^{\top} \theta_{t}-y_{j})\Phi_{M}(x_{i})+\beta(\theta_{t}-\theta_{t-1})\;. \tag{2.5}\] The computational cost of the above algorithm (2.4) is therefore reduced from \(O(tn^{2})\) to \(O(tnM)\). The basic idea of RFA is now to consider kernels which can be approximated by an inner product [11]: \[K_{\infty}(x,y)\approx K_{M}(x,y):=\sum_{i=1}^{p}\Phi_{M}^{(i)}(x)^{\top}\Phi_ {M}^{(i)}(y), \tag{2.6}\] where \(\Phi_{M}^{(i)}:\mathcal{X}\rightarrow\mathbb{R}^{M}\), \(\Phi_{M}^{(i)}(x)=M^{-1/2}(\varphi^{(i)}(x,\omega_{1}),\ldots,\varphi^{(i)}(x, \omega_{M}))\) is a finite dimensional feature map and \(\varphi^{(i)}:\mathcal{X}\times\Omega\rightarrow\mathbb{R}\) with some probability space \((\Omega,\pi)\). More precisely this paper investigates RFA for kernels \(K\) which have an integral representation of the form \[K_{\infty}(x,y)=\sum_{i=1}^{p}\int_{\Omega}\varphi^{(i)}(x,\omega)\varphi^{(i)}(y,\omega)d\pi(\omega). \tag{2.7}\] Note that there are a large variety of standard kernels of the form (2.7) which can be approximate by (2.6). For example, the Linear kernel, the Gaussian kernel [11] or Tangent kernels [12]. In contrast to [11], we added an additional sum over different feature maps \(\Phi_{M}^{(i)}\), for a more general setting and to cover a special case of Tangent kernels namely the Neural-Tangent Kernel (NTK) [13] which provided a better understanding of neural networks in recently published papers [10, 13, 14, 15]. 
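To illustrate the approximation (2.6)-(2.7) and the computational savings discussed above, the following sketch constructs random Fourier features for the Gaussian kernel (which admits the integral representation (2.7) with \(p=1\)) and fits a ridge-regression estimator in the \(M\)-dimensional feature space, so that only an \(M\times M\) system is solved instead of the \(n\times n\) system in (2.3). The feature construction is the standard Rahimi-Recht recipe; the toy data, bandwidth, and regularization parameter are illustrative assumptions, not choices analyzed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, M, sigma, lam = 2000, 5, 300, 1.0, 1e-2

# data from an arbitrary toy regression problem (illustrative only)
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# random Fourier features: E[phi(x, w) phi(x', w)] recovers the Gaussian kernel, cf. (2.7)
W = rng.normal(scale=1.0 / sigma, size=(M, d))        # frequencies w_1, ..., w_M ~ pi
b = rng.uniform(0.0, 2 * np.pi, size=M)               # random phases
def features(X):
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)     # Phi_M(x) in R^M, cf. (2.6)

Phi = features(X)                                     # (n, M), cost O(nMd)
# ridge regression in feature space: an (M x M) solve instead of the (n x n) solve in (2.3)
theta = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(M), Phi.T @ y)   # cost O(nM^2 + M^3)

X_test = rng.normal(size=(5, d))
print(features(X_test) @ theta)                       # predictions f_lambda(x) = Phi_M(x)^T theta
```

The same parameter-space trick applies to implicit regularization: replacing the linear solve above by the momentum iteration (2.5) on \(\theta\) yields the Heavyball estimator at cost \(O(tnM)\).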
For one "hidden layer" the NTK is defined as \[K_{\infty}\big{(}x,x^{\prime}\big{)}\coloneqq\int_{\Omega}\sigma\Big{(}\omega^{\top}x\Big{)}\sigma\Big{(}\omega^{\top}x^{\prime}\Big{)}+\tau^{2}\Big{(}x^{\top}x^{\prime}+\gamma^{2}\Big{)}\sigma^{\prime}\Big{(}\omega^{\top}x\Big{)}\sigma^{\prime}\Big{(}\omega^{\top}x^{\prime}\Big{)}d\pi(\omega), \tag{2.8}\] where \(\tau,\gamma\in\mathbb{R}\) and \(\sigma\) denotes the so-called activation function. According to our setting, the NTK from above can be recovered from (2.7) by setting \(p=d+2\), where \(d\) denotes the input dimension, and \(\varphi^{(i)}(x,\omega)=\tau x^{(i)}\sigma^{\prime}\big{(}\omega^{\top}x\big{)}\) for \(i\in[d]\), \(\varphi^{(d+1)}(x,\omega)=\sigma\big{(}\omega^{\top}x\big{)}\), \(\varphi^{(d+2)}(x,\omega)=\tau\gamma\sigma^{\prime}\big{(}\omega^{\top}x\big{)}\). ### Kernel-induced operators and spectral regularization functions In this subsection, we specify the mathematical background of regularized learning. It essentially repeats the setting in [1] in summarized form. First we introduce kernel-induced operators and then recall basic definitions of linear regularization methods based on spectral theory for self-adjoint linear operators. These are standard methods for finding stable solutions of ill-posed inverse problems. Originally, these methods were developed in the deterministic context (see [1]). Later on, they have been applied to probabilistic problems in machine learning (see, e.g., [1] or [1]). Recall that \(\mathcal{H}_{M}\) denotes the RKHS of the kernel \(K_{M}\) defined in (2.6). We denote by \(\mathcal{S}_{M}:\mathcal{H}_{M}\hookrightarrow L^{2}(\mathcal{X},\rho_{X})\) the inclusion of \(\mathcal{H}_{M}\) into \(L^{2}(\mathcal{X},\rho_{X})\) for \(M\in\mathbb{N}\cup\{\infty\}\). The adjoint operator \(\mathcal{S}_{M}^{*}:L^{2}(\mathcal{X},\rho_{X})\longrightarrow\mathcal{H}_{M}\) is identified as \[\mathcal{S}_{M}^{*}g=\int_{\mathcal{X}}g(x)K_{M,x}\rho_{X}(dx),\] where \(K_{M,x}\) denotes the element of \(\mathcal{H}_{M}\) equal to the function \(t\mapsto K_{M}(x,t)\). The covariance operator \(\Sigma_{M}:\mathcal{H}_{M}\longrightarrow\mathcal{H}_{M}\) and the kernel integral operator \(\mathcal{L}_{M}:L^{2}(\mathcal{X},\rho_{X})\to L^{2}(\mathcal{X},\rho_{X})\) are given by \[\Sigma_{M}f\coloneqq\mathcal{S}_{M}^{*}\mathcal{S}_{M}f=\int_{\mathcal{X}}\langle f,K_{M,x}\rangle_{\mathcal{H}_{M}}K_{M,x}\rho_{X}(dx),\] \[\mathcal{L}_{M}f\coloneqq\mathcal{S}_{M}\mathcal{S}_{M}^{*}f=\int_{\mathcal{X}}f(x)K_{M,x}\rho_{X}(dx),\] which can be shown to be positive, self-adjoint, and trace class (and hence compact). 
The empirical versions of these operators, corresponding formally to taking the empirical distribution of \(\rho_{X}\) in the above formulas, are given by \[\widehat{\mathcal{S}}_{M}:\mathcal{H}_{M}\longrightarrow\mathbb{R}^{n},\qquad\left(\widehat{\mathcal{S}}_{M}f\right)_{j}=\big{\langle}f,K_{M,x_{j}}\big{\rangle}_{\mathcal{H}_{M}},\] \[\widehat{\mathcal{S}}_{M}^{*}:\mathbb{R}^{n}\longrightarrow\mathcal{H}_{M},\qquad\widehat{\mathcal{S}}_{M}^{*}\mathbf{y}=\frac{1}{n}\sum_{j=1}^{n}y_{j}K_{M,x_{j}},\] \[\widehat{\Sigma}_{M}:=\widehat{\mathcal{S}}_{M}^{*}\widehat{\mathcal{S}}_{M}:\mathcal{H}_{M}\longrightarrow\mathcal{H}_{M},\qquad\widehat{\Sigma}_{M}=\frac{1}{n}\sum_{j=1}^{n}\big{\langle}\cdot,K_{M,x_{j}}\big{\rangle}_{\mathcal{H}_{M}}K_{M,x_{j}}.\] Further, let \(\mu_{j}\) denote the positive eigenvalues of \(\Sigma_{\infty}\), satisfying \(0<\mu_{j+1}\leq\mu_{j}\) for all \(j>0\) and \(\mu_{j}\searrow 0\). **Definition 2.1** (Regularization function).: _Let \(\phi:(0,1]\times[0,1]\rightarrow\mathbb{R}\) be a function and write \(\phi_{\lambda}=\phi(\lambda,.)\). The family \(\{\phi_{\lambda}\}_{\lambda}\) is called a regularisation function if the following conditions hold:_ 1. _There exists a constant_ \(D<\infty\) _such that for any_ \(0<\lambda\leq 1\)__ \[\sup_{0<t\leq 1}|t\phi_{\lambda}(t)|\leq D.\] (2.9) 2. _There exists a constant_ \(E<\infty\) _such that for any_ \(0<\lambda\leq 1\)__ \[\sup_{0<t\leq 1}|\phi_{\lambda}(t)|\leq\frac{E}{\lambda}.\] 3. _Defining the residual_ \(r_{\lambda}(t):=1-\phi_{\lambda}(t)t\)_, there exists a constant_ \(c_{0}<\infty\) _such that for any_ \(0<\lambda\leq 1\)__ \[\sup_{0<t\leq 1}|r_{\lambda}(t)|\leq c_{0}.\] (2.10) It has been shown in e.g. Gerfo et al. (2008), Dicker et al. (2017), and Blanchard and Mucke (2017) that attainable learning rates are essentially linked with the qualification of the regularization \(\left\{\phi_{\lambda}\right\}_{\lambda}\), being the maximal \(\nu\) such that for any \(q\in[0,\nu]\) and for any \(0<\lambda\leq 1\) \[\sup_{0<t\leq 1}|r_{\lambda}(t)|t^{q}\leq c_{q}\lambda^{q}, \tag{2.11}\] for some constant \(c_{q}>0\). ## 3 Main Results ### Assumptions and Main Results In this section we formulate our assumptions and state our main results. **Assumption 3.1** (Data Distribution).: _There exist positive constants \(Q\) and \(Z\) such that for all \(l\geq 2\) with \(l\in\mathbb{N}\),_ \[\int_{\mathcal{Y}}|y|^{l}d\rho(y\mid x)\leq\frac{1}{2}l!Z^{l-2}Q^{2}\] \(\rho_{X}\)_-almost surely._ The above assumption is very standard in statistical learning theory. It is for example satisfied if \(y\) is bounded almost surely. Obviously, this assumption implies that the regression function \(g_{\rho}\) is bounded almost surely, as \[|g_{\rho}(x)|\leq\int_{\mathbb{R}}|y|d\rho(y\mid x)\leq\left(\int_{\mathbb{R}}|y|^{2}d\rho(y\mid x)\right)^{\frac{1}{2}}\leq Q.\] **Assumption 3.2** (Kernel).: _Assume that the kernel \(K_{\infty}\) has an integral representation of the form (2.7) with \(\sum_{i=1}^{p}|\varphi^{(i)}(x,\omega)|^{2}\leq\kappa^{2}\) almost surely._ **Assumption 3.3** (Source Condition).: _Let \(R>0\), \(r>0\). Denote by \(\mathcal{L}_{\infty}:L^{2}(\mathcal{X},\rho_{X})\to L^{2}(\mathcal{X},\rho_{X})\) the kernel integral operator associated to \(K_{\infty}\). We assume_ \[g_{\rho}=\mathcal{L}_{\infty}^{r}h\;, \tag{3.1}\] _for some \(h\in L^{2}(\mathcal{X},\rho_{X})\) satisfying \(||h||_{L^{2}}\leq R\)._ This assumption characterizes the hypothesis space and relates to the regularity of the regression function \(g_{\rho}\). 
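To make the source condition more tangible, note that in the eigenbasis of \(\mathcal{L}_{\infty}\) it is a decay condition on the coefficients of \(g_{\rho}\) (an illustrative reformulation, assuming for simplicity that \(g_{\rho}\) lies in the span of the eigenfunctions \(e_{j}\) of \(\mathcal{L}_{\infty}\) with eigenvalues \(\mu_{j}\)): writing \(g_{\rho}=\sum_{j}g_{j}e_{j}\), condition (3.1) with \(||h||_{L^{2}}\leq R\) is equivalent to \[\sum_{j}\mu_{j}^{-2r}g_{j}^{2}\leq R^{2},\] so the larger \(r\) is, the faster the coefficients \(g_{j}\) must decay relative to the eigenvalues.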
The bigger \(r\) is, the smaller the hypothesis space is, the stronger the assumption is, and the easier the learning problem is, as \(\mathcal{L}^{r_{1}}\big{(}L^{2}_{\rho_{X}}\big{)}\subseteq\mathcal{L}^{r_{2}}\big{(}L^{2}_{\rho_{X}}\big{)}\) if \(r_{1}\geq r_{2}\). The next assumption relates to the capacity of the hypothesis space. **Assumption 3.4** (Effective Dimension).: _For some \(b\in[0,1]\) and \(c_{b}>0\), \(\Sigma_{\infty}\) satisfies_ \[\mathcal{N}_{\mathcal{L}_{\infty}}(\lambda):=\operatorname{tr}\bigl{(}\mathcal{L}_{\infty}(\mathcal{L}_{\infty}+\lambda I)^{-1}\bigr{)}\leq c_{b}\lambda^{-b},\quad\text{ for all }\lambda>0 \tag{3.2}\] _and further we assume that \(2r+b>1\)._ The left-hand side of (3.2) is called the effective dimension or degrees of freedom [1]. It is related to covering/entropy number conditions, see [15]. The condition (3.2) is naturally satisfied with \(b=1\), since \(\Sigma\) is a trace class operator, which implies that its eigenvalues \(\left\{\mu_{i}\right\}_{i}\) satisfy \(\mu_{i}\lesssim i^{-1}\). Moreover, if the eigenvalues of \(\Sigma\) satisfy a polynomial decay condition \(\mu_{i}\sim i^{-c}\) for some \(c>1\), or if \(\Sigma\) is of finite rank, then the condition (3.2) holds with \(b=1/c\) or with \(b=0\), respectively. The case \(b=1\) is referred to as the capacity-independent case. A smaller \(b\) allows deriving faster convergence rates for the studied algorithms. The assumption \(2r+b>1\) refers to easy learning problems, and if \(2r+b\leq 1\) one speaks of hard learning problems [20]. In this paper we only investigate easy learning problems and leave the question of how many features \(M\) are needed to obtain optimal rates in hard learning problems [17] open for future work. We now derive a generalisation bound of the excess risk \(\|g_{\rho}-\mathcal{S}_{M}f_{\lambda}^{M}\|_{L^{2}(\rho_{x})}\) with respect to our RFA estimator, \[f_{\lambda}^{M}\coloneqq\phi_{\lambda}(\widehat{\Sigma}_{M})\widehat{\mathcal{S}}_{M}^{*}y\,.\] The main idea of our proof is based on a bias-variance type decomposition: further introducing \[f_{\lambda}^{*}\coloneqq\mathcal{S}_{M}^{*}\phi_{\lambda}(\mathcal{L}_{M})g_{\rho},\] we write \[\|g_{\rho}-\mathcal{S}_{M}f_{\lambda}^{M}\|_{L^{2}(\rho_{x})}\leq\|g_{\rho}-\mathcal{S}_{M}f_{\lambda}^{*}\|_{L^{2}(\rho_{x})}+\|\mathcal{S}_{M}f_{\lambda}^{*}-\mathcal{S}_{M}f_{\lambda}^{M}\|_{L^{2}(\rho_{x})} \tag{3.3}\] \[=:\ \text{BIAS}\ +\ \text{VARIANCE}. \tag{3.4}\] We bound the bias and variance parts separately in Propositions A.1 and A.3 to obtain the following theorem. **Theorem 3.5**.: _Provided Assumptions 3.1, 3.2, 3.3 and 3.4 hold, we have for \(\lambda=Cn^{-\frac{1}{2r+b}}\log^{3}(2/\delta)\) and \(\delta\in(0,1)\), with probability at least \(1-\delta\),_ \[\|g_{\rho}-\mathcal{S}_{M}f_{\lambda}^{M}\|_{L^{2}(\rho_{x})}\leq\|g_{\rho}-\mathcal{S}_{M}f_{\lambda}^{*}\|_{L^{2}(\rho_{x})}+\|\mathcal{S}_{M}f_{\lambda}^{*}-\mathcal{S}_{M}f_{\lambda}^{M}\|_{L^{2}(\rho_{x})}\leq\bar{C}n^{-\frac{r}{2r+b}}\log^{3r+1}\!\left(\frac{18}{\delta}\right)\] _as long as \(\nu\geq 0.5+r\lor 1\),_ \[M\geq\tilde{C}\log(n)\cdot\begin{cases}n^{\frac{1}{2r+b}}&r\in\left(0,\frac{1}{2}\right)\\ n^{\frac{1+b(2r-1)}{2r+b}}&r\in\left[\frac{1}{2},1\right]\\ n^{\frac{2r}{2r+b}}&r\in(1,\infty)\end{cases}\] _and \(n\geq n_{0}:=e^{\frac{2r+b}{2r+b-1}}\), where the constants \(C,\tilde{C}\) and \(\bar{C}\) are independent of \(n,M,\lambda\) and can be found in Section A.1._ If we cannot make any assumption on the effective dimension, i.e. 
assuming the worst case \(b=1\), we obtain the following corollary. **Corollary 3.6**.: _Provided Assumptions 3.1, 3.2 and 3.3 hold with \(r=0.5\), we have for \(\lambda=Cn^{-\frac{1}{2}}\log^{3}(2/\delta)\) and \(\delta\in(0,1)\), with probability at least \(1-\delta\),_ \[\|g_{\rho}-\mathcal{S}_{M}f_{\lambda}^{M}\|_{L^{2}(\rho_{x})}^{2}\leq\bar{C}^{2}n^{-\frac{1}{2}}\log^{5}\!\left(\frac{18}{\delta}\right)\] _as long as \(\nu\geq 0.5+r\lor 1\), \(M\geq\tilde{C}\log(n)\cdot n^{\frac{1}{2}}\) and \(n\geq 8\)._ ## 4 Numerical Illustration We analyze the behavior of kernel GD (algorithm (2.5) for \(\beta=0\)) with the RF of the NTK kernel (2.8). In our simulations we used \(n=5000\) training and test data points from a standard normally distributed data set with input dimension \(d=1\) and a subset of the SUSY\({}^{1}\) classification data set with input dimension \(d=14\). The values we show in the following simulation are averages over 50 repetitions of the algorithm. Our theoretical analysis suggests that a number of RF of the order of \(M=O(\sqrt{n}\cdot d)\) already suffices to obtain optimal learning properties. Indeed, in Figure 1 we can observe for both data sets that above a certain threshold of the order \(M=O(\sqrt{n}\cdot d)\), increasing the number of RF does not improve the test error of our algorithm. Footnote 1: [https://archive.ics.uci.edu/ml/datasets/SUSY](https://archive.ics.uci.edu/ml/datasets/SUSY) Figure 1: Heat plot of the test error for different numbers of RF \(M\) and iterations \(T\). **Left:** Error on the SUSY data set. **Right:** Error on the random data set.
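For reference, a self-contained sketch in the spirit of this experiment could look as follows (illustrative only: we assume a ReLU activation, a toy regression target and small sample sizes; none of these choices are claimed to match the exact configuration behind the figure).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 1
X, X_te = rng.standard_normal((n, d)), rng.standard_normal((n, d))
target = lambda Z: np.sin(2.0 * Z[:, 0])            # toy regression function
y, y_te = target(X) + 0.1 * rng.standard_normal(n), target(X_te)

def ntk_features(Z, omegas, tau=1.0, gamma=1.0):
    """Random features Phi_M of the one-hidden-layer NTK (2.8), using the
    decomposition p = d + 2 from Section 2.1 with a ReLU activation."""
    pre = Z @ omegas.T                               # omega_m^T x for all samples and features
    sig, dsig = np.maximum(pre, 0.0), (pre > 0).astype(float)
    blocks = [tau * Z[:, [i]] * dsig for i in range(Z.shape[1])]   # phi^(i), i in [d]
    blocks += [sig, tau * gamma * dsig]              # phi^(d+1) and phi^(d+2)
    return np.concatenate(blocks, axis=1) / np.sqrt(omegas.shape[0])

def test_error(M, T=500, alpha=0.2):
    omegas = rng.standard_normal((M, d))
    Phi, Phi_te = ntk_features(X, omegas), ntk_features(X_te, omegas)
    theta = np.zeros(Phi.shape[1])
    for _ in range(T):                               # kernel GD: algorithm (2.5) with beta = 0
        theta -= alpha * Phi.T @ (Phi @ theta - y) / n
    return np.mean((Phi_te @ theta - y_te) ** 2)

for M in [5, 20, 50, 100, 200, 400]:
    print(M, test_error(M))                          # error should flatten once M is of order sqrt(n) * d
```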
2304.03183
History-deterministic Timed Automata
We explore the notion of history-determinism in the context of timed automata (TA) over infinite timed words. History-deterministic (HD) automata are those in which nondeterminism can be resolved on the fly, based on the run constructed thus far. History-determinism is a robust property that admits different game-based characterisations, and HD specifications allow for game-based verification without an expensive determinization step. We show that the class of timed $\omega$-languages recognized by HD timed automata strictly extends that of deterministic ones, and is strictly included in those recognised by fully non-deterministic TA. For non-deterministic timed automata, it is known that universality is already undecidable for safety/reachability TA. For history-deterministic TA with arbitrary parity acceptance, we show that timed universality, inclusion, and synthesis all remain decidable and are EXPTIME-complete. For the subclass of TA with safety or reachability acceptance, one can decide (in EXPTIME) whether such an automaton is history-deterministic. If so, it can be effectively determinized without introducing new automaton states.
Sougata Bose, Thomas A. Henzinger, Karoliina Lehtinen, Sven Schewe, Patrick Totzke
2023-04-06T16:08:38Z
http://arxiv.org/abs/2304.03183v7
# History-deterministic timed automata \({}^{*}\) ###### Abstract. We explore the notion of history-determinism in the context of timed automata (TA) over infinite timed words. History-deterministic (HD) automata are those in which nondeterminism can be resolved on the fly, based on the run constructed thus far. History-determinism is a robust property that admits different game-based characterisations, and HD specifications allow for game-based verification without an expensive determinization step. We show that the class of timed \(\omega\)-languages recognised by HD timed automata strictly extends that of deterministic ones, and is strictly included in those recognised by fully non-deterministic TA. For non-deterministic timed automata, it is known that universality is already undecidable for Buchi TA. For history-deterministic TA with arbitrary parity acceptance, we show that timed universality, inclusion, and synthesis all remain decidable and are ExpTime-complete. For the subclass of TA with safety or reachability acceptance, one can decide (in ExpTime) whether such an automaton is history-deterministic. If so, it can be effectively determinized without introducing new automaton states. Key words and phrases: Timed Automata, History-determinism, Good-for-games, fair simulation, synthesis. \({}^{*}\) This work has in part been presented at the 33rd International Conference on Concurrency Theory (CONCUR'22) [HLT22] and at the 16th International Workshop on Reachability Problems (RP'22) [BHL\({}^{+}\)22]. This work was supported in part by the ERC-2020-AdG 101020093. We also acknowledge support from the EPSRC, project number EP/V025848/1. ## 1. Introduction 
In the first part, we are concerned with solving the quintessential verification problem for timed systems, namely _timed language inclusion_, in the special case of history-deterministic (_i.e._ executable) specifications. Since universality is undecidable for general timed automata, so is the timed language-inclusion problem for nondeterministic specifications [1]. This is the reason why much previous work in timed verification has focused on identifying determinizable subclasses of timed automata, such as event-clock automata [1], and on studying deterministic extensions of the timed-automaton model, such as deterministic two-way timed automata [2]. Determinizable specifications can be complemented, thus supporting the _complementation-based_ approach to language inclusion: in order to check if every word accepted by the implementation \(A\) is also accepted by the specification \(B\), first determinize and complement \(B\), and then check the intersection with \(A\) for emptiness. We show that the history-determinism of specifications suffices for deciding timed language inclusion, which demonstrates that determinizability is not required. More precisely, we prove that if \(A\) is a timed automaton and \(B\) is a history-deterministic timed automaton, it can be decided in \(\textsc{ExpTime}\) if every timed word accepted by \(A\) is also accepted by \(B\) (Corollary 5.5). In contrast to the traditional complementation-based approach to language inclusion, the history-deterministic approach is _game-based_. Like the complementation-based approach, the game-based approach is best formulated in the generic setting of labelled transition systems with acceptance conditions, so-called _fair LTS_. The acceptance condition of a fair LTS declares a subset of the infinite runs of the LTS to be fair (a special case is _safety_ acceptance, which declares all infinite runs to be fair). 
Given two fair LTS \(A\) and \(B\), the language of \(A\) is included in the language of \(B\) if for every fair run of \(A\) there is a fair run of \(B\) over the same (infinite) word. A sufficient condition for the language inclusion between \(A\) and \(B\) is the existence of a fair simulation relation between the states of \(A\) and the states of \(B\), or equivalently, the existence of a winning strategy for player \(p_{B}\) in the following 2-player _fair simulation game_: (i) every transition chosen by player \(p_{A}\) on the state-transition graph \(A\) can be matched by a transition chosen by player \(p_{B}\) on the state-transition graph \(B\) with the same label (letter or time delay), and (ii) if the infinite sequence of transitions chosen by \(p_{A}\) produces a fair run of \(A\), then the matching transitions chosen by \(p_{B}\) produce a fair run of \(B\) [10]. Solving the fair simulation game is often simpler than checking language inclusion; it may be polynomial where language inclusion is not (_e.g._ in the case of finite safety or Buchi automata), or decidable where language inclusion is not (_e.g._ in the case of timed safety or Buchi automata [10]). We show that for all fair LTS \(A\) and all history-deterministic fair LTS \(B\), the condition that the language of \(A\) is included in the language of \(B\) is equivalent to the condition that \(A\) is fairly simulated by \(B\). This observation reduces the language inclusion problem for history-deterministic specifications to the problem of solving a fair simulation game between implementation and specification. The solution of fair simulation games depends on the complexity of the acceptance conditions of \(A\) and \(B\), but is often simpler than the complementation of \(B\), and fair simulation games can be solved even in the case of specifications that cannot be complemented. In Section 4.2 we show the existence of such a timed language. The game-based approach to checking language inclusion, which requires history-determinism, is therefore more general, and often more efficient, than the traditional complementation-based approach to checking language inclusion, which usually requires full determinization. Indeed, history-determinism is exactly the condition that allows the game-based approach to language inclusion: for a given fair LTS \(B\), if it can fairly simulate all fair LTS \(A\) whose language is included in the language of \(B\), then \(B\) must be history-deterministic (Theorem 3.4). More generally, turn-based timed games for which the winning condition is defined by a history-deterministic timed automaton are no harder to solve than those with deterministic winning conditions: the winner of such a timed game can be determined on the product of the (timed) arena with the automaton specifying the winning condition. We conjecture that this is the case also for the concurrent timed games of [1] (cf. Section 8). Timed games have also been defined for the synthesis of timed systems from timed I/O specifications. Again, we show that the synthesis game of [1] can be solved not only for I/O specifications that are given by deterministic timed automata, but more generally, for those given by history-deterministic timed automata (Theorem 7.2). The second part of our results investigates the problem of deciding history-determinism for timed automata and the determinizability of history-deterministic timed automata. 
In this part, we have only partial results, namely results for timed safety and reachability automata. Timed safety automata, in particular, constitute an important class of specifications, as many interesting timed and untimed properties can be specified by timed safety automata if time is required to diverge [14, 15]. We prove that for timed safety automata and timed reachability automata, it can be decided in ExpTime if a given timed automaton is history-deterministic (Theorem 6.6). Checking history-determinism remains open for more general classes of timed automata, such as timed Buchi and coBuchi automata. We also show that every history-deterministic timed safety or reachability automaton can be determinized, without increasing the number of automaton states, but with an exponential increase in the number of transitions or length of guards (Theorem 4.5). Since the question of determinizability is undecidable for nondeterministic timed reachability automata [13], it follows from our result that checking if a given non-deterministic timed reachability automaton has an equivalent HD timed reachability automaton is also undecidable. Finally, we show that if a timed safety or reachability automaton is good-for-games (in the sense explained earlier), then the automaton must be history-deterministic (Theorem 5.8). This implication is open for more general classes of timed automata. All our results hold regardless of whether one assumes weak or strong progression of time (zero-delays are allowed, resp. forbidden) and also whether time must diverge, i.e., infinite words of finite duration (Zeno words) are considered or not. ### Related Work The notion of history-determinism was introduced independently, with slightly different definitions, by Henzinger and Piterman [12] for solving games without determinization, by Colcombet [10] for cost-functions, and by Kupferman, Safra, and Vardi [11] for recognising derived tree languages of word automata. Initially, history-determinism was mostly studied in the \(\omega\)-regular setting, where these different definitions all coincide [1]. For some coBuchi-recognisable languages, history-deterministic automata can be exponentially more succinct than any equivalent deterministic automaton [12], and for Buchi and coBuchi automata, history-determinism is decidable in polynomial time [1, 13]. For transition-based history-deterministic automata, minimisation is in PTime [1], while for state-based ones, it is NP-complete [14]. Recently, the notion has been extended to richer automata models, such as pushdown automata [1, 2] and quantitative automata [1, 2], where deterministic and nondeterministic models have different expressivity, and therefore allowing a little bit of nondeterminism can, in addition to succinctness, also provide more expressivity. **Paper Structure.** After defining preliminary notions, we proceed to introduce history-determinism, and show a new, fair-simulation-based characterisation in Section 3. In Section 4 we consider the expressivity of HD timed automata with different parity acceptance conditions. We show that history-deterministic TA with safety or reachability acceptance are determinizable. Section 5 considers questions concerning timed games, timed synthesis, and timed language inclusion and shows that history-determinism coincides with good-for-gameness for reachability and safety TA. In Section 6 we derive that one can decide, in ExpTime, if a given safety or reachability TA is history-deterministic. 
In Section 7 we study synthesis games with winning conditions given by history-deterministic TA and show that solving the corresponding synthesis game remains in ExpTime, as for deterministic TA. We conclude with a summary and some open problems and conjectures. ## 2. Preliminaries **Numbers, Words.** Let \(\mathbb{N}\) and \(\mathbb{R}_{\geq 0}\) denote the nonnegative integers and reals, respectively. For \(c\in\mathbb{R}_{\geq 0}\) we write \(\lfloor c\rfloor\) for its integer part and \(\mathit{frac}(c)\stackrel{{\mathrm{def}}}{{=}}c-\lfloor c\rfloor\) for its fractional part. An alphabet \(\Sigma\) is a nonempty set of letters. \(\Sigma_{\varepsilon}\) denotes \(\Sigma\cup\{\varepsilon\}\). \(\Sigma^{*}\) and \(\Sigma^{\omega}\) denote the sets of finite and infinite words over \(\Sigma\), respectively, and \(\Sigma^{\infty}=\Sigma^{*}\cup\Sigma^{\omega}\) denotes their union. The empty word is denoted by \(\varepsilon\), the length of a finite word \(v\) is denoted by \(|v|\), and the \(n\)-th letter of a finite or infinite word is denoted by \(w[n]\) (starting with \(n=0\)). **Labelled Transition Systems, Languages, Fair Simulation.** A _labelled transition system_ (LTS) is a graph \(S=(V,\Sigma,E)\) with set \(V\) of states and edges \(E\ \subseteq\ V\times\Sigma\times V\), labelled by alphabet \(\Sigma\). It is _deterministic_ if for all \((s,a)\in V\times\Sigma\) there is at most one \(s^{\prime}\) with \(s\stackrel{{ a}}{{\longrightarrow}}s^{\prime}\), and _complete_ if for all \((s,a)\in V\times\Sigma\) there is at least one \(s^{\prime}\) with \(s\stackrel{{ a}}{{\longrightarrow}}s^{\prime}\). We henceforth consider only complete LTSs. Together with an _acceptance condition_ \(\mathit{Acc}\subseteq E^{\omega}\) this can be used to define languages over \(\Sigma\) as usual: a word \(w=l_{0}l_{1}\ldots\in\Sigma^{\omega}\) is accepted from \(s_{0}\) if there is a path (also \(\mathit{run}\)) \(\rho=s_{0}\stackrel{{ l_{0}}}{{\longrightarrow}}s_{1}\stackrel{{ l_{1}}}{{\longrightarrow}}s_{2}\ldots\) that is accepting, i.e., in \(\mathit{Acc}\). The _language_ \(L(s_{0})\subseteq\Sigma^{\omega}\) of an initial state \(s_{0}\in V\) consists of all words for which there exists an accepting run from \(s_{0}\). We will write \(s\subseteq_{L}s^{\prime}\) to denote language inclusion, meaning \(L(s)\subseteq L(s^{\prime})\). The acceptance condition \(\mathit{Acc}\) can be given by a parity condition but does not have to be. In this paper we especially consider reachability conditions (does the run visit a state in a given target set \(T\subseteq V\)?) and safety conditions (does the run always stay in a "safe" region \(F\subseteq V\)?). An LTS together with an acceptance condition is referred to as a _fair_ LTS [1]. _Fair_ simulations [1] are characterised by simulation games on (a pair of) fair LTSs in which Player 1 stepwise produces a path from \(s\), and Player 2 stepwise produces an equally labelled path from \(s^{\prime}\). Player 2 wins if she produces an accepting run whenever Player 1 does. That is, \(s\) is fairly simulated by \(s^{\prime}\) (write \(s\preceq s^{\prime}\)) iff Player 2 has a strategy in the simulation game so that, whenever the run produced by Player 1 is accepting then so is the run produced by Player 2 in response. Fair simulation \(s\preceq s^{\prime}\) implies language inclusion \(L(s)\subseteq L(s^{\prime})\) but not vice versa. 
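As a toy illustration of the simulation idea, the following sketch computes ordinary simulation between two finite LTSs by a greatest-fixpoint refinement (our own code; it ignores the fairness condition, so it coincides with fair simulation only in the special case where every infinite run is fair, e.g. safety acceptance with all states safe).

```python
def simulation(lts_a, lts_b):
    """Greatest-fixpoint computation of simulation between two finite LTSs,
    each given as a dict: state -> {letter -> set of successor states}.
    Returns all pairs (s, t) such that t simulates s."""
    rel = {(s, t) for s in lts_a for t in lts_b}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            for a, succs in lts_a[s].items():
                # every a-move of Player 1 from s must be matched by an a-move from t
                matched = all(any((s2, t2) in rel for t2 in lts_b[t].get(a, ()))
                              for s2 in succs)
                if not matched:
                    rel.discard((s, t))
                    changed = True
                    break
    return rel

# Example: b0 simulates a0, since it can always answer the letter 'x'.
A = {"a0": {"x": {"a0"}}}
B = {"b0": {"x": {"b0", "b1"}}, "b1": {"x": {"b1"}}}
assert ("a0", "b0") in simulation(A, B)
```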
**Timed Alphabets, Words, and LTSs.** For any alphabet \(\Sigma\) let \(\Sigma_{T}\) denote the timed alphabet \(\{(a,t)|a\in\Sigma,t\in\mathbb{R}_{\geq 0}\}\). A timed word is a finite or infinite word \(w\in(\Sigma_{T})^{\infty}\) consisting of letters in \(\Sigma\) paired with distinct non-negative non-decreasing real-valued timestamps. We will also write \(d_{0}a_{0}d_{1}a_{1}...\) to denote a timed word \((a_{i},t_{i})\in\Sigma_{T}^{\infty}\) where \(t_{0}=d_{0}\) and \(t_{i+1}=t_{i}+d_{i+1}\). Conversely, the duration and the timed word of any sequence in \((\Sigma\cup\mathbb{R})^{\infty}\) is given inductively as follows. For any \(d\in\mathbb{R}_{\geq 0}\), \(\tau\in\Sigma\), \(\alpha\in(\Sigma\cup\mathbb{R})^{*}\), and \(\beta\in(\Sigma\cup\mathbb{R})^{\infty}\) let \(duration(\tau)\stackrel{{\mathrm{def}}}{{=}}0\); \(duration(d)\stackrel{{\mathrm{def}}}{{=}}d\); \(duration(\alpha\beta)=duration(\alpha)+duration(\beta)\); \(tword(\varepsilon)=tword(d)\stackrel{{\mathrm{def}}}{{=}}\varepsilon\); \(tword(\alpha d)\stackrel{{\mathrm{def}}}{{=}}tword(\alpha)\); and \(tword(\alpha\tau)\stackrel{{\mathrm{def}}}{{=}}tword(\alpha)( \tau,duration(\alpha))\). An infinite timed word of finite duration is called a _Zeno_ word. A _timed_ LTS is one with edge labels in \(\Sigma\uplus\mathbb{R}_{\geq 0}\), so that edges labelled by \(\mathbb{R}_{\geq 0}\) (modelling the passing of time) satisfy the following conditions for all \(\alpha,\beta,\gamma\in V\) and \(d,d^{\prime}\in\mathbb{R}_{\geq 0}\). 1. (Zero-delay): \(\alpha\stackrel{{ 0}}{{\to}}\alpha\), 2. (Determinism): If \(\alpha\stackrel{{ d}}{{\to}}\beta\wedge\alpha\stackrel{{ d}}{{\to}}\gamma\) then \(\beta=\gamma\), 3. (Additivity): \(\alpha\stackrel{{ d}}{{\to}}\beta\stackrel{{ d^{\prime}}}{{\to}}\gamma\) then \(\alpha\stackrel{{ d+d^{\prime}}}{{\to}}\gamma\). The timed language \(L_{T}(s)\subseteq\Sigma_{T}^{\omega}\) of a state \(s\) consists of all the timed words read along accepting runs \(L_{T}(s)\stackrel{{\mathrm{def}}}{{=}}tword(L(s))\). We write \(L(S)\) for the timed language of the initial state of the LTS \(S\). **Timed Automata.** Timed automata are finite-state automata equipped with finitely many real-valued variables called _clocks_, whose transitions are guarded by constraints on clocks. Constraints on clocks \(C=\{x,y,\dots\}\) are (in)equalities \(x\triangleleft n\) where \(x\in C\), \(n\in\mathbb{N}\) and \(\triangleleft\in\{\leq,<\}\). Let \(\mathcal{B}(C)\) denote the set of Boolean combinations of clock constraints, called _guards_. A clock _valuation_\(\nu\in\mathbb{R}^{C}\) assigns a value \(\nu(x)\) to each clock \(x\in C\). We write \(\nu\models g\) if \(\nu\) satisfies the guard \(g\). A timed automaton (TA) \(\mathcal{T}=(Q,\iota,C,\Delta,\Sigma,\mathit{Acc})\) is given by * \(Q\) a finite set of states including an initial state \(\iota\); * \(\Sigma\) an input alphabet; * \(C\) a finite set of clocks; * \(\Delta\subseteq Q\times\mathcal{B}(C)\times\Sigma\times 2^{C}\times Q\) a set of transitions; each transition is associated with a guard, a letter, and a set of clocks to reset. A transition that reads letter \(a\in\Sigma\) will be called an \(a\)-transition. We assume that for all \((s,\nu,a)\in Q\times\mathbb{R}_{\geq 0}^{C}\times\Sigma\) there is at least one transition \((s,g,a,r,s^{\prime})\in\Delta\) so that \(\nu\) satisfies \(g\). * \(\mathit{Acc}\subseteq\Delta^{\omega}\) an acceptance condition. Timed automata induce timed LTSs, and can thus be used to define timed languages, as follows. 
A _configuration_ is a pair consisting of a control state and a clock valuation. These can evolve in two ways, as follows. For all configurations \((s,\nu)\in Q\times\mathbb{R}_{\geq 0}^{C}\), * there is a _delay_ step \((s,\nu)\stackrel{{ d}}{{\longrightarrow}}(s,\nu+d)\) for every \(d\geq 0\), which increments all clocks by \(d\). * there is a _discrete_ step \((s,\nu)\stackrel{{\tau}}{{\longrightarrow}}(s^{\prime},\nu^{\prime})\) if \(\tau=(s,g,a,r,s^{\prime})\in\Delta\) is a transition so that \(\nu\) satisfies \(g\) and \(\nu^{\prime}=\nu[r\to 0]\), that is, it maps \(r\) to \(0\) and agrees with \(\nu\) on all other values. Naturally, every delay \(d\) yields a unique successor configuration and so, for any two \(d,d^{\prime}\geq 0\) and valuations \(\nu,\nu^{\prime}\), \(\nu\stackrel{{ d}}{{\longrightarrow}}\stackrel{{ d^{\prime}}}{{\longrightarrow}}\nu^{\prime}\iff\nu\stackrel{{ d+d^{\prime}}}{{\longrightarrow}}\nu^{\prime}\). Hence, a TA indeed induces a timed LTS. Discrete steps, however, are a source of nondeterminism: a configuration may have several \(a\)-successors induced by different transitions whose guards are satisfied. \(\mathcal{T}\) is _deterministic_ if its induced LTS is deterministic, which is the case iff for every state \(s\), all transitions from \(s\) have mutually exclusive guards. A path \(\rho=(s_{0},\nu_{0})\stackrel{{ l_{1}}}{{\longrightarrow}}(s_{1},\nu_{1})\stackrel{{ l_{2}}}{{\longrightarrow}}(s_{2},\nu_{2})\ldots\) is called _reduced_ if it does not contain consecutive delay steps. It is a _run on_ a timed word \(w\in(\Sigma_{T})^{\infty}\) if \(tword(l_{1}l_{2}\ldots)=w\). The acceptance condition is lifted to the LTS as expected. Namely, a run \(\rho\) is _accepting_ if \(\rho\in\mathit{Acc}\). This way, the _language_ \(L_{T}(s,\nu)\subseteq\Sigma_{T}^{\omega}\) of a configuration \((s,\nu)\) consists of all timed words for which there exists an accepting run from \((s,\nu)\). The language of \(\mathcal{T}\) is \(L_{T}(\mathcal{T})\stackrel{{\mathrm{def}}}{{=}}L_{T}((\iota,0))\), the language of the initial configuration with state \(\iota\) and all clocks set to zero. ### Region Abstraction The following is the standard definition of regions for timed automata (cf. [1], def. 4.3). Let \(\mathcal{T}=(Q,\iota,C,\Delta,\Sigma,Acc)\) be a timed automaton and for any clock \(x\in C\) let \(c_{x}\) denote the largest constant in any clock constraint involving \(x\). Two valuations \(\nu,\nu^{\prime}\in\mathbb{R}_{\geq 0}^{C}\) are _(region) equivalent_ (write \(\nu\sim\nu^{\prime}\)) if all of the following hold. 1. For all \(x\in C\) either \(\lfloor\nu(x)\rfloor=\lfloor\nu^{\prime}(x)\rfloor\) or both \(\nu(x)\) and \(\nu^{\prime}(x)\) are greater than \(c_{x}\). 2. For all \(x,y\in C\) with \(\nu(x)\leq c_{x}\) and \(\nu(y)\leq c_{y}\), \(\mathit{fract}(\nu(x))\leq\mathit{fract}(\nu(y))\) iff \(\mathit{fract}(\nu^{\prime}(x))\leq\mathit{fract}(\nu^{\prime}(y))\). 3. For all \(x\in C\) with \(\nu(x)\leq c_{x}\), \(\mathit{fract}(\nu(x))=0\) iff \(\mathit{fract}(\nu^{\prime}(x))=0\). It follows that there are only finitely many equivalence classes w.r.t. \(\sim\), called regions, for any given TA. Two configurations \((s,\nu)\) and \((s^{\prime},\nu^{\prime})\) are (region) equivalent, write \((s,\nu)\sim(s^{\prime},\nu^{\prime})\), if \(s=s^{\prime}\) and \(\nu\sim\nu^{\prime}\). Two runs are (region) equivalent if they have the same length and stepwise visit region equivalent configurations. 
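A direct transcription of this definition into code could look as follows (illustrative sketch; valuations are assumed to be plain dictionaries from clock names to values, and `c` holds the maximal constants \(c_{x}\)).

```python
import math

def frac(v):
    return v - math.floor(v)

def region_equivalent(nu1, nu2, c):
    """Check conditions 1-3 of region equivalence for two clock valuations
    nu1, nu2 (dicts clock -> value); c[x] is the largest constant compared to x."""
    clocks = list(nu1)
    for x in clocks:
        # 1. equal integral parts, unless both values exceed the maximal constant c_x
        if not (math.floor(nu1[x]) == math.floor(nu2[x])
                or (nu1[x] > c[x] and nu2[x] > c[x])):
            return False
        # 3. fractional part is zero in one valuation iff it is zero in the other
        if nu1[x] <= c[x] and (frac(nu1[x]) == 0) != (frac(nu2[x]) == 0):
            return False
    # 2. identical ordering of fractional parts among clocks below their constants
    rel = [x for x in clocks if nu1[x] <= c[x]]
    for x in rel:
        for y in rel:
            if (frac(nu1[x]) <= frac(nu1[y])) != (frac(nu2[x]) <= frac(nu2[y])):
                return False
    return True

# e.g. (x=0.3, y=1.7) ~ (x=0.4, y=1.9), but not ~ (x=0.9, y=1.1), with c = {x: 2, y: 2}
assert region_equivalent({"x": 0.3, "y": 1.7}, {"x": 0.4, "y": 1.9}, {"x": 2, "y": 2})
assert not region_equivalent({"x": 0.3, "y": 1.7}, {"x": 0.9, "y": 1.1}, {"x": 2, "y": 2})
```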
Let \(\mathit{maxfrac}(\nu)=\max\{\mathit{fract}(\nu(x))\mid x\in C\}\) denote the maximal fractional value of any clock in configuration \(\nu\). We will make use of the following two properties. **Proposition 2.1**.: 1. _For any valuation_ \(\nu\) _and_ \(d\leq 1-\mathit{maxfrac}(\nu)\) _we have_ \(\nu\sim\nu+d\)_._ 2. _Suppose that_ \((p,\nu)\sim(p^{\prime},\nu^{\prime})\) _and let_ \(\rho\in(\Delta\cup\mathbb{R}_{\geq 0})^{*}\) _satisfy_ \(duration(\rho)<1-\mathit{maxfrac}(\nu)\)_,_ \(duration(\rho)<1-\mathit{maxfrac}(\nu^{\prime})\) _and_ \((p,\nu)\stackrel{{\rho}}{{\longrightarrow}}(q,\mu)\)_._ _Then_ \((p^{\prime},\nu^{\prime})\stackrel{{\rho}}{{\longrightarrow}}(q^ {\prime},\mu^{\prime})\) _for some_ \((q^{\prime},\mu^{\prime})\sim(q,\mu)\)_._ Proof sketch.: Part 1 is immediate from the definition of regions. Part 2 can be shown by induction on the length of \(\rho\) using the facts that region-equivalent configurations enable the same discrete transitions and that any delay decreases the duration of the remaining path by the same amount it increases clocks. ## 3. History-determinism Informally, an automaton or LTS is history-deterministic if the non-determinism can be resolved on-the-fly, based only on the history of the word and run so far. We give two equivalent definitions, each being more convenient than the other for some technical developments. **Definition 3.1** (History-determinism).: A fair LTS \(S=(V,\Sigma,E)\) is _history-deterministic_ (from initial state \(s_{0}\in V\)) if there is a _resolver_\(r:E^{*}\times\Sigma\to E\) that maps every finite run and letter \(a\in\Sigma\) to an \(a\)-labelled transition such that, for all words \(w=a_{0}a_{1}\cdots\in L_{T}(s_{0})\) the run \(\rho\) defined inductively for \(i>0\) by \(\rho_{i+1}\stackrel{{\mathrm{def}}}{{=}}\rho_{i}r(\rho_{i},a_{i+1})\), is an accepting run on \(w\) from \(s_{0}\). Equivalently (from [1] for \(\omega\)-regular automata), a resolver corresponds exactly to a winning strategy for Player 2 in the following _letter game_. **Definition 3.2** (Letter game).: The letter game on a fair LTS \(S=(V,\Sigma,E)\) with initial state \(s_{0}\in V\) is played between Players 1 and 2. At turn \(i\): * Player 1 chooses a letter \(a_{i}\in\Sigma\). * Player 2 chooses an \(a_{i}\) labelled edge \(\tau_{i}\in E\). A play is a pair \((w,\rho)\) where \(w=a_{0}a_{1}\ldots\) is an infinite word and \(\rho=\tau_{0}\tau_{1}...\) is a run on \(w\). A play is winning for Player 2 if either \(w\notin L_{T}(s_{0})\) or \(\rho\) is an accepting run on \(w\) from \(s_{0}\). In these and other games we consider, strategies for both players are defined as usual, associating finite histories (runs) to valid player choices. Now winning strategies for Player 2 in the letter game exactly correspond to resolvers for \(S\) and vice-versa. **Proposition 3.3**.: _Player 2 wins the letter game on a fair LTS \(S\) if and only if \(S\) is history-deterministic._ While history-determinism is known to relate to fair simulation, in the sense that history-deterministic automata simulate deterministic ones for the same language [1], their relation has so far not been studied in more details. Below we show that history-determinacy can equivalently be characterised in terms of fair simulation. **Theorem 3.4**.: _For every fair LTS \(S\) and initial state \(q\) the following are equivalent:_ 1. \(S\) _is history-deterministic._ 2. 
_For all complete fair LTS_ \(S^{\prime}\) _with initial state_ \(q^{\prime}\)_,_ \(q^{\prime}\subseteq_{L}q\) _if and only if_ \(q^{\prime}\preceq q\)_._ Proof (1)\(\implies\) (2).: Fair simulation \(q\preceq q^{\prime}\) trivially implies \(q\subseteq_{L}q^{\prime}\) by definition. For the other implication, assume that \(q\subseteq_{L}q^{\prime}\). By assumption (1) there exists a resolver, i.e. a winning strategy in the letter game. Player 2 can win the fair simulation game by ignoring her opponent's configuration and moving according to this resolver. By the completeness assumption on \(S^{\prime}\), Player 1 can never propose a letter for which there is no successor in \(S^{\prime}\). So each player produces an infinite run on the same word \(w\) and the run produced by Player 2 is the same as that produced by the resolver in \(S^{\prime}\). If \(w\in L_{T}(q)\) then it is in \(L_{T}(q^{\prime})\) and Player 2's run accepts. If \(w\notin L_{T}(q)\) then Player 2 wins due to the fairness condition. In both cases she wins the fair simulation game and therefore \(q\preceq q^{\prime}\). **(2)\(\implies\) (1)** If condition (2) holds for all complete fair LTSs then \(q\) can fairly simulate the one consisting of a single state with self-loops for all transitions of \(S\) whose acceptance condition contains exactly all accepting runs from \(q\). Then the strategy for Player 2 in the fair simulation game can be used as a strategy in the letter game. ## 4. Expressivity We consider the expressivity of history-deterministic TA in comparison to deterministic and fully non-deterministic variants, for different accepting conditions. In particular, we prove (in Section 4.1) that for safety and reachability acceptance, HD TA can be determinized, whereas for coBuchi and above they cannot (Section 4.2). Moreover, we show (in Section 4.3) that Parity HD TA are strictly less expressive than fully non-deterministic TA, even those with only a reachability acceptance. ### Safety and Reachability HD TA are determinizable We start by showing that history-deterministic timed automata with safety acceptance are determinizable. To do so, we show (in Lemma 4.3) that these automata have simple resolvers, which only depend on the equivalence class of the current clock configuration with respect to the region abstraction. That is to say, the resolver only needs to know the integer part of clock values (up to the maximal value that appears in clock constraints) and the ordering of their fractional parts. We can then use such a simple resolver to determinize the automaton by adding guards that restrict transitions so that the automaton can only take one transition per region, as dictated by the resolver. **Definition 4.1** (Run-trees).: A _run-tree_ on a timed word \(u=(a_{0},t_{0})(a_{1},t_{1})\ldots\) from TA configuration \((s_{0},\nu_{0})\) is a tree where nodes are labelled by configurations, and edges by transitions such that 1. The labels along every branch form a run on \(u\) from \((s_{0},\nu_{0})\) 2. It is complete wrt. discrete steps: suppose the path leading towards some node is labelled by a run \(\rho\) which reads \(tword(\rho)=(a_{0},t_{0})\ldots(a_{i},t_{i})\), ends in a configuration \((s,\nu)\), and has \(duration(\rho)=t_{i+1}\). Then for every transition \(\tau=(s,g,a_{i+1},r,s^{\prime})\in\Delta\) with \(\nu\models g\) and so that \((s,\nu)\stackrel{{\tau}}{{\longrightarrow}}(s^{\prime},\nu^{ \prime})\), there is a \(\tau\)-labelled edge to a new node labelled by \((s^{\prime},\nu^{\prime})\). 
A run-tree is _reduced_ if all its branches are. That is, there are no consecutive delay steps. Notice that for every initial configuration and timed word, there is a unique reduced run-tree, all of whose branches are runs on the word (since we have no deadlocks), and vice versa, all reduced runs on the word appear as branches on the run-tree. We extend the region equivalence from configurations to run-trees in the natural fashion: two run-trees are equivalent if they are isomorphic and all corresponding configurations are equivalent. That is, they can differ only in fractional clock values and the duration of delays. The following is our key technical lemma. **Lemma 4.2**.: _Consider two region equivalent configurations \((s,\nu)\sim(s^{\prime},\nu^{\prime})\)._ _For every timed word \(u\) there is a timed word \(u^{\prime}\) so that the reduced run-tree on \(u\) from \((s,\nu)\) is equivalent to the reduced run-tree on \(u^{\prime}\) from \((s^{\prime},\nu^{\prime})\)._ Proof.: It suffices to show that for some (not necessarily reduced) run-tree on \(u\) from \((s,\nu)\) there exists some equivalent run-tree from \((s^{\prime},\nu^{\prime})\) as this implies the claim by collapsing all consecutive delay steps and thus producing the reduced tree on both sides. We proceed by stepwise uncovering the run-tree from \((s,\nu)\) for ever longer prefixes of \(u\) and constructing a corresponding equivalent run-tree from \((s^{\prime},\nu^{\prime})\). The intermediate finite trees we build have the property that all branches have the same duration. In each round we extend all current leafs, in both trees, either by 1. all possible non-deterministic successors (for the letter prescribed by the word \(u\)), in case the duration of the branch is already equal to the next time-stamp in \(u\), or 2. one successor configuration due to a delay, which must be _the same on all leafs_. For the second case, the delays used to extend the two trees need not be the same because we only want to preserve region equivalence. Also, the delay chosen for the tree rooted in \((s,\nu)\) need not follow the timestamps in \(u\) but can be shorter, meaning the run-tree may not be reduced. The difficulty lies in systematically choosing the delays to ensure that the two trees remain equivalent, and secondly, that in the limit this procedure generates a run-tree on the whole word \(u\) from \((s,\nu)\). Together this implies the existence of a corresponding word \(u^{\prime}\) and a run-tree from \((s^{\prime},\nu^{\prime})\). **Invariant.** We propose a stronger invariant, namely that the relative orderings of the fractional values _in all leafs are the same on both sides_. To be precise, let's reinterpret a clock valuation as a function \(\nu:C\times\mathbb{N}\to\{\bot\}\cup[0,1)\), that assigns to every clock and possible integral value either a fractional value between \(0\) and \(1\), or \(\bot\) (indicating that the given clock does not have the given integral value). This way for every clock \(x\) there is exactly one \(n\in\mathbb{N}\) with \(\nu(x,n)\neq\bot\) and the image \(\nu(C\times\mathbb{N})\) has at most \(|C|+1\) different elements. For any ordered set \(F=\{\bot<f_{1}<f_{2}<\cdots<f_{l}\}\supseteq\nu(C\times\mathbb{N})\) of fractional values, we can thus represent \(\nu\) as a function \(\hat{\nu}:C\times\mathbb{N}\to\{\bot,1,\ldots l\}\) that, instead of exact fractional clock values only yields their index in \(F\) (and maps \(\bot\mapsto\bot\)). 
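For a concrete, purely illustrative example (the numbers are ours), take \(C=\{x,y\}\) and a valuation with \(\nu(x)=1.3\) and \(\nu(y)=0.7\). Reinterpreted as above, \(\nu(x,1)=0.3\), \(\nu(y,0)=0.7\), and \(\nu\) maps every other pair to \(\bot\). With \(F=\{\bot<0.3<0.7\}\) we get \(\hat{\nu}(x,1)=1\), \(\hat{\nu}(y,0)=2\) and \(\hat{\nu}=\bot\) elsewhere; any valuation with the same integral parts and the same relative order of fractional parts, say \(\nu^{\prime}(x)=1.5\) and \(\nu^{\prime}(y)=0.9\), yields the same index function \(\hat{\nu}^{\prime}=\hat{\nu}\).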
Consider some run-tree with leafs \((q_{1},\nu_{1})(q_{2},\nu_{2})\cdots(q_{l}\nu_{l})\) with combined fractional values \(F=\bigcup_{i=1}^{l}\nu_{i}(C\times\mathbb{N})\), and an equivalent run-tree with leafs \((q^{\prime}_{1},\nu^{\prime}_{1})(q^{\prime}_{2},\nu^{\prime}_{2})\cdots(q^{ \prime}_{l}\nu^{\prime}_{l})\) with combined fractional values \(F^{\prime}=\bigcup_{i=1}^{l}\nu^{\prime}_{i}(C\times\mathbb{N})\). The two trees are _aligned_ if for all \(1\leq i\leq l\), \(\hat{\nu}_{i}=\hat{\nu}^{\prime}_{i}\). Notice that this still allows the two trees to differ on their exact fractional values but now they must agree on the relative order of all contained clocks on leafs, and in particular which ones are maximal and therefore the closest to the next larger integer. We will always select a delay of \(1-\max\{F\}\) and \(1-\max\{F^{\prime}\}\), respectively, in step 2 above. To show the claim we produce the required run-trees starting in \((s,\nu)\sim(s^{\prime}\nu^{\prime})\). These are in particular two aligned run-trees on the empty word. Assume two aligned trees as above, where leafs have fractional values \(F=\{\bot<f_{1}<f_{2}<\cdots<f_{m}\}\) and \(F^{\prime}=\{f^{\prime}_{0}<f^{\prime}_{1}<\cdots<f^{\prime}_{m}\}\), respectively, and assume that the tree rooted in \((s,\nu)\) reads a strict prefix \((a_{0},t_{0}),\ldots(a_{i},t_{i})\) of \(u\). _Case 1:_ the duration of all branches in the first tree equals \(t_{i+1}\), the timestamp of the next symbol in \(u\). Then we extend each leaf in both trees by all possible \(a_{i+1}\)-successors. This will produce two aligned trees because each leaf configuration in one must be region equivalent to the corresponding configuration in the other, and therefore satisfies the same guards, enabling the same \(a_{i+1}\)-transitions leading to equivalent successors. Note also that all branches in each tree still have the same duration, as no delay step was taken. _Case 2:_ the duration of all branches in the first tree is strictly less than \(t_{i+1}\). Then we extend all leafs in the tree from \((s,\nu)\) by a delay of duration \(d=1-f_{m}\) and all leafs in the other tree by a delay of duration \(d^{\prime}=1-f^{\prime}_{m}\). Naturally, this produces exactly one successor for each former leaf. The sets of new fractional values on leafs are \(\bigcup_{i=1}^{m}(\mu+d)(C\times\mathbb{N})=\{\bot<0<f_{1}+d<\cdots<f_{m-1}+d\}\) and for any former leaf \((q,\mu)\) extended by a delay \((q,\mu)\mathop{\longrightarrow}\limits^{d}(q,\mu+d)\), we have \[\hat{\mu}(x,n-1)=m\iff\widehat{(\mu+d)}(x,n)=0 \tag{4.1}\] and \[\hat{\mu}(x,n)=i<m\iff\widehat{(\mu+d)}(x,n)=i+1\leq m \tag{4.2}\] Analogous equivalences hold for the corresponding step \((q,\mu^{\prime})\mathop{\longrightarrow}\limits^{d^{\prime}}(q,\mu^{\prime}+ d^{\prime})\) on the other tree. Notice that the two cases above are exhaustive as again, for all \(x\in C\) there is exactly one \(n\in\mathbb{N}\) with \(\mu(x,n)\neq\bot\). We aim to show that \(\widehat{(\mu+d)}=(\widehat{\mu^{\prime}+d^{\prime}})\). Consider any \(x\in C\) and \(n\in\mathbb{N}\). 
We have that
\[\widehat{(\mu+d)}(x,n)=0\;\stackrel{{(4.1)}}{{\Longleftrightarrow}}\;\hat{\mu}(x,n-1)=m\;\Longleftrightarrow\;\hat{\mu}^{\prime}(x,n-1)=m\;\stackrel{{(4.1)}}{{\Longleftrightarrow}}\;\widehat{(\mu^{\prime}+d^{\prime})}(x,n)=0,\]
where the middle equivalence holds because the two trees are aligned, and similarly
\[\widehat{(\mu+d)}(x,n)=i+1\leq m\;\stackrel{{(4.2)}}{{\Longleftrightarrow}}\;\hat{\mu}(x,n)=i<m\;\Longleftrightarrow\;\hat{\mu}^{\prime}(x,n)=i<m\;\stackrel{{(4.2)}}{{\Longleftrightarrow}}\;\widehat{(\mu^{\prime}+d^{\prime})}(x,n)=i+1\leq m.\]
It follows that \(\widehat{(\mu+d)}=\widehat{(\mu^{\prime}+d^{\prime})}\), which means that the two trees are again aligned, as required.

To see why this procedure produces a run-tree on \(u\) (and an equivalent run-tree on some word \(u^{\prime}\)), observe that there can be at most \(|F|+1\) many consecutive delay extensions according to step 2) before all integral clock values are strictly increased.

We are now ready to show that history-deterministic TA with safety acceptance have simple resolvers based on the region abstraction. We call a resolver \(r\) _region-based_ if it bases its decision only on the current letter and region. That is, if for any letter \(a\in\Sigma\) and any two finite runs \((\iota,0)\stackrel{{\rho}}{{\longrightarrow}}(s,\nu)\) and \((\iota,0)\stackrel{{\rho^{\prime}}}{{\longrightarrow}}(s^{\prime},\nu^{\prime})\) consistent with \(r\) and so that \((s,\nu)\sim(s^{\prime},\nu^{\prime})\), it holds that \(r(\rho,a)=r(\rho^{\prime},a)\).

**Lemma 4.3**.: _Every history-deterministic TA with safety acceptance has a region-based resolver. The same is true for history-deterministic TA over finite words._

Proof.: Let \(r\) be a resolver for a history-deterministic safety TA \(\mathcal{T}\). We now build a resolver that only depends on the region of the current configuration. To do so, we choose a representative configuration within each region, which will determine the choice of the resolver for the whole region: for every region \(R\in[Q\times\mathbb{R}_{\geq 0}^{C}]_{\sim}\), consider the configurations in \(R\) that are reached by at least one \(r\)-consistent run, and mark one of them \(m_{R}\), if at least one exists, along with one \(r\)-consistent run \(\rho_{R}\) leading to the configuration \(m_{R}\). Let \(r^{\prime}\) be the candidate resolver that, when reading a letter \(a\), considers the region \(R\) of the current configuration, and follows what \(r\) does when reading \(a\) after the marked \(r\)-consistent run \(\rho_{R}\). We set \(r^{\prime}(\rho,a)\stackrel{{\mathrm{def}}}{{=}}r(\rho_{R},a)\) where \(R\) is the final region of the prefix-run \(\rho\). Note that \(r^{\prime}\) is well defined since it always follows transitions consistent with some \(r\)-consistent run and can therefore only visit marked regions.

We claim that \(r^{\prime}\) is indeed a resolver. Towards a contradiction, assume that it is not, that is, there is some word \(w\in L_{T}(\mathcal{T})\) for which \(r^{\prime}\) builds a rejecting run. As \(\mathcal{T}\) is a safety automaton, we can consider the last configuration \((s,\nu)\) along this run from which the remaining suffix \(au\) of \(w\) can be accepted. Suppose that \(\rho\) is the prefix of the run built by \(r^{\prime}\) on \(w\), which ends in \((s,\nu)\), and let \(\tau=r^{\prime}(\rho,a)\) be the \(a\)-transition chosen by \(r^{\prime}\).
We know that \(\tau\) leads from \((s,\nu)\) to some configuration \((s^{\prime},\nu^{\prime})\) from where \(u\) is not accepted. By definition of \(r^{\prime}\), there must be a marked configuration \(m_{R}\sim(s,\nu)\) reached by some run \(\rho_{R}\) from which \(r\) chooses the same \(a\)-transition \(\tau\). By Lemma 4.2 there must be a word \(au^{\prime}\) so that the run-tree on \(au\) from \((s,\nu)\) is equivalent to that on \(au^{\prime}\) from \(m_{R}\). This means that \(au^{\prime}\in L_{T}(m_{R})\) and, as \(r\) is a resolver, there must be an accepting run that begins with a step \(m_{R}\stackrel{{\tau}}{{\longrightarrow}}m_{R}^{\prime}\). We derive that \(au\) also has an accepting run from \((s,\nu)\) that begins with \(\tau\), contradicting the assumption that \((s,\nu)\) is the last configuration on the run \(r^{\prime}\) built on \(w\) from which the remaining suffix can be accepted. Therefore, \(r^{\prime}\) is indeed a resolver.

Using a technique of [1] we can show the existence of region-based resolvers also for HD timed reachability automata.

**Lemma 4.4**.: _Every history-deterministic TA with reachability acceptance has a region-based resolver._

Proof.: Given a TA \(\mathcal{T}\) with a reachability condition, we call a configuration \((s,\nu)\) _almost final_ if it is universal (\(L_{T}(s,\nu)=\Sigma_{T}^{\omega}\)) and Player 2 wins the letter game on \(\mathcal{T}\) starting at \((s,\nu)\). In particular, \((s,\nu)\) is almost final if \(s\) is a final state.

We first argue that this is a property of regions, rather than just configurations: if \((s,\nu)\sim(s,\nu^{\prime})\) and \((s,\nu)\) is almost final then also \((s,\nu^{\prime})\) must be almost final. Indeed, first observe that \((s,\nu^{\prime})\) is universal by Lemma 4.2. To see why there must also be a resolver from \((s,\nu^{\prime})\), we observe that the letter game starting from any universal configuration can be turned into an equivalent timed reachability game. This is because, by the universality assumption, Player 1 can only win by keeping her opponent away from accepting configurations forever. The construction uses one extra clock that is used in particular to prevent Player 2 from making delay steps. For such games, both players have region-based winning strategies (see Theorem 5.3). Consequently, Player 2 can use the same region-based strategy in the letter games from \((s,\nu^{\prime})\) and \((s,\nu)\), which exists by our initial assumption.

We now argue that in the letter game for \(\mathcal{T}\), any resolver must reach an almost final configuration as soon as Player 1 has played a word \(u\) for which some run \(\rho\) reaches an almost final configuration. Indeed, all continuations of \(u\) are in \(L_{T}(\mathcal{T})\), so the configuration \((s,\nu)\) reached by the resolver must accept all continuations, and the resolver must represent a winning strategy in the letter game from \((s,\nu)\).

We now turn \(\mathcal{T}\) into a new timed automaton \(\mathcal{T}^{\prime}\) over finite words with a single new accepting state that can be reached (via some new letter \(\$\)) exactly from all almost final configurations. This is well defined because being almost final is a property of regions and can therefore be expressed by a clock constraint. Note that \(\mathcal{T}^{\prime}\) is history-deterministic because a resolver for \(\mathcal{T}\) is also a resolver for \(\mathcal{T}^{\prime}\).
This is because a resolver for \(\mathcal{T}\) yields, on a word \(u\), a run to an almost final configuration whenever one exists. Since \(\mathcal{T}^{\prime}\) is a history-deterministic TA on finite words, there exists a region-based strategy \(\sigma_{1}\) in the letter game for \(\mathcal{T}^{\prime}\) by Lemma 4.3. We combine \(\sigma_{1}\) with a region-based winning strategy \(\sigma_{2}\) in the letter game from almost final configurations in \(\mathcal{T}\). This yields a region-based strategy for Player 2 in the letter game for \(\mathcal{T}\) which plays according to \(\sigma_{1}\) while the current configuration is not almost final, and according to \(\sigma_{2}\) from then onwards. This strategy must be a resolver: if Player 1 provides a word \(u\) such that no finite prefix admits a run ending in an almost final configuration, then \(u\) is not in \(L_{T}(\mathcal{T})\) and Player 2 wins. If, conversely, some prefix of \(u\) admits a run ending in an almost final configuration, then \(\sigma_{1}\) guarantees that Player 2 is in such a configuration after reading this prefix. From there, \(\sigma_{2}\) ensures that she wins whatever Player 1 plays.

We can now use the region-based resolvers to determinize history-deterministic safety and reachability TA.

**Theorem 4.5**.: _Every history-deterministic safety or reachability TA is equivalent to a deterministic TA._

Proof.: Consider a history-deterministic TA \(\mathcal{T}=(Q,\iota,C,\Delta,\Sigma,\mathit{Acc})\) with a region-based resolver \(r\) (as in Lemmas 4.3 and 4.4), and let \(R\) be the region graph of \(\mathcal{T}\). Define \(\mathcal{T}^{\prime}=(Q,\iota,C,\Delta^{\prime},\Sigma,\mathit{Acc})\) where \((q,g\wedge z,a,X,q^{\prime})\in\Delta^{\prime}\) whenever \(z\) is a guard defining a region of \(R\), that is, a guard satisfied exactly by the valuations in that region, and \((q,g,a,X,q^{\prime})\in\Delta\) is the transition chosen by \(r\) in the region defined by \(z\). In other words, \(\mathcal{T}^{\prime}\) is \(\mathcal{T}\) with duplicated transitions guarded so that a transition can only be taken from a region from which \(r\) chooses that transition.

Observe that \(\mathcal{T}^{\prime}\) is deterministic: the guards describing regions are mutually exclusive, and therefore any two transitions from the same state over the same letter have mutually exclusive guards. As every run of \(\mathcal{T}^{\prime}\) corresponds to a run of \(\mathcal{T}\) with added guards, \(L_{T}(\mathcal{T}^{\prime})\subseteq L_{T}(\mathcal{T})\). Conversely, if \(w\in L_{T}(\mathcal{T})\), then its accepting run consistent with \(r\) is also an accepting run in \(\mathcal{T}^{\prime}\), since each transition along this run, being chosen by \(r\), is taken at a configuration that satisfies the additional guards in \(\mathcal{T}^{\prime}\). We can therefore conclude that \(L_{T}(\mathcal{T})=L_{T}(\mathcal{T}^{\prime})\).

While this determinization procedure preserves the state-space of the automaton, it multiplies the number of transitions (or the size of guards) by the size of the region abstraction. Hence, while history-deterministic safety TA are no more expressive than deterministic ones, they could potentially be exponentially more succinct when counting transitions and guards.

### CoBuchi HD TA are not determinizable

We show that history-deterministic timed automata are strictly more expressive than deterministic ones, for coBuchi acceptance and above. For simplicity let's first assume that our definition of timed words/languages excludes Zeno words.
The case where these are allowed is a straightforward adjustment that we comment on at the end of the section. The rest of this subsection is a proof of the following theorem.

**Theorem 4.6**.: _The following timed language \(L\) over the singleton alphabet \(\Sigma=\{a\}\) is recognised by a one-clock history-deterministic co-Büchi automaton yet not by any deterministic Parity timed automaton. In words, \(L\) asks to eventually see events \(a\) at unit distance. Formally,_
\[L\ \stackrel{{\text{def}}}{{=}}\ \left\{(a,t_{0})(a,t_{1})...\mid\exists i\in\mathbb{N}.\quad\forall n\in\mathbb{N}.\quad\exists j>i.\quad t_{j}-t_{i}=n\right\}.\]

#### \(L\) is HD recognisable

We show that \(L\) is recognised by the history-deterministic timed \(\omega\)-automaton in Fig. 1. This automaton has an initial rejecting state \(q\), from where there is a nondeterministic choice to either remain in this state or transition to an accepting state \(q^{\prime}\), which resets the unique clock. There are two transitions to stay in the accepting state: one enabled when the clock value is smaller than 1, and one enabled at clock value 1, which also resets the clock. If the clock value grows larger than 1, the only enabled transition goes back to the initial state. Since this is a co-Büchi automaton, an accepting run must eventually remain in the accepting state.

First, this automaton recognises \(L\): if \(w\in L\) then there is an accepting run that moves to state \(q^{\prime}\) at a time \(t\) such that \((a,t+n)\) occurs in \(w\) for all \(n\in\mathbb{N}\); the run then remains in \(q^{\prime}\), since the clock \(x\) is reset at the occurrence of each event \((a,t+n)\), so the clock value never grows larger than \(1\). Conversely, a word accepted by this automaton has a run that eventually moves to \(q^{\prime}\) at a time \(t\), and then remains in \(q^{\prime}\). For the run to stay in \(q^{\prime}\), it must reset \(x\) at every time-unit after \(t\), so \((a,t+n)\) must occur in the word for all \(n\in\mathbb{N}\), that is, the word is in \(L\).

We now argue that this automaton is also history-deterministic. Given a finite word read so far and a new letter \(a\) at time \(t_{\mathit{new}}\), the resolver identifies the earliest time \(t_{\mathit{early}}\) such that \(a\) has so far occurred at time \(t_{\mathit{early}}+n\) for all integers \(n\) such that \(t_{\mathit{early}}+n\leq t_{\mathit{new}}\). Let \(r\) be the function that maps a run \(\rho\) ending in \(q\) to the transition to \(q^{\prime}\) if \(t_{\mathit{new}}=t_{\mathit{early}}+m\) for some integer \(m\), and otherwise to the only other available transition. We claim that this is indeed a resolver. If \(w\in L\) then there is an earliest time \(t\) such that \((a,t+n)\) occurs in \(w\) for all integers \(n\). Since \(t\) is minimal, eventually the resolver \(r\) will make its choice whether to move to \(q^{\prime}\) over a letter \((a,t_{\mathit{new}})\) based on whether \(t_{\mathit{new}}=t+m\) for some integer \(m\). Since time progresses and \((a,t+n)\) occurs in \(w\) for all integers \(n\), the run will eventually transition to \(q^{\prime}\) at a time \(t+m\) for some \(m\). From there, since \((a,t+n)\) occurs in \(w\) for all integers \(n\), the run over \(w\) remains in \(q^{\prime}\) and is therefore accepting.

It remains to be shown that \(L\) is not recognised by a deterministic timed automaton.

Figure 1. A history-deterministic timed co-Büchi automaton for \(L\). The state \(q^{\prime}\) has priority \(0\), i.e. is accepting, while the state \(q\) has priority \(1\).

Figure 2. Blocks within an interval and ticks within a block.
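Before turning to the lower bound, the following purely illustrative instance (with timestamps of our choosing) may help to see how the automaton of Fig. 1 operates. The word with one event at every time \(0.5+n\), i.e. \((a,0.5)(a,1.5)(a,2.5)\ldots\), is in \(L\): taking \(i=0\), for every \(n\in\mathbb{N}\) the event at time \(0.5+n\) lies at distance exactly \(n\) from the first event. The automaton accepts it by moving to \(q^{\prime}\) at time \(0.5\) and resetting \(x\) at every subsequent event, so that \(x\) never exceeds \(1\). By contrast, the word with events at times \(0.5+1.2n\) for \(n\in\mathbb{N}\) is not in \(L\): no two events are at distance exactly \(1\), so the condition already fails for \(n=1\); correspondingly, whenever a run moves to \(q^{\prime}\), the next event occurs \(1.2\) time units after the last reset, so the run is forced back to \(q\), visits \(q\) infinitely often, and is rejecting.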
**L is not Deterministic Parity recognisable.** Suppose towards a contradiction that \(L\) is recognisable by some deterministic Timed Automaton \(D\) with Parity acceptance. Let \(r\) be the number of its regions. We will construct two words \(w\) and \(w^{\prime}\), one belonging to \(L\) and one not, so that the run of \(D\) on \(w\) is region equivalent to the one on \(w^{\prime}\). The two words can only differ in the timing of events since there is only one letter in the alphabet. Both words will be constructed on the fly, according to the following schema.

Consider the intervals and fractional values in Fig. 2: there are infinitely many disjoint intervals \(b_{i,j}=[s_{i,j},e_{i,j}]\) so that all \(b_{i,1}\) have start and endpoint strictly between \(0\) and \(\frac{1}{3}\) and are increasing, i.e., \(s_{i+1,1}>e_{i,1}\) for all \(i\). Similarly, \(b_{i,2}\subseteq[\frac{1}{3},\frac{2}{3}]\), and \(s_{i+1,2}>e_{i,2}\) for all \(i\). The third sequence of intervals \(b_{i,3}\subseteq[\frac{2}{3},1]\) have start and endpoint strictly between \(\frac{2}{3}\) and \(1\) and are _decreasing_: \(e_{i+1,3}<s_{i,3}\) for all \(i\). Each interval \(b_{i,j}\) contains equi-distant values \(f_{i,j,0},f_{i,j,1},\ldots,f_{i,j,r}\) starting at \(f_{i,j,0}=s_{i,j}\).

We step-wise construct \(w\) (and \(w^{\prime}\)) together with the run of \(D\) on it. In every integral interval from \(i-1\) to \(i\) we place events as follows.

* Start with a delay of \(f_{i,1,1}\), followed by a discrete event \(a\), then a delay of \(f_{i,1,2}-f_{i,1,1}\) followed by \(a\), and so on. This induces a run of \(D\) on the prefix constructed and we continue constructing the prefix until the induced run closes a cycle in the region graph. This implies the existence of times \(f_{i,1,k}\) and \(f_{i,1,k+\ell}\) such that the automaton is in configurations \((s_{i,1,k},\nu_{i,1,k})\) and \((s_{i,1,k+\ell},\nu_{i,1,k+\ell})\) with \((s_{i,1,k+\ell},\nu_{i,1,k+\ell})\sim(s_{i,1,k},\nu_{i,1,k})\). We denote by \(L_{i}\) the run between \(f_{i,1,k}\) and \(f_{i,1,k+\ell}\).
* Now we force the automaton to close the same cycle, but with all events occurring at times in the interval \(b_{i,2}\) (respectively \(b_{1,2}\)) in \(w\) (respectively \(w^{\prime}\)). This can be done by adding a time delay of \(s_{i,2}-f_{i,1,k+\ell}\) in \(w\), followed by an event \(a\) at times \(f_{i,2,\ell^{\prime}}\) for all \(\ell^{\prime}\leq\ell\). We prove this formally in Lemma 4.7.
* Finally we force the automaton to close the same cycle once more, with all times in interval \(b_{i,3}\). This can be done by adding a time delay of \(s_{i,3}-f_{i,2,\ell}\) followed by events at times \(f_{i,3,1},f_{i,3,2},\ldots,f_{i,3,\ell}\). We prove the correctness of the construction in Lemma 4.7.

Consider the cycle \(L_{i}\) in the region graph obtained in the first step above in the interval \([i-1,i]\), between \(f_{i,1,k}\) and \(f_{i,1,k+\ell}\). Note that \(k\) and \(\ell\) depend on \(i\); however, we write \(k\) and \(\ell\) without this index as we only reason about loops within a single integral interval. The duration of the loop, denoted by \(duration(L_{i})\), is \(f_{i,1,k+\ell}-f_{i,1,k}\). An important observation is that \(duration(L_{i})\leq e_{i,j}-s_{i,j}\), as the loop occurs within the interval between \(s_{i,1}\) and \(e_{i,1}\).

**Lemma 4.7**.: _Let \(\nu_{i}\) and \(\nu^{\prime}_{i}\) be the configurations reached by the run of \(D\) at times \(i-1+f_{i,1,k}\) and \(i-1+f_{i,1,k+\ell}\)._
Then \(1-\text{maxfrac}(\nu_{i}+d_{ij})\geq duration(L_{i})\), where \(d_{ij}=s_{i,j}-f_{i,1,k}\) for \(j\in\{2,3\}\)._ _Furthermore, let \(\nu_{ij}\) be the configuration reached by the run of \(D\) at time \(i-1+f_{i,j,1}\), where \(j=\{2,3\}\). The cycle \(L_{i}\) is executable from \(\nu_{ij}\)._ Proof.: We prove this lemma by induction on \(i\). The case \(i=1\) is easy to see since \(\text{maxfrac}(\nu_{1}+d_{1j})\leq s_{1,j}\) and therefore \(1-\text{maxfrac}(\nu_{1}+d_{1j})\geq 1-s_{1,j}\geq e_{1,j}-s_{1,j}\geq duration(L_{1})\). Furthermore, \(\nu_{12}=\nu^{\prime}_{1}+d\), where \(d=s_{1,2}-f_{1,1,k+\ell}\leq 1-f_{1,1,k+\ell}\leq 1-\text{maxfrac}(\nu^{\prime}_{1})\). Therefore, by Proposition 2.1.1, \(\nu_{12}\sim\nu^{\prime}_{1}\sim\nu_{1}\). For \(\nu=\nu_{1}\) and \(\nu=\nu_{12}\), \(1-\text{maxfrac}(\nu)>e_{1,3}-s_{1,3}>duration(L_{1})\) as \(1>e_{1,3}\) and \(\text{maxfrac}(\nu)<s_{1,3}\). By applying Proposition 2.1.2, \(L_{1}\) is executable from \(\nu_{12}\) and ends in a configuration \(\nu^{\prime}_{12}\sim\nu_{12}\). The configuration \(\nu_{13}\) equals \(\nu^{\prime}_{12}+d^{\prime}\), where \(d^{\prime}=s_{1,3}-f_{1,2,\ell}<1-\text{\it maxfrac}(\nu^{\prime}_{12})\) as \(\text{\it maxfrac}(\nu^{\prime}_{12})\leq f_{1,2,\ell}\). Proposition 2.1.1 gives \(\nu_{13}\sim\nu_{12}\), and \(1-\text{\it maxfrac}(\nu_{13})\geq e_{1,3}-s_{1,3}\geq duration(L_{1})\). By Proposition2.1.2, we can conclude that \(L_{1}\) is executable from \(\nu_{13}\). To prove the inductive case, we bound the value of \(\text{\it maxfrac}(\nu_{i}+d_{ij})\) for \(j\in\{2,3\}\). Consider a clock \(x\in C\) and the last time when it was reset. Either it was never reset or the reset occurred at time \(f_{i^{\prime},j^{\prime},k^{\prime}}\). For a clock that is never reset, the fractional part of its value at \(\nu_{i}\) will be \(f_{i,1,k}\). If the clock was last reset within some blue block, i.e, at time \(i^{\prime}-1+f_{i^{\prime},1,k^{\prime}}\), then either \(i^{\prime}<i\) (corresponds to previous blue blocks), or \(k^{\prime}<k\) (corresponds to previous ticks within the current blue block). In both cases, the \(\text{\it fract}(x)=\text{\it fract}(f_{i,1,k}-(i^{\prime}-1+f_{i^{\prime},1,k^{ \prime}}))\leq f_{i,1,k}\). Note that any reset to clock \(x\) in a previous red block must also be reset again in the corresponding green block as the runs in the red and green block are the same by construction. For a clock \(x\) last reset in some previous green block, i.e, at time \(i^{\prime}-1+f_{i^{\prime},3,k^{\prime}}\), \(\text{\it fract}(x)=\text{\it fract}((i-1+f_{i,1,k})-(i^{\prime}-1+f_{i^{\prime},3,k^{\prime}}))=f_{i,1,k}+(1-f_{i^{\prime},3,k^{\prime}})\). Furthermore, \(f_{i^{\prime},3,k^{\prime}}>s_{i-1,3}\) as \(i^{\prime}\leq i\). Therefore, \(\text{\it fract}(x)\leq 1+f_{i,1,k}-s_{i-1,3}+1\) which bounds \(1-\text{\it fract}(x)\geq s_{i-1,3}-f_{i,1,k}\). Combining all the possibilities for clock resets, we obtain \(1-\text{\it maxfrac}(\nu_{i})\geq s_{i-1,3}-f_{i,1,k}\). It is easy to see that for \(j\in\{2,3\}\), \(\text{\it maxfrac}(\nu_{i}+d_{ij})\leq\text{\it maxfrac}(\nu_{i})+d_{ij}\). Therefore, \(1-\text{\it maxfrac}(\nu_{i}+d_{ij})\geq 1-\text{\it maxfrac}(\nu_{i})-d_{ij} \geq 1-(f_{i,1,k}-s_{i-1,3}+1)-(s_{i,j}-f_{i,1,k})\geq s_{i-1,3}-s_{i,j}\geq e _{i,j}-s_{i,2}\). The last step follows from the fact that \(e_{i,j}\leq s_{i-1,3}\) for \(j\in\{2,3\}\). 
Note that the duration of the loop \(L_{i}\) is at most \(e_{i,j}-s_{i,j}\), which completes the proof of the first part of the lemma.

We now show that \(L_{i}\) is executable from \(\nu_{i2}\) and \(\nu_{i3}\). First, \(\nu_{i2}\sim\nu_{i}\) and \(\nu_{i3}\sim\nu_{i2}\) by repeated application of Proposition 2.1.1. This is similar to the argument in the base case. We just showed that \(1-\text{\it maxfrac}(\nu_{i2})>s_{i-1,3}-s_{i,2}>e_{i,2}-s_{i,2}\geq duration(L_{i})\). The same argument holds for \(\nu_{i3}\) as well. Also, \(\text{\it maxfrac}(\nu_{i})\leq 1-s_{i-1,3}+f_{i,1,k}\) and hence \(1-\text{\it maxfrac}(\nu_{i})\geq s_{i-1,3}-f_{i,1,k}>e_{i,3}-s_{i,3}\geq duration(L_{i})\). Therefore, by Proposition 2.1.2, \(L_{i}\) is executable from both \(\nu_{i2}\) and \(\nu_{i3}\).

Notice that the so-constructed word \(w\) is not in \(L\) because all \(b_{i,j}\) are disjoint. The word \(w^{\prime}\) is constructed almost the same way, with the only exception that the first repetition of the cycle is moved not to \(b_{i,2}\) but always to the same interval, \(b_{1,2}\). It is easy to see that Lemma 4.7 can be modified so that \(b_{i,2}\) is replaced everywhere by \(b_{1,2}\). In particular this means that \(w^{\prime}\) contains an event at time \(n+s_{1,2}\) for every \(n\in\mathbb{N}\), and thus must be contained in \(L\). Therefore, \(D\) has an accepting run on \(w^{\prime}\), but the run on \(w^{\prime}\) visits the same sequence of states as the run of \(D\) on \(w\). Therefore, \(D\) must accept \(w\) as well, which is a contradiction, proving that \(L\) is not accepted by any deterministic Timed Automaton with Parity acceptance. This concludes the proof of Theorem 4.6.

**Remark 4.8**.: In case our definition of timed words does allow for Zeno words, the argument above can be adjusted as follows. Instead of \(L\), we consider the language \(L\cup Z\), where \(Z\) denotes the set of all Zeno words. In fact the automaton depicted in Fig. 1 recognises \(L\cup Z\). However, the candidate resolver \(r\) proposed above may wait forever in state \(q\) for the "oldest" fractional timestamp to re-appear while reading a Zeno word. This results in a rejecting run on a word in the language, and so \(r\) is not a resolver. To fix this issue we adjust the automaton to the one depicted in Fig. 3. This is still a co-Buchi timed automaton, where the acceptance condition is on transitions: the edges that change state may only be used finitely often. Clearly this can be translated into a TA with state-based acceptance as before. The recognised language is still \(L\cup Z\), the same as before. Note that runs that stay in \(q\) forever are accepting, which is how Zeno words can be accepted. For this automaton, the candidate resolver \(r\) (and its justification) is the same as above, except that the run built for Zeno continuations while in state \(q\) is now actually accepting. The new automaton is therefore history-deterministic. The proof that \(L\cup Z\) is not recognisable by any Parity DTA remains the same because the words constructed in the proof are non-Zeno.

### HD TA are less expressive than fully non-deterministic TA

We now show that non-deterministic TA are more expressive than history-deterministic ones.

**Theorem 4.9**.: _The following language \(L^{\prime}\) over the singleton alphabet \(\Sigma=\{a\}\) is recognised by a one-clock non-deterministic TA with reachability acceptance but not by any history-deterministic Parity TA. In words, \(L^{\prime}\) asks to see two events \(a\) at unit distance. 
Formally,_
\[L^{\prime}\stackrel{{\text{def}}}{{=}}\{(\sigma_{0},t_{0})(\sigma_{1},t_{1})...\mid\exists i,j\in\mathbb{N}.\quad t_{j}-t_{i}=1\text{ and }\sigma_{i}=a\text{ and }\sigma_{j}=a\}\,.\]

Proof.: The non-deterministic TA shown in Figure 4 accepts the language \(L^{\prime}\) by guessing the position \(i\): upon reading an \(a\), it may reset a clock \(x\) and then check that it sees an \(a\) at distance \(1\). Assume towards a contradiction that there exists a HD TA \(H\), with \(k\) clocks and maximum constant \(c_{x}\) in guards, that recognises \(L^{\prime}\). For all \(i\leq k\) consider the finite word
\[w_{i}=\left(a,\frac{1}{k+1}\right)\cdots\left(a,\frac{k+1}{k+1}\right)\left(a,1+\frac{i}{k+1}\right)\]
that sees \(k+1\) equi-distant events in the interval \([0,1]\) and then repeats the \(i\)th fractional value in the next integral interval. All these \(w_{i}\) are in \(L^{\prime}\) and so the resolver gives an accepting run on all such words. Note that the prefix up to time \(1\) is the same on all \(w_{i}\) and therefore the resolver gives the same run on all of them up to that point. Consider the configuration \(\nu\) reached by the resolver after reading the prefix up to and including the event \((a,1)\). Since \(H\) has \(k\) clocks and there are \(k+1\) events \(a\), there exists a \(j\leq k\) such that \(\nu(x)\neq 1-\frac{j}{k+1}\) holds for all \(x\). That is, either no clock is reset while reading the \(j\)th event, or any clock reset at that time is again reset later. It follows that \(\nu+\frac{j}{k+1}\sim\nu+\frac{j}{k+1}+\left(\frac{1}{2(k+1)}\right)\).

Finally, let's take the word
\[w^{\prime}=\left(a,\frac{1}{k+1}\right)\left(a,\frac{2}{k+1}\right)\cdots\left(a,\frac{k+1}{k+1}\right)\left(a,1+\frac{j}{k+1}+\frac{1}{2(k+1)}\right)\]
Clearly \(w^{\prime}\) is not in \(L^{\prime}\). However, \(H\) must have a run on \(w^{\prime}\) which follows the accepting run of \(H\) on \(w_{j}\). The final step in this run can be executed because the two runs end up in equivalent configurations. A contradiction.

Figure 4. A non-deterministic timed reachability automaton for \(L^{\prime}\).

Figure 3. A history-deterministic timed co-Büchi automaton for \(L\cup Z\) (the double edges are rejecting, all other accepting).

By Theorems 4.6 and 4.9 we thus conclude that the classes of languages accepted by deterministic, history-deterministic and non-deterministic TAs are all different.

## 5. Timed Games and Composition

In this section we consider several games played on (LTSs of) timed automata and how they can be used to decide classical verification problems. We focus on turn-based games, although our techniques can be generalised to concurrent ones. We first look at language inclusion, then synthesis, and finally we consider good-for-games timed automata, that is, automata that preserve the winner when composed with a game, and show that good-for-gameness and history-determinism coincide for both reachability and safety timed automata.

**Definition 5.1** (Timed Games).: A _timed game_ is a turn-based two player game played on the configuration graph of a timed automaton. Formally, it consists of an arena \(\mathcal{G}=(Q,\iota,C,\Delta,\Sigma,L)\), which is a TA except that the set \(Q\) of states is partitioned into those belonging to Player 1 (\(Q_{1}\)) and those belonging to Player 2 (\(Q_{2}\)), and that the acceptance condition \(L\) is a timed language (rather than a set of accepting runs). Configurations are defined as for TA. 
A timed game starts in the initial configuration \((\iota,\vec{0})\) and proceeds so that in each round \(i\), from configuration \(c_{i}\), the owner of the current state first advances time with a delay \(d_{i}\in\mathbb{R}_{\geq 0}\) and follows it up with a discrete step from \(c_{i}+d_{i}\), leading to a successor configuration \(c_{i+1}\) and emitting a letter \(a_{i}\in\Sigma\). An infinite play is winning for Player 2 iff the word \(d_{0}a_{0}d_{1}a_{1}\dots\) produced is in \(L\).

A _timed parity game_ is a timed game where the acceptance condition \(L\) is given by a timed parity automaton. We assume w.l.o.g. that the configuration graph of the game and that of the parity automaton defining \(L\) are the same, so that a timed parity game is simply given as a timed game as above together with a colouring \(\mathit{priority}:Q\to\mathbb{N}\).

**Remark 5.2**.: Notice that the definition of timed games above enforces that delays and discrete letters are emitted alternatingly. In particular, every play uniquely induces an (infinite) timed word. The definition neither explicitly forbids these words to be Zeno (as for instance [1] do) nor requires that only Player 1 advances time. If required, both criteria can be enforced by adding one extra clock and small gadgets to the arena.

We will occasionally use the fact that one can effectively determine the winner of a timed parity game.

**Theorem 5.3** (Theorem 5, [1]).: _Consider a timed parity game with \(m\) states, \(n\) clocks, maximal constant \(k\) and \(c\) priorities._

1. _If Player_ \(i\) _has a winning strategy, then there is also one that is region-based, i.e., one that makes the same choices from all configurations in the same region._
2. _The game can be solved in time_ \(\mathcal{O}((m\cdot n!\cdot 2^{n}\cdot(2k+1)^{n}\cdot m\cdot c)^{c+1})\)_, and therefore in_ ExpTime_._

### Language Inclusion and Fair Simulation Games

The connection between history-determinism and fair simulation, established in Theorem 3.4, allows us to transfer decidability results to history-deterministic TA. Let's first recall that simulation checking is decidable for timed automata using a region construction [10]. That paper precedes the notion of fair simulation (which restricts Player 1 to fair runs) and is thus only applicable to safety conditions. However, the result also holds for the more general parity acceptance (for which each state is assigned an integer priority and where a run is accepting if the highest priority it sees infinitely often is even).

**Theorem 5.4**.: _Checking fair simulation is in_ ExpTime _for parity timed automata._

Proof.: For membership in ExpTime, it suffices to observe that the simulation game can be presented as a timed parity game with suitable bounds, and then to apply Theorem 5.3. Assume w.l.o.g. that both configurations to check are in the same timed automaton \(\mathcal{A}=(Q,\iota,C,\Delta,\Sigma,\mathit{priority})\) with \(c=|\mathit{priority}(Q)|\) many colours. The game is played on the product, with each player moving on their coordinate according to the TA semantics. The arena is bipartite and enforces alternation; Player 1 states are \(Q_{1}=(Q\times Q)\), Player 2 states are \((\Delta\times Q)\). The latter record Player 1's choice of discrete transition \(t\in\Delta\). Player 1 first announces a delay (which affects all clocks, and she remains in control) or the transition she wants. The latter choice resets some new clock and moves to a Player 2 owned state. 
Player 2 then has to move according to a transition carrying the same label as \(t\), and both components are updated along the two chosen transitions. This way, both players jointly construct a timed word that is read in both copies. The winning condition (the parameter \(L\) of the timed parity game) encodes that if Player 1's run is accepting then so is that of Player 2. This can be achieved with a blow-up of the state space that is exponential only in \(c\), the number of colours. Essentially, to implement the implication,

1. increment all colours in Player 1's copy by one to negate the set of accepted runs;
2. interpret the parity colouring(s) as Rabin chain conditions with \(c/2\) many indices (more precisely, if \(c\) is even then both Rabin chain conditions have size \(c/2\) and otherwise, if it is odd, they both have size \(\lceil c/2\rceil\));
3. create a new Rabin acceptance condition for the product that implements the implication: for every index set \((F_{i},I_{i})\) of Player 1, of finitely and infinitely occurring states, respectively, introduce a new Rabin pair \((F_{i}\times Q,I_{i}\times Q)\) and similarly, for every index set \((F^{\prime}_{i},I^{\prime}_{i})\) of Player 2, introduce a new Rabin pair \((Q\times F^{\prime}_{i},Q\times I^{\prime}_{i})\). The size of the resulting index set is \(c\).
4. Finally, turn the whole game with Rabin acceptance back into one with parity acceptance using latest appearance records [11, 1].

This results in a timed parity game with \(m=(|Q|^{2}+|\Delta\times Q|)\cdot c!\) states and \(2c+1\) colours. The claim of ExpTime membership now follows by applying Theorem 5.3, as the obtained complexity can be bounded by \(\mathcal{O}((m\cdot n!\cdot 2^{n}\cdot(2k+1)^{n}\cdot(2c+1))^{2c+2})\). The base in the above expression is dominated by \(n!\cdot c!\) which can be bounded by \(2^{n\cdot c}\) by using Stirling's approximation, which gives an exponential bound on the time complexity.

**Corollary 5.5**.: _Timed language inclusion is decidable and ExpTime-complete for history-deterministic TA. More precisely, given a TA \(S\) with initial state \(q\) and a history-deterministic TA \(S^{\prime}\) with initial state \(q^{\prime}\), checking if \(q\subseteq_{L}q^{\prime}\) holds is ExpTime-complete._

Proof.: As \(S^{\prime}\) is history-deterministic and by Theorem 3.4, we have \(q\subseteq_{L}q^{\prime}\) if, and only if, \(q\preceq q^{\prime}\). The result follows from Theorem 5.4.

A matching lower bound holds even for safety or reachability acceptance, assuming that constants in transition guards are given in binary.

**Lemma 5.6**.: _Checking fair simulation between TA is ExpTime-hard already for reachability or safety acceptance, or over finite words._

Proof.: This can be shown by reduction from _countdown games_ [10], which are two-player games \((Q,T,k)\) given by a finite set \(Q\) of control states, a finite set \(T\subseteq(Q\times\mathbb{N}_{>0}\times Q)\) of transitions, labelled by positive integers, and a target number \(k\in\mathbb{N}\). All numbers are given in binary encoding. The game is played in rounds, each of which starts in a pair \((p,n)\) where \(p\in Q\) and \(n\leq k\), as follows. First Player 1 picks a number \(l\leq k-n\), so that at least one \((p,l,p^{\prime})\in T\) exists; then Player 2 picks one such transition and the next round starts in \((p^{\prime},n+l)\). Player 1 wins iff she can reach a configuration \((q,k)\) for some state \(q\). 
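For a small, purely illustrative instance of such a game (the concrete numbers are ours and not taken from [10]), consider \(Q=\{p\}\), \(T=\{(p,2,p),(p,3,p)\}\) and target \(k=7\). Starting from \((p,0)\), Player 1 can pick \(l=2\), then \(l=2\), then \(l=3\); since both transitions lead back to \(p\), Player 2's choices do not matter, and the play passes through \((p,2)\) and \((p,4)\) before reaching \((p,7)\), so Player 1 wins. With target \(k=1\) instead, Player 1 has no legal first move, since no transition is labelled by a number \(l\leq 1\), and thus never reaches the target.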
Determining the winner in a countdown game is ExpTime-complete [10] and can easily encoded as a simulation game between two TAs \(\mathcal{A}\) and \(\mathcal{B}\) as follows. Let \(\mathcal{A}\) be the TA with no clocks and unrestricted (guards are _True_) self-loops for the two letters \(a\) and \(e\); The idea is that Player 1 proposes \(l\) by waiting that long and then makes a discrete \(a\)-labelled move. Then Player 2, currently in some state \(p\) can update his configuration to mimic that of the countdown game, and punish (by going to a winning sink) if Player 1 cheated or the game should end. To implement this, \(\mathcal{B}\) has two clocks: one to store \(n\) - the total time that passed - and one to store the current \(l\), which is reset in each round. Suppose Player 1 waits for \(l\) units of time and then proposes \(a\). Player 2, currently in some state \(p\) will have * \(a\) and \(e\)-labelled transitions to a winning state with a guard that verifies that there is no transition \((p,l,p^{\prime})\). * \(a\)-labelled transitions to a state \(p^{\prime}\), with a guard that verifies that a some \((p,l,p^{\prime})\in T\) exists, and which resets clock \(x_{2}\). * \(a\), and \(e\)-labelled transitions to a winning state guarded by \(x_{1}>k\). This enables Player 2 to win if the global time has exceeded the target \(k\). The only way that Player 1 can win is by following a winning strategy in the countdown game and by playing the letter \(e\) once \(\mathcal{B}\) is in a configuration \((q,k)\). Player 2 will not be able to respond. ### Composition with Games Implicitly, at the heart of these reductions is the notion of composition: the composition of the game to solve with a history-deterministic automaton for the winning condition yields an equivalent game with a simpler winning condition. We say that an automaton is _good-for-games_ if this composition operation preserves the winner of the game for all games. While history-determinism always implies good-for-gameness, the converse is not necessarily true. While the classes of history-deterministic and good-for-games automata coincide for \(\omega\)-regular automata [1], this is not the case for quantitative automata [1], which can be good-for-games without being history-deterministic. We argue that for reachability and safety timed automata, good-for-gameness and history-determinism coincide. Intuitively, the composition of a timed automaton \(\mathcal{T}\) and a timed game \(\mathcal{G}\) consists of a game in which the two players play on \(\mathcal{G}\) while Player 2 must also build, letter by letter, a run of \(\mathcal{T}\) on the outcome of the game in \(\mathcal{G}\). **Definition 5.7** (Composition).: Given a TA \(\mathcal{T}\) and a timed game \(\mathcal{G}\) with winning condition \(L_{T}(\mathcal{T})\), the composition \(\mathcal{T}\circ\mathcal{G}\) consists of a game played on the product of the configuration spaces of \(\mathcal{T}\) and \(\mathcal{G}\), starting from the initial state of both, in which, at each turn \(i\), from a configuration \((c_{i},c^{\prime}_{i})\), Player 1 plays a time delay \(d_{i}\in\mathbb{R}_{\geq 0}\) advancing all clocks on both sides, the owner of the current \(\mathcal{G}\)-state chooses a move in \(\mathcal{G}\) to a successor-configuration \(c_{i+1}\), producing a letter \(a_{i}\), and then Player 2 chooses a transition over \(a_{i}\) enabled at the current \(\mathcal{T}\)-configuration \(c^{\prime}_{i}\), leading to a successor-configuration \(c^{\prime}_{i+1}\). 
The game then proceeds from \((c_{i+1},c^{\prime}_{i+1})\). Player 2 wins infinite plays if the run built in \(\mathcal{T}\) is accepting, and loses if it is rejecting or if she cannot move in the \(\mathcal{G}\)-component. Observe that if Player 1 wins in \(\mathcal{G}\), then he also wins in \(\mathcal{T}\circ\mathcal{G}\) with a strategy that produces a word not in \(L_{T}(\mathcal{T})\) in \(\mathcal{G}\), as then Player 2 can not produce an accepting run in \(\mathcal{T}\). Boker and Lehtinen [1, Lemma 7] show that for (quantitative) automata for which the letter-game is determined, (threshold) history-determinism coincides with good-for-gameness. The lemma is stated for quantitative automata, where thresholds are relevant; in the Boolean setting, it simply states that the determinacy of the letter game implies the equivalence of history-determinism and good-for-gameness. In our timed setting, a similar argument, combined with the determinacy of the letter game for safety and reachability TA, gives us the following. **Theorem 5.8**.: _Let \(\mathcal{T}\) be a safety or reachability TA. The following are equivalent:_ 1. \(\mathcal{T}\) _is history-deterministic._ 2. _For all timed games_ \(\mathcal{G}\) _with winning condition_ \(L_{T}(\mathcal{T})\)_, whenever Player 2 wins_ \(\mathcal{G}\)_, she also wins_ \(\mathcal{T}\circ\mathcal{G}\)_._ Proof.: \((1)\Longrightarrow(2)\) If \(\mathcal{T}\) is history-deterministic, the resolver can be used as a strategy in the \(\mathcal{T}\) component of \(\mathcal{T}\circ\mathcal{G}\). When combined with a winning strategy in \(\mathcal{G}\) that guarantees that the \(\mathcal{G}\)-component produces a word in \(L_{T}(\mathcal{T})\), the resolver guarantees that the \(\mathcal{T}\)-component produces an accepting run, thus giving the victory to Player 2. \((2)\Longrightarrow(1)\) Towards a contradiction, assume \(\mathcal{T}\) is not history-deterministic, that is, by determinacy of the letter game from Remark 6.2, that Player 1 has a winning strategy \(\sigma\) in the letter game. Now consider the game \(\mathcal{G}_{\sigma}\), without clocks or guards, in which positions, all belonging to Player 1, consist of the prefixes of timed words played by \(\sigma\), with moves \(w\xrightarrow{(t,a)}w(t,a)\). As \(\sigma\) is winning for Player 1, all maximal paths in \(\mathcal{G}_{\sigma}\) are labelled by a timed word in \(L_{T}(\mathcal{T})\), so \(\mathcal{G}_{\sigma}\) is winning for Player 2. We now argue that Player 1 wins \(\mathcal{T}\circ\mathcal{G}_{\sigma}\) by interpreting Player 2's moves in the \(\mathcal{T}\) component as her moves in the letter game, and choosing moves in \(\mathcal{G}\) mimicking the letter dictated by \(\sigma\). Then, if Player 2 could win against this strategy in \(\mathcal{T}\circ\mathcal{G}_{\sigma}\), she could also win against \(\sigma\) in the letter game by interpreting Player 1's choices of letters as moves in \(\mathcal{G}\), and responding with the same transition as she plays in the \(\mathcal{T}\) component of \(\mathcal{T}\circ\mathcal{G}_{\sigma}\). Such a strategy is a valid strategy in the letter game on \(\mathcal{T}\), and while it might not be winning in general, it is winning against \(\sigma\), contradicting that \(\sigma\) is a winning strategy for Player 1. This proof fails for acceptance conditions beyond safety and reachability, as it isn't clear whether timed Buchi and coBuchi automata define Borel sets. 
If this was the case then history-deterministic timed automata would be exactly those that preserve winners in composition with games, as is the case in the \(\omega\)-regular setting. ## 6. Deciding History-determinism Recall the letter game characterisation of history-determinism: Player 1 plays timed letters and Player 2 responds with transitions. Player 2 wins if either the word is not in the language of the automaton, or her run is accepting. As TA are not closed under complement, it isn't clear how to solve this game directly. Bagnol and Kuperberg [1] introduced _token games_, which are easier to solve, but which coincide with the letter game for various types of automata[1, 1, 1]. In this section we show that the simplest token game, called the one-token game, characterises history-determinism for safety and reachability LTSs, and therefore TAs. The case of reachability automata was already shown in [1], but for self-containment we include the proof here, generalised to fair LTSs. This strenghens and simplifies the result of the conference version [10], which used the two-token game to characterise the history-determinism of reachability automata. **Definition 6.1** (1-token game [1]).: Given a fair LTS \(S=(V,\Sigma,E)\) with initial state \(s_{0}\in V\), the game \(G_{1}(S)\) proceeds in rounds. At each round \(i\): * Player 1 plays a letter \(a_{i}\in\Sigma\) * Player 2 plays a transition \(\tau_{i}\) in \(S\) * Player 1 plays a transitions \(\tau^{\prime}_{i}\) in \(S\) This way, Player 1 chooses an infinite word \(w=a_{0}a_{1}\dots\) and a run \(\rho^{\prime}=\tau^{\prime}_{0}\tau^{\prime}_{1}\tau^{\prime}_{2}\dots\), and Player 2 chooses a run \(\rho=\tau_{0}\tau_{1}\dots\). The play is winning for Player 1 if \(\rho^{\prime}\) is an accepting run over \(t_{0}a_{0}...\) from \(s_{0}\) but \(\rho\) is not. Else it is winning for Player 2. Given a TA \(\mathcal{T}\), we write \(G_{1}(\mathcal{T})\) to mean the 1-token game on the fair LTS induced by \(\mathcal{T}\). **Remark 6.2**.: \(G_{1}(S)\) and the letter game are determined for any fair LTS \(S\) with any Borel-definable acceptance condition [11]. In particular, the letter game is determined for both safety and reachability TA \(\mathcal{T}\). Indeed, the winning condition for Player 2 is a disjunction of the complement of \(L_{T}(\mathcal{T})\) and of the acceptance condition of \(\mathcal{T}\). Then, as long as \(L_{T}(\mathcal{T})\) is Borel, by the closure of Borel sets under complementation and disjunction, the letter-game is Borel, and therefore determined, following Martin's Theorem [11]. If time is not required to diverge, then reachability timed languages and safety timed languages are clearly Borel. Since words in which time diverges are also Borel (they can be seen as the countable intersection of words where time reaches each unit time), this remains the case when we require divergence. \(G_{1}(S)\) was shown to characterise history-determinism for a number of quantitative automata in [1]. In Lemma 6.3 below we show, using similar proof techniques, that this is also the case for all safety LTSs. The key observation is that for Player 2 to win the letter game, it suffices that she avoids mistakes. We then show that a winning strategy for her in \(G_{1}(S)\) can be used to build such a strategy. 
**Lemma 6.3**.: _Given a fair LTS \(S\) with a safety acceptance condition, or interpreted over finite words, Player 2 wins \(G_{1}(S)\) if and only if \(S\) is history-deterministic._ Proof.: If \(S\) is history-deterministic then Player 2 wins \(G_{1}(S)\) by using the resolver to choose her transitions. This guarantees that for all words in \(L(S)\) played by Player 1, her run is accepting, which makes her victorious regardless of Player 1's run. For the converse, if Player 2 wins \(G_{1}(S)\), consider the following family of _copycat strategies_ for Player 1: at first, Player 1 plays \(\sigma\) and chooses the same transitions as Player 2; if, eventually, Player 2 chooses a transition \(\tau\) from a configuration \(c\) that is not language-maximal, that is, moves to a configuration \(c^{\prime}\) that does no accept some word \(w\) that is accepted by some other configuration \(c^{\prime\prime}\) reachable by some other transition \(\tau^{\prime}\) from \(c\), we call such a move non-cautious, and Player 1 stops copying Player 2 and instead chooses \(\tau^{\prime}\). From there, Player 1 wins by playing \(w\) and an accepting run on \(w\) from \(c^{\prime\prime}\). Since Player 2 wins \(G_{1}(S)\), her winning strategy \(\sigma\) does not play any non-cautious moves against copycat strategies. Then, she can use \(\sigma\) in the letter-game, by playing as \(\sigma\) would play in \(G_{1}(S)\) if Player 1 copies her transitions. This guarantees that she never makes a non-cautious move, and, in particular, for \(S\) with a safety acceptance condition, never moves out of the safe region of the automaton unless the prefix played by Player 1 has no continuations in \(L(S)\). This is a winning strategy in the letter-game, so \(S\) is history-deterministic. Similarly, for \(S\) interpreted over finite words, the letter game is a safety game, and a strategy that never makes a non-cautious move remains in the safe region of this game, and is therefore winning. This argument does not work for reachability TA: it is no longer enough for Player 2 to avoid bad moves to win; she needs to also guarantee that she will actually reach a final state. Given a fair LTS \(S\) with a reachability acceptance condition, we call a state \(q\)_almost final_ if, from \(q\), there is an accepting run on all words and Player 2 wins the letter game starting from \(S^{q}\). In particular, every final state is also almost final. **Lemma 6.4**.: _Given an \(S\) with a reachability acceptance condition, let \(S^{\prime}\) be \(S\) where every almost final state of \(S\) is made final in \(S\). Then:_ 1. _If Player_ \(2\) _wins_ \(G_{1}(S)\) _on infinite words, then Player_ \(2\) _also wins_ \(G_{1}(S^{\prime})\) _on finite words;_ 2. _If_ \(S^{\prime}\) _on finite words is history-deterministic, then so is_ \(S\) _on infinite words._ Proof.: (1) Let \(\sigma\) be winning strategy for Player 2 in \(G_{1}(S)\) on infinite words. Let the strategy \(\sigma^{\prime}\) for Player 2 on \(G_{1}(S^{\prime})\) on finite words simply copy \(\sigma\). If Player 2's run in a play of \(G_{1}(S^{\prime})\) that agrees with \(\sigma^{\prime}\) reaches a state that was almost final in \(S\), and therefore final in \(S^{\prime}\), it is winning for Player 2. If Player 2's run in a play of \(G_{1}(S^{\prime})\) that agrees with \(\sigma^{\prime}\) does not reach such a state, we argue that Player 1's run also does not reach such a state. 
Indeed, towards a contradiction, assume that after some finite play \(\pi\), Player 2's run has reached a state \(q_{2}\) that is not final, while Player 1's run has reaches a state \(q_{1}\) that is final. Then, \(\pi\) is also a play prefix that agrees with \(\sigma\) in \(G_{1}(S)\). We argue that Player 1 can then win in \(G_{1}(S)\), a contradiction: since \(q_{2}\) is not almost final, he can play letters so that Player 2 does not build an accepting run; meanwhile, from \(q_{1}\), he can use the strategy witnessing that it is almost final to build an accepting run on any word. (2) Let \(\sigma^{\prime}\) be Player 2's winning strategy in the letter-game on \(S^{\prime}\) on finite words. In the letter-game on \(S\) on infinite words, Player 2 follows \(\sigma^{\prime}\) until the prefix of the word is in \(L(S^{\prime})\), at which point her run must have reached an almost final state of \(S\), due to \(\sigma^{\prime}\) being winning. From there, she can play the strategy witnessing that the state is almost final, and win. If Player 1 never plays a prefix in \(L(S^{\prime})\) then the word he plays is not in \(L(S)\), and Player 2 wins. **Theorem 6.5**.: _Given a fair LTS \(S\) with a reachability acceptance condition, Player 2 wins \(G_{1}(S)\) if and only if \(S\) is history-deterministic._ Proof.: One direction is immediate since if Player 2 wins in the letter game, she can use the same strategy to win in \(G_{1}(S)\) by ignoring Player 1's run. For the other direction, if Player 2 wins \(G_{1}(S)\), she also wins \(G_{1}(S^{\prime})\) on finite words, for \(S^{\prime}\) defined as in Lemma 6.4 above. Then, \(S^{\prime}\) is history-deterministic from Lemma 6.3, and, again from Lemma 6.4, \(S\) is also history-deterministic. We now consider the problem of deciding whether a given safety or reachability TA is history-deterministic. We use the observation that the \(k\)-token games played on LTSs induced by TA can be expressed as a timed parity game from [1] played on the \((k+1)\)-fold product. **Theorem 6.6**.: _Given a safety or reachability TA, deciding whether it is history-deterministic is decidable in ExpTime._ Proof.: From Lemma 6.3 and Lemma 6.4, deciding the history-determinism of a safety or reachability TA \(\mathcal{T}\) reduces to solving \(G_{1}(\mathcal{T})\), which is a timed game played on the product of two copies of \(\mathcal{T}\) (plus some intermediate states to encode the interaction in each round). The winning condition consists of a Boolean combination of safety or reachability conditions. In the same way as done in the proof of Theorem 5.4, we can implement \(G_{1}(\mathcal{T})\) as a timed parity game where the set states grows exponentially only in the number of colours in the parity condition. The resulting timed parity game can be solved in time exponential in the number of clocks \(c\) (see Theorem 5.3). As explained in the introduction, this also solves the good-enough synthesis problem of deterministic safety and reachability TA. ## 7. Synthesis We show that as is the case in the regular [10], pushdown [11], cost function [15], and quantitative [1] settings, synthesis games with winning conditions given by history-deterministic TA are no harder to solve than those with for winning condition given by deterministic TA. **Definition 7.1** (Timed synthesis game).: Given a timed language \(L\subseteq(\Sigma_{I}\times\Sigma_{O})_{T}^{\omega}\), the synthesis game for \(L\) proceeds as follows. 
At turn \(i\): * Player 1 plays a delay \(d_{i}\) and a letter \(a_{i}\in\Sigma_{I}\) * Player 2 plays a letter \(b_{i}\in\Sigma_{O}\). Player 2 wins if \(d_{0}\binom{a_{0}}{b_{0}}d_{1}\binom{a_{1}}{b_{1}}...\in L\) or if time does not progress. If Player 2 has a winning strategy in the synthesis game, we say that \(L\) is _realisable_. **Theorem 7.2**.: _Given a history-deterministic timed parity automaton \(\mathcal{T}\), the synthesis game for \(L_{T}(\mathcal{T})\) is decidable and ExpTime-complete._ The proof below follows a similar reduction to one in [10], in which the nondeterminism of the automaton is moved into Player 2's output alphabet, forcing her to simultaneously build a word in the winning condition and an accepting run witnessing this. Since accepting runs are recognised by deterministic automata, this reduces the problem to the synthesis problem for deterministic timed automata. The lower bound follows from the ExpTime-completeness of synthesis for deterministic TA [1]. The ExpTime decidability of universality for history-deterministic TA follows both from the decidability of language inclusion in the previous section and from the decidability of synthesis: the universality of \(\mathcal{T}\) reduces to deciding the winner of the synthesis game over \(\{\binom{w}{w}\mid w\in L_{T}(\mathcal{T})\}\), recognised by a history-deterministic TA if \(\mathcal{T}\) is history-deterministic. Proof of Theorem 7.2.: For the upper bound, we reduce the problem to solving synthesis games for deterministic timed parity automata, which is in ExpTime[1]. Let \(\mathcal{T}=(S,\iota,C,\Delta,\Sigma,Acc)\) be a timed automaton. Let \(\mathcal{T}^{\prime}\) be the deterministic timed automaton \((S,\iota,C,\Delta^{\prime},\Sigma\times\Delta,Acc)\) where: \[\Delta^{\prime}=\{(s,g,(\sigma,(s,g,\sigma,c,s^{\prime})),c,s^{\prime})|(s,g, \sigma,c,s^{\prime})\in\Delta\}\] In other words, \(\mathcal{T}^{\prime}\) is a deterministic automaton with the state space of \(\mathcal{T}\), over the alphabet \(\Sigma\times\Delta\), where the transition in the input letter dictates the transition in the automaton. The language of \(\mathcal{T}^{\prime}\) is the set of words \((w,\rho)\) such that there is an accepting run of \(\mathcal{T}\) over \(w\) along the transitions of \(\rho\). We now claim that given a history-deterministic automaton \(\mathcal{T}\) with resolver \(r\), Player 2 wins the synthesis game on \(\mathcal{T}\) if and only if she wins it on \(\mathcal{T}^{\prime}\). First assume that Player 2 wins the synthesis game for \(\mathcal{T}\) with a strategy \(s\). Then, to win the synthesis game for \(\mathcal{T}^{\prime}\), at each turn \(i\), after Player 1 plays \(d_{i}\) and \(a_{i}\), she needs to make two choices: she must choose both a response letter \(b_{i}\) and a transition in \(\mathcal{T}\) over \((a_{i},b_{i})\). Given Player 1's move and the (first component of the) word built so far, she can use the strategy \(s\) to choose the response letter \(b_{i}\); this guarantees that the first component of the play is a word accepted by \(\mathcal{T}\). To choose the transition of \(\mathcal{T}\), she can use the resolver \(r\): given the run \(\rho\) built from the delays (including \(d_{i}\)) and transitions played so far, she plays \(r(\rho,(a_{i},b_{i}))\). Since \(r\) is a resolver, this strategy guarantees that the resulting run is accepting, and hence that she wins the synthesis game on \(\mathcal{T}^{\prime}\). 
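As an illustrative aside (not part of the original proof), the re-labelling that turns \(\mathcal{T}\) into \(\mathcal{T}^{\prime}\) is purely syntactic and can be sketched in a few lines of Python; the tuple encoding of transitions is an assumption made only for this sketch.

```python
# Minimal sketch of the alphabet-extension construction: each transition of T
# becomes a transition of T' whose letter is the pair (original letter, transition),
# so that reading a letter of T' fully determines the transition to take.
def extend_alphabet(delta):
    """delta: set of transitions (state, guard, letter, resets, target) of T."""
    delta_prime = set()
    for t in delta:
        s, g, sigma, resets, s_next = t
        delta_prime.add((s, g, (sigma, t), resets, s_next))
    return delta_prime
```

A word over \(\Sigma\times\Delta\) is then accepted by \(\mathcal{T}^{\prime}\) exactly when its second components form an accepting run of \(\mathcal{T}\) on its first components, which is what the reduction above relies on.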
On the other hand, if Player 1 wins the synthesis game on \(\mathcal{T}\), he has a strategy \(s\) which guarantees a play \(w\in(\Sigma_{I}\times\Sigma_{O})_{T}^{\omega}\) that is not in the language of \(\mathcal{T}\). He can use the same strategy in the synthesis game of \(\mathcal{T}^{\prime}\) to guarantee a play \((w,\rho)\) such that \(w\) is not in the language of \(\mathcal{T}\), and by extension \((w,\rho)\) is not in the language of \(\mathcal{T}^{\prime}\), as there are no accepting runs over \(w\) in \(\mathcal{T}\). The lower bound follows from the ExpTime-completeness of synthesis for deterministic TA [1]. ## 8. Conclusion We introduced history-determinism for timed automata and showed that several natural decision problems - timed language inclusion, universality and synthesis - which are undecidable in general, remain decidable for such automata. We proved that for the important subclasses of timed safety and timed reachability automata, history-determinism can be checked (and therefore good-enough synthesis of deterministic reachability and safety automata can be solved) in exponential time. We also showed that every history-deterministic timed safety or reachability automaton can be determinised, based on pruning the region automaton, and therefore these classes of automata are strictly less expressive than their fully non-deterministic counterparts. We further established that, in terms of recognising languages over infinite timed words, HD timed automata are strictly in between fully deterministic and the more general non-deterministic TA. More precisely, we presented a history-deterministic one-clock coBüchi timed automaton whose language is not recognised by any \(k\)-clock deterministic parity TA, and a non-deterministic 1-clock reachability TA whose language is not recognised by any HD \(k\)-clock parity TA. We leave open the comparative expressiveness of history-deterministic Büchi timed automata: are they determinisable? Already in the untimed case [1], positional resolvers for HD Büchi automata may not exist, but a finite amount of memory suffices. We conjecture that HD Büchi timed automata can still be determinised via resolvers that are based on regions and a finite amount of additional memory. Let us conclude with another conjecture. We showed that history-deterministic timed automata are "good" for solving turn-based timed games, where in each turn of the game, one of the two players chooses a time delay or an action. A more general, concurrent setting for timed games is presented in [1]. In their concurrent version, both players simultaneously choose permissible pairs of time delays and actions, and the player who has picked the shorter time delay gets to move. While concurrent games may not be determined, we conjecture that these concurrent timed games can again be solved by composing the (timed) arena with the (timed) winning condition, as long as the winning condition is history-deterministic.
2305.05803
Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation
Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation. Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels and use them to train a fully supervised semantic segmentation model. Although these pseudo-labels are class-aware, indicating the coarse regions for particular classes, they are not object-aware and fail to delineate accurate object boundaries. To address this, we introduce a simple yet effective method harnessing the Segment Anything Model (SAM), a class-agnostic foundation model capable of producing fine-grained instance masks of objects, parts, and subparts. We use CAM pseudo-labels as cues to select and combine SAM masks, resulting in high-quality pseudo-labels that are both class-aware and object-aware. Our approach is highly versatile and can be easily integrated into existing WSSS methods without any modification. Despite its simplicity, our approach shows consistent gain over the state-of-the-art WSSS methods on both PASCAL VOC and MS-COCO datasets.
Tianle Chen, Zheda Mai, Ruiwen Li, Wei-lun Chao
2023-05-09T23:24:09Z
http://arxiv.org/abs/2305.05803v4
# Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation ###### Abstract Weakly Supervised Semantic Segmentation (WSSS) with only image-level supervision has garnered increasing attention due to its low annotation cost compared to pixel-level annotation. Most existing methods rely on Class Activation Maps (CAM) to generate pixel-level pseudo labels for supervised training. However, it is well known that CAM often suffers from partial activation -- activating the most discriminative part instead of the entire object area, and false activation -- unnecessarily activating the background around the object. In this study, we introduce a simple yet effective approach to address these limitations by harnessing the recently released Segment Anything Model (SAM) to generate higher-quality pseudo labels with CAM. SAM is a segmentation foundation model that demonstrates strong zero-shot ability in partitioning images into semantically meaningful segments but lacks semantic labels for these regions. To circumvent this, we employ the initial seed mask from CAM or the post-processed pseudo labels for a specific class as the signal to select the most relevant masks and label them to generate the refined pseudo labels for this class. The segments generated by SAM are highly precise, leading to substantial improvements in both partial activation and false activation. Moreover, existing post-processing modules in WSSS for producing pseudo labels, such as AffinityNet, are often computationally heavy, with a significantly long training time. Surprisingly, we discovered that using the initial CAM in conjunction with SAM can achieve on-par performance as the post-processed pseudo label generated from these modules with much less computational cost. Our approach is highly versatile, and capable of seamless integration into existing WSSS models without modification to base networks or pipelines. Despite its simplicity, our approach improves the mean Intersection over Union (mIoU) of pseudo labels from five state-of-the-art WSSS methods by 6.2% on average on the train set of PASCAL VOC 2012 dataset. Our code is available at [https://github.com/cskyl/SAM_WSSS](https://github.com/cskyl/SAM_WSSS). ## 1 Introduction Semantic segmentation, a task aimed at assigning a semantic label to each pixel in an image[28], has found wide applications in various fields, such as medical imaging[cite], remote sensing[cite], autonomous driving[cite], and facial recognition[cite]. The success of deep learning techniques combined with the availability of large-scale pixel-level annotations has greatly boosted the performance of semantic segmentation in recent years. However, acquiring pixel-level annotations is a daunting task due to its laborious and time-consuming nature. As an alternative, Weakly Supervised Semantic Segmentation (WSSS) aims to train a segmentation model with cheaper yet weaker annotations such as bounding boxes[29; 23; 34; 35], scribbles, points[19; 5], or image-level class labels[22; 43; 16; 37]. Among various WSSS approaches, image-level WSSS has gained widespread popularity due to the abundance of image-level annotations in various vision datasets[31; 27] and the availability of strong pre-trained classification models[15; 32]. Image-level WSSS presents a significant challenge due to the lack of precise object location information in image-level labels, which are solely indicative of the presence of object categories. 
To address this challenge, most of the current methods rely on Class Activation Mapping (CAM)[44] to provide location cues for the target objects. These approaches typically follow a three-stage learning process. Initially, they train a classification network with image-level labels, from which CAMs are generated to provide a coarse estimation of the target objects' location. Subsequently, the initial CAMs are refined with various post-processing techniques, such as pixel affinity-based methods[2; 26] or additional saliency maps [24; 6; 41] to create pseudo labels. Finally, a fully supervised semantic segmentation network [9; 8] is trained using the pseudo labels as ground-truth annotations. The efficacy of WSSS greatly relies on the accuracy of pseudo labels. Nevertheless, it is widely recognized that the CAM-derived pseudo labels frequently suffer from partial activation [3; 36] -- activating the most discriminative part instead of the entire object area, and false activation [39; 17] -- unnecessarily activating the background around the object. Examples of partial and false activation are shown in Figure 1. Despite significant research efforts dedicated to improving the quality of pseudo labels, refining them in an efficient and effective manner remains an open and challenging research problem. The advent of the recently released segmentation foundation models has opened up promising avenues for addressing this crucial problem for WSSS. Among these models, the Segment Anything Model (SAM) stands out as the most well-known one, which is trained on over 11 million images with one billion masks (SA-1B). The richness of this training data enables SAM to perform zero-shot segmentation on unseen images based on various prompts such as bounding boxes, points and texts. Figure 1: Demonstration of how SAM addresses partial and false activation on PASCAL VOC 2012 train set. (A) Original images (B) Pseudo labels generated by a state-of-the-art method (CLIMS) (C) Segments from SAM (D) SAM enhanced pseudo labels (E) Ground truth labels. As mentioned above, CAM-derived pseudo labels can roughly show the localization of an object for a specific class but lack an accurate boundary for this object. On the contrary, SAM is able to precisely segment most parts or objects without knowing their class labels. As a result, SAM can be a strong supplementary tool to improve the quality of pseudo labels in WSSS. To this end, we propose SAM Enhanced Pseudo Labels (SEPL) as an initial attempt to leverage SAM for pseudo label enhancement. Specifically, we use CAM-derived pseudo labels as the seed signals to select the most relevant segments from SAM, which are then labelled to generate new pseudo labels for the class. The segments generated by SAM exhibit high precision, leading to substantial improvements in both partial and false activation issues in existing pseudo labels. In addition, conventional WSSS methods typically incorporate post-processing modules, such as AffinityNet, to refine the initial CAM. However, the training of these modules incurs high computational costs and prolonged training times, thus impeding the widespread application of WSSS on large datasets. Intriguingly, our investigation revealed that by directly utilizing raw CAM with SAM, we achieved comparable performance to that of post-processed pseudo labels enhanced by SAM. This finding suggests that SAM can be used as a substitute for post-processing modules, resulting in a marked acceleration of the entire WSSS training pipeline. 
SEPL exhibits remarkable versatility, as it can be seamlessly integrated into existing WSSS methods without requiring any modification of the original methods. Despite its simplicity, SEPL achieves a considerable improvement in the mean Intersection over Union (mIoU) of pseudo labels compared to five state-of-the-art WSSS methods, with an average increase of 6.2% on the train set of PASCAL VOC 2012 dataset[13]. As far as we are aware, this is the first study to investigate the potential of SAM in the context of WSSS. We hope this work will pave the way for applying segmentation foundation models in diverse computer vision applications. ## 2 Related Work ### Weakly Supervised Semantic Segmentation (WSSS) Recent approaches in weakly supervised semantic segmentation (WSSS) often rely on Class Activation Maps (CAM) [45] to generate pixel-level pseudo labels. These pseudo labels are then used to train the segmentation model in a fully supervised manner. However, CAM often exhibits a bias towards the most discriminative regions of the target object which limits the quality of the pseudo labels. To overcome this challenge, recent works mainly focus on generating high-quality CAMs with integral activation on the entire object regions. Early-stage works [33; 38; 25] encourage the network to discover less activated object parts via adversarial erasing. In addition to the classification loss typically used in the WSSS framework, specific loss functions such as SEC loss [21], equivariance regularization [36], and contrastive loss [18; 12] have been exploited in narrowing the gap between the pixel-level and image-level supervisions. Some works also introduce network modules to address the partial activation problem of CAM: SEAM [36] leverages pixel-level semantic affinities with a pixel correlation module; CIAN [14] exploits the additional information from related images with a cross-image affinity module. Recent methods based on Vision Transformer [11] including [26; 40] aim to uncover more comprehensive object regions by exploring the global information from the attention of the transformer network. Most of these works follow the multi-stage framework, where a post-processing step is necessary for refining and improving the initial pseudo labels generated from CAM. ### Post-Processing in WSSS Although there are end-to-end WSSS solutions [30; 4; 42] available, most of the recent works still rely on some post-processing techniques to enhance the initial pseudo labels to achieve superior performance. Among these techniques, two widely utilized methods for refining pseudo labels are AffinityNet [3] and IRNet [1]. AffinityNet trains a network from CAM to predict the semantic affinities and uses it for propagating local activations, whereas IRNet [1] learns and predicts semantic affinities more effectively by leveraging class boundary maps. Despite the substantial improvement in the pseudo labels, these methods require training a separate network. The computational cost involved can be a significant barrier, especially when working with large-scale datasets. Additionally, the careful tuning of hyperparameters to obtain accurate foreground and background pixels for training these networks can slow down the entire training pipeline. This requirement for meticulous parameter tuning not only adds complexity to the process but also limits the applicability and scalability of these post-processing techniques. 
## 3 SAM Enhanced Pseudo Labels In this section, we will describe how SAM can be seamlessly applied to the typical three-stage WSSS learning pipeline. As depicted in Figure 2, the pipeline involves training a classification network on image-level labeled data to generate CAM that provides a rough estimate of object locations. These CAM are then refined by post-processing methods, such as AffinityNet, to produce pseudo labels, which are subsequently used to train a fully supervised segmentation network. While CAMs, pseudo labels, and final predictions can localize objects for a specific class label, they lack accurate boundaries. In contrast, SAM can provide highly precise segments but without the corresponding class labels. Therefore, we propose SAM as a strong enhancement tool to improve the quality of the initial seed area from CAM, the post-processed pseudo labels, or final predictions. The general idea is that these intermediate results are used as the seed signals for selecting the most relevant segments from SAM, which are then labeled to generate new pseudo-labels for the class. Specifically, we use pseudo labels for a single image as an example to illustrate how SAM can enhance them. We first identify the pseudo labels for each class and then calculate the overlap area between the pseudo labels and each SAM-generated segment. Two ratios are then computed, namely, the overlap area over the area of the segment and the overlap area over the area of the pseudo labels for that class. If either ratio exceeds a predetermined threshold, the corresponding segment is included as the new pseudo labels for that class. If no overlap is found between the pseudo labels and any SAM-generated segment, we retain the original pseudo labels for that class. We refer to this method as SAM Enhanced Pseudo Labels (SEPL), which is summarized in Algorithm 1. Since the purpose of pseudo labels is to provide an estimation of object localization for a specific class, both the seed mask and the final predictions can also be enhanced by SAM in the same way as described in Algorithm 1. As mentioned in Section 2.2, post-processing methods like AffinityNet are computationally heavy and require a significant amount of training time. Since SAM can provide high-quality segmentation without any additional training, it presents an attractive alternative for improving the quality of intermediate results in the WSSS pipeline. By using SAM directly on the seed mask from the raw CAM, we can eliminate the need for post-processing methods and significantly reduce the training time and computational resources required. By incorporating SAM into the WSSS pipeline, we not only enhance the quality of intermediate results such as the seed mask and pseudo labels but also significantly streamline the process. The elimination of computationally expensive post-processing techniques enables a more efficient and effective WSSS pipeline, while still leveraging the power of SAM for high-quality segmentation. This approach opens up new possibilities for integrating SAM and other segmentation foundation models into a wide range of computer vision applications. ## 4 Experiment ### Experimental setup Datasets and Evaluation MetricThe PASCAL VOC 2012 segmentation dataset[13], the standard benchmark for WSSS, is used to evaluate our proposed method. The dataset contains 21 classes, including a background, and 1,464, 1,449, and 1,456 images, respectively, for the train, validation, Figure 2: The typical three-stage WSSS learning pipeline. 
SAM can be applied to the original CAM, post-processed pseudo labels, and the final predictions to enhance their quality. and test sets. In accordance with the standard practice in semantic segmentation, we employ the augmented train set of 10,582 images for training. We report the mean Intersection-over-Union (mIoU), precision, and recall for evaluation. To demonstrate the quality of the pseudo masks we generate for training the segmentation model, we evaluate the pseudo masks generated by our framework which leveraged the ground truth labels within the PASCAL segmentation training set. Framework Design and ImplementationOur framework aims to address the problem of weakly supervised semantic segmentation by leveraging the Segment Anything Model (SAM)[20] to improve mask quality. In order to obtain the pseudo masks, we utilized a range of state-of-the-art (SOTA) methods on the PASCAL VOC2012 dataset, including: * **CLIMS:[39]** A Cross-Language Image Matching framework leveraging natural language supervision to activate complete object regions and suppress related open background regions for improved CAM quality in WSSS. * **SIPE:[10]** Self-supervised Image-specific Prototype Exploration, which tailors prototypes for each image to capture complete regions, optimizing feature representation and enabling self-correction for improved WSSS performance. * **Wseg:[12]** A weakly-supervised pixel-to-prototype contrast method providing pixel-level supervisory signals, executed across and within different views of an image to enhance the quality of pseudo masks for WSSS. * **TransCAM:[26]** A Conformer-based solution that refines CAM by leveraging attention weights from the transformer branch of the Conformer, capturing both local features and global representations for WSSS. * **RecurSeed:[17]** An approach that alternately reduces non-detections and false-detections through recursive iterations, implicitly finding an optimal junction and leveraging a novel data augmentation method, EdgePredictMix, for improved WSSS performance. The default setting of these methods above were adhered to in the generation of pseudo-labeled masks. Furthermore, we acquired the pseudo labels from Wseg (with EPS) and RecurSeed via their respective repositories. For SAM, we began by adopting the hyperparameters used in another repository[7], which is specifically designed to provide semantic contexts for SAM's segments, and we also consulted the SAM's repository for additional reference. Then, we fine-tuned several parameters visually, gauging the output quality. Specifically, two thresholds were introduced in the Algorithm 1: threshold1, which indicates the intersection over SAM's mask, and threshold2, indicating the intersection over the pseudo label mask. We set threshold1 at 0.5 and threshold2 at 0.85 after an iterative process of visual evaluation and adjustment, aiming for a situation where the majority of the pseudo-label mask is covered by the SAM masks. We then employ the merging algorithm described in the previous section to combine the masks generated by SAM with pseudo masks generated by the SOTA CAM-based methods to produce improved pseudo masks. Note that while our experimental results focus on applying our method to pseudo labels, the same approach can also be applied to the seed mask and the final predictions, emphasizing the versatility of our framework. This combination of SAM-generated masks and CAM-based masks yields more robust pseudo-mask quality. 
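To make the mask selection-and-merge step of Algorithm 1 (Section 3) concrete, the following is a minimal sketch for a single class in one image; the array-based encoding, variable names, and helper structure are assumptions made only for illustration, while the two thresholds follow the values reported above (0.5 for intersection over the SAM mask, 0.85 for intersection over the pseudo-label mask).

```python
import numpy as np

def enhance_pseudo_label(pseudo_mask, sam_masks, thr_sam=0.5, thr_pseudo=0.85):
    """Sketch of SEPL for one class: pseudo_mask is an (H, W) boolean CAM-derived
    label, sam_masks is a list of (H, W) boolean SAM segments."""
    enhanced = np.zeros_like(pseudo_mask, dtype=bool)
    pseudo_area = max(int(pseudo_mask.sum()), 1)
    found_overlap = False
    for seg in sam_masks:
        overlap = int(np.logical_and(pseudo_mask, seg).sum())
        if overlap == 0:
            continue
        found_overlap = True
        ratio_over_sam = overlap / max(int(seg.sum()), 1)  # overlap / segment area
        ratio_over_pseudo = overlap / pseudo_area          # overlap / pseudo-label area
        # A segment is adopted if either ratio exceeds its threshold.
        if ratio_over_sam >= thr_sam or ratio_over_pseudo >= thr_pseudo:
            enhanced |= seg
    # If no SAM segment overlaps the pseudo label, keep the original label.
    return enhanced if found_overlap else pseudo_mask
```

The same routine can be applied unchanged to the initial CAM seed mask or to the final predictions, as discussed above.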
### Quantitative Evaluation and Comparison In Tables 1 and 2, we present a quantitative evaluation and comparison of the pseudo labels enhanced through our proposed framework with various SOTA CAM-based methods. By measuring the mean Intersection over Union (mIoU) scores of 20 object classes for each method, we assess the improvements achieved by our framework in generating superior pseudo masks. Our observations reveal a consistent improvement in mask quality across all cases, further establishing the effectiveness of our framework. The analysis also reveals enhancements in both precision and recall. Notably, the improvement in recall is particularly notable as the CAM methods predominantly face challenges in capturing only the most discriminative regions of the object classes. This outcome emphasizes the capacity of our approach to mitigate such limitations and further bolster the performance of the CAM-based methods. By showcasing these comprehensive results, we aim to provide a more robust and holistic understanding of the efficacy of our proposed framework in refining and enhancing the pseudo labels generated from various CAM-based methods. Additionally, we explore the performance of our method when applied to the seed mask from raw CAM without postprocessing. By merging the SAM's masks with these unprocessed pseudo labels, we find a consistent improvement in mask quality as Figure 3. Interestingly, the resulting masks can reach and even surpass the quality of pseudo masks obtained after conventional postprocessing. This finding suggests that SAM has the potential to replace time-consuming postprocessing steps, offering a more efficient solution to weakly supervised semantic segmentation tasks. Quality ResultsBuilding upon the previous analysis, we look into the quality improvements in mIoU, precision, and recall cases for both unprocessed and post-processed pseudo labels. For the pseudo labels with post-processing, Figures 4 display the improvements in recall, precision, and mIoU cases, respectively. In recall cases, the increase occurs when CAM exhibits partial activation due to focusing on the most discriminative parts, which results in incomplete object coverage. SAM \begin{table} \begin{tabular}{l c c} \hline \hline Methods & Pseudo Mask & Enhanced \\ \hline CLIMS [39] & 70.8 & 77.4-6.6 \\ SIPE [10] & 69.1 & 74.7-5.6 \\ Wseg w/ SEAM [12] & 68.4 & 76.5-8.1 \\ TransCAM. [26] & 71.1 & 77.0-5.9 \\ RecurSeed. [17] & 75.7 & 81.1-5.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Mask quality (mIoU (%)) of various segmentation methods, pseudo masks performance, and enhanced masks performance. The improvement in performance after enhancement is indicated in red. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline Methods & Pseudo Precision & Pseudo Recall & Enhanced Precision & Enhanced Recall \\ \hline CLIMS [39] & 80.9 & 83.2 & 83.1 & 89.1 \\ SIPE [10] & 76.2 & 87.5 & 79.5 & 90.8 \\ Wseg w/ SEAM [12] & 78.9 & 83.5 & 81.5 & 90.3 \\ TransCAM. [26] & 80.5 & 84.7 & 81.6 & 90.1 \\ RecurSeed. [17] & 85.1 & 85.9 & 85.5 & 91.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Precision and recall of various segmentation methods, pseudo masks performance, and enhanced masks performance. The improvement in performance after enhancement is indicated in red. enhances CAM's capability to identify the entire object by extending the pseudo labels. For high-precision cases, the pseudo labels suffer from false activation and exceed the object boundary. 
SAM helps the pseudo labels by refining the object boundary, resulting in improved precision. For the seed mask without post-processing from raw CAM, Figures 5 illustrate the improvements in the recall, precision, and mIoU cases, respectively. In most high mIoU improvement cases, the improvement comes from enhanced recall, as unprocessed CAM is even more focused on the most discriminative areas. Nonetheless, SAM can also improve precision in some cases, particularly when the object is small or non-convex, and the object's pixel count is limited. In these cases, CAM may falsely activate pixels near but outside the object, and SAM is able to improve mIoU by refining precision. Figure 4: Improvements for pseudo labels with post-processing: (a) Recall, (b) Precision, (c) mIoU. Figure 3: comparison of the final pseudo labels generated w/wo SAM by different methods - This plot presents a comparative analysis of the performance of various CAM-based methods: TransCAM, wseg (seam), and CLIMS, in terms of mIoU. The results are showcased for four distinct scenarios: 1) CAM with post-processing, 2) CAM with post-processing and SAM enhancement, 3) CAM without any post-processing, and 4) CAM integrated with SAM alone. ## 5 Future Directions Upon analyzing cases where our framework did not yield optimal results, we have identified several findings that highlight areas for improvement. In this section, we discuss these findings, which have allowed us to pinpoint the limitations of our current approach and outline possible directions for future work to address these challenges and enhance the performance of our system. We observe that the SAM masks are not exclusive, which means an object can be included in multiple SAM segments. Consequently, there could be several very large SAM masks that include more than one object. Our current algorithm does not have a design to filter out these kinds of masks that include multiple objects as in Fig 6. Moreover, we find that overcoming this problem by simply revising the merging algorithm may be challenging. In many cases, the CAM focuses only on the most discriminative parts and is partially activated. We hope to use these SAM segments that are larger than the object (which includes the object) to make the enhanced mask fit the boundary in the merging algorithm. However, we have no information about whether the masks contain multiple objects or just one. It might require more consideration and future experiments if we hope to filter out these kinds of SAM masks using a better merging algorithm. Another potential idea is to avoid generating such masks during SAM inference by modifying some hyperparameters used for inferencing. We noticed that adjusting the argument 'box-nms-thresh' might be useful for controlling the number of overlapping masks. However, we have not conducted further experiments on this aspect. Figure 5: Improvements for CAM without post-processing: (a) Recall, (b) Precision, (c) mIoU. Figure 6: An illustration of the issue with non-exclusive SAM masks. The three SAM segments demonstrate that a mask can include other masks and multiple objects. (a) shows the pseudo masks, (b) displays the SAM masks, (c) presents the enhanced masks, and (d) provides the ground truth. (e), (f), and (g) represent the three different SAM segments, indicating that SAM masks can be non-exclusive, and a single SAM mask can contain multiple objects. 
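For reference, the overlap behaviour discussed above is set at SAM inference time; a minimal sketch of producing the candidate masks with SAM's automatic mask generator, exposing the 'box-nms-thresh' knob mentioned above, might look as follows (the checkpoint path, input image, and threshold value are placeholders, and, as noted, we have not conducted further experiments with this setting).

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Placeholders for illustration: model checkpoint and input image.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB image

# Lowering box_nms_thresh (default 0.7) suppresses more overlapping boxes,
# which may reduce the number of large masks that cover several objects.
mask_generator = SamAutomaticMaskGenerator(sam, box_nms_thresh=0.5)
masks = mask_generator.generate(image)
segments = [m["segmentation"] for m in masks]  # boolean (H, W) arrays
```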
Additionally, we find that the CAM still exhibits miss-detection problems, where some objects are either wrongly detected or not detected at all. In such cases, we can hardly improve the quality using SAM, as the reference we follow from the CAM is incorrect as in Fig 7. Our proposed framework demonstrates promising results, but there remains room for further investigation and refinement. We will continue our research efforts to address the limitations identified in this work, such as the non-exclusive nature of SAM masks and the miss-detection problems in CAM. By exploring more sophisticated merging algorithms, adjusting hyperparameters, and incorporating the DeepLab training for final segmentation results, we aim to push the boundaries of our current framework and further improve its performance. ## 6 Conclusion In this study, we presented a simple yet efficient approach for improving Weakly Supervised Semantic Segmentation (WSSS) by combining the recently released Segment Anything Model (SAM) with seed mask, post-processed pseudo labels, and predictions. The primary goal of WSSS is to train a segmentation model with cheaper yet weaker annotations. However, CAM-derived pseudo labels often suffer from partial activation and false activation issues. Our proposed method, SAM Enhanced Pseudo Labels (SEPL), addresses these problems by using the initial seed mask from CAM or post-processed pseudo labels as an initial signal to select and label the most relevant segments from SAM, resulting in significant improvements in both partial and false activation. Our method is highly adaptable and can be easily integrated into existing WSSS methods without requiring any modifications to the underlying networks or pipelines. Moreover, our investigation revealed that by using the initial pseudo label without post-processing with SAM, we achieved comparable performance to that of post-processed pseudo labels enhanced by SAM, suggesting the potential to replace computationally expensive post-processing modules and accelerate the WSSS training pipeline. SEPL achieves a significant improvement in the mean Intersection over Union (mIoU) of pseudo labels when incorporating five cutting-edge WSSS methods, with an average increase of 6.2% on the PASCAL VOC 2012 train set. As the first study to investigate the potential of SAM in the context of WSSS, we hope that this work will pave the way for the implementation of segmentation foundation models in a variety of computer vision applications. Figure 7: An example of CAM miss-detection issues, where some objects are wrongly detected. In these cases, it is difficult to improve the quality using SAM, as the reference we follow from the CAM is incorrect.
2310.10706
Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation
To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation. While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs. Of the interaction methods, guiding and selecting model output added the most benefit with the lowest cost (in time and effort). Further, AI assistance did not harm participants' perception of control compared to freeform editing.
Zijian Ding, Alison Smith-Renner, Wenjuan Zhang, Joel R. Tetreault, Alejandro Jaimes
2023-10-16T15:11:01Z
http://arxiv.org/abs/2310.10706v2
Harnessing the Power of LLMs: Evaluating Human-AI text Co-Creation through the Lens of News Headline Generation ###### Abstract To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation. While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs. Of the interaction methods, guiding and selecting model output added the most benefit with the lowest cost (in time and effort). Further, AI assistance did not harm participants' perception of control compared to freeform editing. ## 1 Introduction Recent advancements in Large Language Models (LLMs), including the Generative Pre-trained Transformer (GPT) series Brown et al. (2020), have shattered the previous ceiling of human-like text generation Bommasani et al. (2021). This has led to a paradigm shift in NLP tasks, with task-agnostic LLMs surpassing the performance of other state-of-the-art task-specific models Lee et al. (2022). However, LLM-backed systems are not without their flaws, often suffering from hallucinations, bias, or occasional generation of inappropriate content, such as toxic, discriminatory, or misleading information Wang et al. (2022). Human-AI text co-creation--or writing with AI assistance--allows some control over the generation process and the opportunity to overcome some LLM deficiencies. Co-creation approaches have shown tremendous promise in areas such as summarization Goyal et al. (2022); Bhaskar et al. (2022) and creative writing Moore et al. (2023); Cao (2023); Yuan et al. (2022); Ding et al. (2023). Yet, the lack of _accountability_ of AI Shneiderman (2022) shifts the burden of responsibility to humans when mistakes arise in the text co-creation process. There are several methods of human-AI interaction for writing, which vary in terms of effort and control afforded by each Cheng et al. (2022). For example, humans might simply select from alternative model outputs or have the flexibility to revise them (e.g., post-editing). Recent research by Dang et al. (2023) explored the dynamics of interaction between crowdworkers and Large Language Models (LLMs) within the scenario of composing short texts. Extending from their work, we evaluate human-AI interaction in the context of news headline creation, focusing on three of the interaction types identified by Cheng et al. (2022)--_guidance_, _selection_, and _post-editing_. Our study advances Cheng et al.'s Wizard-of-Oz framework by using an implemented AI system, providing more realistic insights into human-AI co-creation for news headline generation. We explore three main aspects of co-creation with LLMs, through the lens of headline generation: which type of assistance is most effective for helping humans write high-quality headlines (RQ1)? Which type of assistance can reduce perceived effort for such task (RQ2)? And, how does interacting with these models affect trust and feelings of ownership of the produced headlines (RQ3)? In a controlled experiment, 40 participants wrote news headlines-either manually or assisted by one of three variants of an LLM-powered headline writing system: (1) _selection_, (2) _guidance + selection_, and (3) _guidance + selection + post-editing_. 
To explore the balance between control and effort in tasks involving human-AI interaction (Figure 1), we examined the different interaction types in combination rather than separately. Next, 20 evaluators rated the headlines generated by five methods--manual, AI-only, and the three assisted conditions. While LLMs alone generated the highest quality headlines on average, they were not perfect, and human control is needed to overcome errors. The simpler interactive condition _guidance + selec tion_ resulted in the rapid creation of high-quality headlines compared to conditions involving further human intervention (i.e., post-editing or manual writing). All conditions yielded similar perceived trust and control in our task. This work contributes: (1) a systematic evaluation of three variants of human-AI interaction for headline generation, (2) design guidelines for builders of these systems, and (3) a dataset1 of \(840\) human-rated news headlines-written by humans, AI, or the combination, which could be used as an evaluation set for further leveraged with reinforcement learning, e.g., RLHF (Stiennon et al., [n. d.]), to adapt LLMs for news headline generation tasks. Footnote 1: [https://github.com/JsnDg/EMNLP23-LLM-headline](https://github.com/JsnDg/EMNLP23-LLM-headline). ## 2 Related Work Our study draws upon a convergence of prior research on news headline generation and evaluation, and human-AI interactions for text summarization. ### News Headline Generation and Evaluation News headline generation, conventionally considered as a paradigm of text summarization tasks, has been extensively researched Tan et al. (2017); Goyal et al. (2022). Advances in automation range from heuristic approaches like parse-and-trim Dorr et al. (2003) to sophisticated machine learning algorithms like recurrent neural networks Lopyrev (2015), Universal Transformer Gavrilov et al. (2019), reinforcement learning Song et al. (2020); Xu et al. (2019), large-scale generation models trained with a distant supervision approach Gu et al. (2020), and large language models Zhang et al. (2023). Zhang et al. (2023) demonstrated that news summaries generated by freelance writers or Instruct GPT-3 Davinci received an equivalent level of preference from human annotators. In these studies, human involvement is for evaluation, not creation. For human-AI co-creation in the context of journalism, a recent study explored the potential of harnessing the common sense reasoning capabilities inherent in LLMs to provide journalists with a diverse range of perspectives when reporting on press releases Petridis et al. (2023). ### Human-AI Interactions for Text Summarization The taxonomy from Cheng et al. (2022) of human-AI interactions for text generation serves as the framework for our investigation. We focus on the first three types - _guiding model output_, _selecting or rating model output_, and _post-editing_, which offer varying degrees of control over the output and require different levels of human effort. Other identified types, such as interactive writing, are outside the scope of this study. _Guiding model output_ entails human intervention in the automatic text generation process, such as specifying crucial parts of input text to be included in the output Gehrmann et al. (2019), or providing semantic prompts or style parameters Osone et al. (2021); Chang et al. (2021); Strobelt et al. (2022). Our study involves human guidance in headline generation by specifying keyword prompts. 
_Selecting or rating model output_ involves humans choosing from multiple model outputs Kreutzer et al. (2018); Rosa et al. (2020) or rating them Lam et al. (2018); Bohn and Ling (2021). In our experiment, participants chose from three generated headlines. _Post-editing_ refers to humans editing generated text Chu and Komlodi (2017); Huang et al. (2020), which could be accompanied by additional supports, such as suggestions for edits Liu et al. (2011); Weisz et al. (2021). Our study investigates inline post-editing of selected headlines without any additional support. Figure 1: Human-AI interactions for text generation can fall within a range of no human control effort (AI-only) to full human control effort (manual methods) Ding and Chan (2023). Selecting from model outputs (_Selection_) alone provides less control (but also is easier) than when adding additional interactions in the form of guiding the model (_Guidance_) and _post-editing_ the outputs. ## 3 Assessing Human-AI Co-creation of News Headlines We compared three variants of human-AI co-creation for news headlines against fully manual and fully automatic approaches. Specifically, we explored three research questions (RQs): **RQ1**: Which type of assistance is most effective for helping humans write high-quality headlines? **RQ2**: Which type of assistance can reduce perceived effort for such task? **RQ3**: How does interacting with these models affect trust and feelings of ownership of the produced headlines? We chose news headline creation as it is a familiar task--many participants would already have some intuition about what constitutes a compelling (or not) news headline--of manageable complexity--both in task and evaluation, compared to free-form creative writing or long-form summarization (e.g., authoring paper abstracts). ### Co-creation Methods, Implementation, and Prompts Participants wrote headlines in one of four ways: manually or aided by AI using one of three common human-AI interaction types identified by previous research (Cheng et al., 2022): _guidance_ of the LLM's headline generation using perspectives (keywords or facets), _selection_ from three LLM-generated headlines, and _post-editing_ the LLM-generated headlines. Specifically, we developed a system for human-AI co-creation of news headlines with four variants of human-AI interactions: * _Selection_: The LLM generates three headlines for each news article (_generate headlines_), and the user selects the most appropriate one; * _Guidance + Selection_: The LLM extracts several potential perspectives (keywords) from each news article (_extract perspectives_), the user chooses one or more perspectives to emphasize in the headline, the LLM then generates three headlines for each news article based on the selected perspectives (_generate headlines w/ perspectives_), and finally, the user selects the best one; * _Guidance + Selection + Post-editing_: This is similar to _Guidance + Selection_, but the user can further edit the selected headline (post-editing); * _Manual only_: The participant writes the news headline without any AI assistance. Our goal was to create a continuum of human-AI interaction methods ranging from less to more control. The layering approach of selection, guidance, and post-editing was informed by prior literature (Cheng et al., 2022) and allowed us to assess the impact of each new layer by comparing adjacent conditions. 
Specifically, for these variants, we add additional functionality as opposed to comparing each interaction individually to allow for a more explicit comparison of conditions that afford less control (and require less effort)--e.g., _selection_ only--versus more control (more effort)--e.g., _guidance + selection + post-editing_ (Figure 1). We used OpenAI's GPT-3.5 _text-davinci-002_ model, the state-of-the-art LLM as of the study period (July 2022). Our study system directly interfaced with OpenAI's GPT-3.5 API, employing prompt programming to _extract perspectives_ (keywords) and _generate headlines_--with and without supplied perspectives--and presented the results to the participants (see Figure 2). To determine the optimal prompts for _extract perspectives_, _generate headlines_, and _generate headlines w/ perspectives_ with the GPT-3.5 API, we conducted multiple configuration tests, ultimately selecting the zero-shot learning paradigm (with no examples of news headlines included) for extensive coverage of news topics. The GPT-3.5 API takes prompts (provided in Appendix A) and generates the requested outputs. To rigorously examine the LLM's reliability, we used a temperature setting of 1, ensuring maximum output variety. Additionally, we set a token count of 400, providing sufficient space for both input data--which included prompts and news articles--and output data, encompassing several keywords or news headlines. Figure 2: Interface for human-AI news headline co-creation for _guidance + selection + post-editing_ condition: (A) news reading panel, (B) perspectives (keywords) selection panel (multiple keywords can be selected), (C) headline selection panel with post-editing capability, and (D) difficulty rating slider. Note: (B), (C) and (D) are hidden from the user until the requisite step is finished (e.g., the user does not see the difficulty rating slider until after finalizing the headline). ### Study Design The study was structured into two phases: (1) human-AI co-creation of news headlines and (2) human evaluation of the generated headlines. The first phase utilized a between-subjects experimental design with four conditions, ranging from minimal to maximal human control: (1) _Selection_, (2) _Guidance + Selection_, (3) _Guidance + Selection + Post-editing_, and (4) _Manual only_. The second phase involved human evaluators who ranked the headlines generated in the first phase alongside the _original_ article headlines and headlines generated by _GPT-only_ without any human input. #### 3.2.1 Phase I: Human-AI Co-creation of Headlines _Participants_ We enlisted 40 participants from the freelancing platform, Upwork,2 ensuring diversity in gender (28 females, 11 males, 1 non-binary). All the participants, who read news articles at least a few times a week if not daily, had experience in journalism, editing, or writing. Their educational backgrounds were diverse: 18 with Bachelor's Degrees, 11 with Master's Degrees, 10 with High School education, and 1 chose not to disclose. Footnote 2: [https://www.upwork.com/](https://www.upwork.com/) _Procedure_ Each participant was asked to create headlines for 20 articles,3 published in June 2022 on Yahoo! News,4 the most popular news website in the US during Q1 2022.5 The articles were carefully chosen across four distinct themes: World Politics, COVID-19, Climate Change, and Science, with five articles each. The overall study procedure included an instruction round, a practice round, main study, and a post-task survey.
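As a rough illustration of the prompt-programming setup described above, a single call to the completions API with the settings we used (temperature 1, 400 tokens, _text-davinci-002_) might look like the sketch below; the prompt wording and function name are stand-ins, as the exact prompts are provided in Appendix A.

```python
import openai

def generate_headlines(article_text, perspectives=None):
    """Illustrative sketch; the actual prompt templates are in Appendix A."""
    prompt = "Write three headlines for the following news article"
    if perspectives:  # guidance condition: emphasize the selected keywords
        prompt += ", emphasizing the keywords: " + ", ".join(perspectives)
    prompt += ".\n\nArticle:\n" + article_text + "\n\nHeadlines:"
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=1,    # maximum output variety
        max_tokens=400,   # token budget used in the study
    )
    return response["choices"][0]["text"]
```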
Participants were randomly assigned to one of the four study conditions and compensated $20 for their 1-hour participation. Footnote 3: The articles varied in length from 300 to 1000 words. Footnote 4: [https://news.yahoo.com/](https://news.yahoo.com/) Footnote 5: [https://today.yougov.com/ratings/entertainment/popularity/news-websites/all](https://today.yougov.com/ratings/entertainment/popularity/news-websites/all) Participants first learned how to utilize the system with various types of AI assistance (_instruction round_) and wrote headlines for two practice articles (_practice round_). During the _main study_, participants wrote headlines for the designated 20 task articles with assistance based on their assigned condition. The order of news articles was randomized for each participant. For each article, participants first read the article and clicked "Continue to Write Headline". Participants then either wrote the headline manually (_manual_) or with some assistance as described in Section 3.1. After completing each headline, participants rated the difficulty. Figure 2 demonstrates the system for the _Guidance + Selection + Post-editing_ condition. Finally, after writing headlines for all 20 articles, participants completed a _post-task survey_. See the Appendix for all study figures, including instructions, all conditions, and post-task survey. #### 3.2.2 Phase II: Human Evaluation of Headline Quality _Participants_ Another 20 evaluators were recruited through Upwork. We required that they were experienced in reviewing or editing news headlines to ensure high-quality evaluations. _Procedure_ In total there were 840 headlines requiring evaluation, including 800 headlines from Phase I (20 articles x 4 conditions x 10 participants per condition), 20 _Original_ headlines, and 20 headlines generated by _GPT only_. To ensure every headline was reviewed by at least two evaluators, we asked each evaluator to review 120 headlines-the six headlines for 20 articles--including 80 from Phase I, 20 _Original_, and 20 _GPT only_. We provided the evaluators with Excel forms containing instructions, news articles, and news headline ranking tables. For each article, the evaluators ranked the quality of six different headlines (the _Original_, _GPT Only_, and a headline generated by each of the four study conditions) using the Taste-Attractiveness-Clarity-Truth (TACT) test 6 from 1 (best) to 6 (worst) and shared their reasons: Footnote 6: [http://www.columbia.edu/itc/journalism/isaacs/client_edit/Headlines.html](http://www.columbia.edu/itc/journalism/isaacs/client_edit/Headlines.html) * Taste: Is it in good taste? Is there anything possibly offensive in the headline? Can anything in the headline be taken the wrong way? * Attractiveness: Is it attractive to the reader? Can it be improved so it is more engaging and interesting, without sacrificing accuracy? * Clarity: Does it communicate the key points of the article? Is it clear and simple? Does it use the active voice and active verbs? Are there any odd words or double meanings that could confuse the reader? * Truth: Is it accurate? Are the proper words or terms from the article used in the headline? Is the headline factually correct? 
When two or more headlines for the same article were of similar quality, evaluators were able to assign the same ranking to them.7 Each evaluator was compensated $20 each for this task, estimated to take around an hour.8 Footnote 7: A small amount of headlines (1.3%, or 11/840) were found to be identical to another headline. These were due to participants selecting an identical GPT-generated headline without making further edits. Footnote 8: Based on our pilot, evaluating one news article with six headlines took approximately three minutes. ### Measures We measured headline quality (Section 3.2.2), perceived task difficult, headline creation time, and perceived control and trust of the system. For comparing efficiency between conditions, we care most about the headline creation time (from clicking "Continue to Write Headline" to "Submit Headline"). We further compute the article reading time (from starting to read the news article to clicking "Continue to Write Headline") and overall time (article reading time + headline creation time). For user experience, we measured the task difficulty--after creating each headline (instance-level) and upon completing all headlines (overall), as well as perceived control and trust of the system. For instance-level task difficulty, participants answered "How difficult was it to create this headline for the news article with the system?" on a slider from 0 (very easy) to 100 (very difficult) after each headline writing task. The remaining subjective measures were collected in the post-task survey: For overall task difficulty, participants agreed or disagreed with "It was difficult to create headlines for the news articles with the system." on a Likert scale from 1 (strongly disagree) to 5 (strongly agree) and answered a follow-up question, "why do you feel this way?". We operationalized perceived _control_ and _trust_ of the system based on prior research: the question to gauge participants' perceived _control_ was "I could influence the final headlines for the news articles" (Rader et al., 2018). The question to gauge perceived _trust_ was "I trusted that the system would help me create high-quality headlines" (Cramer et al., 2008), via 5-point Likert rating scales (from strongly disagree to strongly agree). ### Data Analysis We employed the Kruskal-Wallis H test (Kruskal and Wallis, [n. d.]), the non-parametric version of a one-way ANOVA test for data with three or more conditions, to assess the quantitative results of headline quality rankings, task efficiency, and average and overall task difficulties, and perceived control and trust ratings. We then conducted a pairwise comparison of headline quality using the Mann-Whitney U test (Mann and Whitney, 1947), the non-parametric version of the Student t-test for two conditions. For headline quality, we compared the four conditions (_Manual_, _Selection_, _Guidance + Selection_, and _Guidance + Selection + Post-editing_) against the original headlines (_Original_) and the automatic method (_GPT only_). For task efficiency and user experience, we compared only within the four human study conditions. Finally, we analyzed participants' general comments and feedback on task difficulty. ## 4 Results We compared headline quality (RQ1), task effort (RQ2) and perceived control and trust (RQ3) between the different headline writing conditions. ### RQ1: Headline Quality Rankings from _Phase II: Human Evaluation of Headline Quality_ (Section 3.2.2) were used to assess headline quality across six conditions. 
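As a concrete illustration of the Data Analysis procedure above, the omnibus and pairwise tests can be run with standard SciPy routines; the sketch below assumes the quality rankings have already been grouped per condition (variable and function names are hypothetical).

```python
from itertools import combinations
from scipy import stats

def compare_conditions(rankings_by_condition):
    """rankings_by_condition: dict mapping condition name -> list of 1-6 rankings."""
    # Omnibus comparison across all conditions (non-parametric one-way ANOVA).
    h_stat, p_omnibus = stats.kruskal(*rankings_by_condition.values())
    # Pairwise follow-up comparisons between pairs of conditions.
    pairwise = {}
    for a, b in combinations(rankings_by_condition, 2):
        u_stat, p = stats.mannwhitneyu(rankings_by_condition[a],
                                       rankings_by_condition[b])
        pairwise[(a, b)] = (u_stat, p)
    return (h_stat, p_omnibus), pairwise
```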
Table 1 shows number of ranking data points and average rankings for each condition.9 Footnote 9: Rankings (from 1st to 6th place) for a total of 120 headlines (20 article x 6 conditions) were obtained (as shown in Table 1). However, ten were missed due to annotator error, resulting in 2390 total data points for analysis. As shown in Table 1 and 2, an approximate ranking of headline quality is as follows: _Original \(\sim\) GPT only_ (best) > _Guidance + Selection \(\sim\) Guidance + Selection + Post-editing > Selection \(\sim\) Manual only_. The variance in quality rankings across the six conditions was significant, as confirmed by Kruskal-Wallis H tests, with an \(H-value=50.8\) and a \(p-value<0.01\). Additional pairwise comparisons (detailed in Table 2) offered more granular insights into the discrepancies: headlines penned by participants in the _Manual_ condition were deemed the lowest quality, whereas the original headlines and headlines generated by _GPT only_ were deemed highest quality. Within the AI-assisted conditions, human guidance, in the form of selecting keyword perspectives (under _Guidance + Selection_ or _Guidance + Selection + Post-editing_), improved headline quality compared to direct _Selection_ of AI-generated options. Interestingly, post-editing had no significant effect on the quality of the headlines: no noticeable difference in ranking was identified between the headlines generated by _Guidance + Selection_ and _Guidance + Selection + Post-editing_ conditions. To delve deeper into group comparisons, we examined the count of "top" (ranked 1st) and "bottom" (ranked 6th) rankings (as shown in Table 1). As anticipated, the _Original_ headlines of the news articles received the most "top" rankings (87) and the fewest "bottom" rankings (38). The _GPT only_ condition had fewer "top" rankings than _Original_ (70), but a comparable number of "bottom" rankings (37). The _Manual_ and _Selection_ conditions had the fewest "top" rankings (56), while _Manual_ also had the most "bottom" rankings (98). Evaluators comments gave additional insight into the reasons behind their rankings. For example, one annotator favored the concise headline generated by _GPT only_ and critiqued the errors that undermined the _Manual_ headline. For more details, see Appendix B. ### RQ2: Task Effort #### 4.2.1 Headline Creation Time We used task time as an indicator of task effort because it often reflects the workload someone experiences. Specifically, we compare _headline creation time_--the time spent to generate the news headline, exclusive of the time it took to read the news article. As expected, the _Selection_ and _Guidance + Selection_ conditions were markedly faster (\(H-value=13.0,p-value=0.005\)) than conditions that involved manual editing, such as _Manual_ and _Guidance + Selection + Post-editing_ (Figure 3). A detailed comparison of creation time, reading time, and total time is provided in Appendix C. #### 4.2.2 Perceived Task Difficulty On the whole, participants felt the headline writing task was easy (e.g., average overall difficulty rating 1.9 out of 5). 
And, although _Guidance + Selection_ had the lowest perceived instance-level difficulty (M=19.7 out of 100, SD=24.3) and _Manual_ had the highest (M=30.9, SD=17.1), these differences were not statistically significant--in part due to high variance between participants and our small sample size. Additional results for instance-level and overall task difficulties are detailed in Appendix D. \begin{table} \begin{tabular}{l c c c c c c c c} \hline **Condition / Ranking** & **1** & **2** & **3** & **4** & **5** & **6** & **Mean** & **Std** \\ \hline _Manual only_ & **56** & 60 & 56 & 56 & 71 & **98** & 3.8 & 1.8 \\ _Selection_ & **56** & 53 & 65 & 67 & 83 & 75 & 3.7 & 1.7 \\ _Guidance + Selection_ & 71 & 69 & 64 & 75 & 63 & 58 & 3.4 & 1.7 \\ _Guidance + Selection + Post-editing_ & 70 & 71 & 74 & 60 & 65 & 58 & 3.4 & 1.7 \\ _Original_ & **87** & 75 & 68 & 65 & 64 & **38** & 3.2 & 1.7 \\ _GPT only_ & 70 & 73 & 89 & 79 & 51 & **37** & 3.2 & 1.6 \\ \hline \end{tabular} \end{table} Table 1: Counts of rankings from 1 (best) to 6 (worst) across six conditions, with mean rank and standard deviation. Lower values indicate superior rankings. Bold counts mark the most and fewest 1st place rankings and the most and fewest 6th place rankings. The original headlines (_Original_) received the most 1st place rankings and the fewest 6th place rankings, while the _Manual_ condition received the fewest 1st place and the most 6th place rankings. \begin{table} \begin{tabular}{l c c} \hline **Condition 1 \textgreater{} Condition 2** & **U-value** & **p-value** \\ \hline _Original \textgreater{} Manual only_ & 61766.0 & \textless{}0.01 \\ _Original \textgreater{} Selection_ & 63550.0 & \textless{}0.01 \\ _Original \textgreater{} Guidance + Selection_ & 72364.5 & \textless{}0.028 \\ _Original \textgreater{} Guidance + Selection + Post-editing_ & 72688.5 & \textless{}0.048 \\ _GPT only \textgreater{} Manual only_ & 63168.5 & \textless{}0.01 \\ _GPT only \textgreater{} Selection_ & 64751.5 & \textless{}0.01 \\ _Guidance + Selection \textgreater{} Manual only_ & 89877.5 & \textless{}0.01 \\ _Guidance + Selection \textgreater{} Selection_ & 88536.5 & \textless{}0.01 \\ _Guidance + Selection + Post-editing \textgreater{} Manual only_ & 89972.5 & \textless{}0.01 \\ _Guidance + Selection + Post-editing \textgreater{} Selection_ & 88734.0 & \textless{}0.01 \\ \hline \end{tabular} \end{table} Table 2: Pairwise comparison of headline quality rankings with significant differences (p\textless{}0.05, Mann-Whitney U). ### RQ3: Perceived Trust and Control We did not observe a significant difference in perceived trust or control across the conditions (see Appendix E for more details). We had expected participants with more "control" over the final output (i.e., those writing manually or with the option to _post-edit_ the final output) to have more perceived control than the other conditions. Yet, even while most participants (9/10) in the _Guidance + Selection + Post-editing_ condition did edit at least one headline during the task, with an average of 8.5 headlines edited (\(median=7.5,std=6.0\)), post-editing did not seem to enhance participants' sense of ownership over the generated headlines. We discuss possible reasons for this in Section 5.3. ## 5 Discussion Our study explored how diverse forms of AI assistance for news headline writing influence headline quality (RQ1), task effort (RQ2), and perceived trust and control (RQ3). We interpret our results for each of the RQs and discuss further implications in the following.
### RQ1: Which type of assistance helped humans write high-quality headlines? Headlines produced by the _GPT only_ condition were, on average, of the highest quality, whereas headlines written manually were of the lowest. And, all types of AI assistance explored in the study helped participants achieve higher quality as compared to when no assistance was available. Therefore, system designers should consider leveraging these types of assistance to better support humans performing similar writing tasks. While our findings could imply that completely automated methods are adequate for generating headlines, we urge system designers to critically evaluate when and where it is appropriate to fully rely on AI assistance. Headlines by _GPT only_ were not flawless, receiving the lowest ranking in some instances. Human intervention remains necessary for editing and final decisions, particularly in high-stakes tasks. Upon reflection, this study focused on overall headline quality in the evaluation task. Additional evaluation of the type, quantity, and severity of errors in headlines could reveal valuable insights. While multiple reasons could lead to a headline being ranked as low quality, problematic hallucinations that are commonly seen with large language models (Bender et al., 2021; Ji et al., 2023) could be more harmful than omission of details. Among the AI-assisted conditions, providing _Guidance_ (using keywords) resulted in higher headline quality compared to _Selection_ alone, yet did not outperform the _GPT only_ condition. Participants' comments revealed that the quality of keywords given in the _Guidance_ interaction played a crucial role in the final headline quality. When the keyword quality was poor, e.g., vague or not well aligned with key points of the article, the generated headlines were lower quality as well. The design of the _Guidance_ interaction may have limited its utility. One participant noted _"it would have been nice if I could go back and change the keywords and see how the headlines change"_, suggesting a more interactive experience could help them better understand how their guidance influences the model's output. Headline evaluation is a subjective task, which can influence these outcomes. To mitigate this, we used the TACT evaluation framework as a standardized rubric, which was shared with both headline generators and evaluators. Importantly, human control over the final headline is a promising direction given the subjective nature of the task (i.e., simply relying on the system to pick the _best_ headline is problematic if what is _best_ differs by the audience and their desired headline qualities). Figure 3: News headline creation time across four human conditions (standard deviation as error bars): _Selection_ and _Guidance + Selection_ are faster than the other two conditions which required more human editing (_Manual_ and _Guidance + Selection + Post-editing_). ### RQ2: Which type of assistance reduced perceived effort? In terms of headline creation time, interactions such as guiding the model output and selecting generated headlines were quicker than post-editing, as expected. This means that when efficiency is key to a system's performance, these types of interactions would be most effective, especially when the model outputs are generally satisfactory. Post-editing would likely be more critical if the quality of the model's output was suboptimal, or when humans need the ability to fully control the outcome. Interestingly, perceived task difficulty did not differ across conditions.
This is likely because most participants were experienced in headline writing. While their experience level varied greatly, from working for a school newspaper to being in journalism for more than 40 years, the hands-on experience made the task in our study easy for them. Future research might evaluate whether such AI assistance can lower the expertise requirements for the headline writing task, enabling people with less experience to write good-quality headlines. ### RQ3: How does interacting with these models affect feelings of trust and ownership of the produced headlines? We did not observe any significant difference in perceived trust or control. While surprising, we suspect this might stem from a flaw in our operationalization of perceived control; the statement _"I could influence the final headlines for the news articles"_ does not specify the level or type of control one had over the system, and the participants could "influence the final headlines" to some extent no matter which condition they experienced. We recommend future research investigate more robust methods for measuring perceived control or sense of ownership. Another reason for the lack of significant difference in this (and other subjective measures) is that each participant only experienced one condition and had no way to compare with other conditions. A within-subject study would be more effective for comparing perceived difficulty, control, and trust across multiple experiences. However, we chose a between-subject design to avoid the fatigue of having to complete similar tasks with multiple interfaces and the confusion from switching from one experience to another. Finally, participants may have felt they did not need to exert much control for the system to perform well and, therefore, did not think about the level of control critically. Counter to what the results showed, we had expected post-editing to improve the sense of control. Analysis of their post-editing process (Table 3) revealed that changes made by participants during post-editing were typically minor, falling into three categories: 1. "Hedging": Participants generally avoided sensationalism or "click-bait" language, favoring more measured phrasing. For instance, "ruining the environment" was altered to "contributing to climate change", and "expulsion" was revised to "penalties". 2. "Catering": Participants frequently tweaked the headlines to make them more relevant to the target audience, for instance, changing "completion" to "climate change". 3. "Clarifying": Some edits involved minor text adjustments, such as replacing "the state's" with "New Mexico". Participants may have associated a smaller degree of change with a lower level of control over the system output. However, we highlight that these revision types could be supported natively by future systems. ### Control, Transparency, and Guardrails in Co-Creation The interaction options we introduced extend beyond mere control, instead providing both guardrails and a way to expose what is possible with LLMs. \begin{table} \begin{tabular}{c l l l} \hline \hline & **Type** & **Before Revision** & **After Revision** \\ \hline 1) & Hedging & Is your lawn **ruining the environment?** & Is your lawn **contributing to climate change**?
\\ 2) & Hedging & Soldiers given second chance to get vaccinated and avoid **expulsion** & Soldiers given second chance to get vaccinated and avoid **penalties** \\ 3) & Catering & Mapping the Seafloor: New technologies crucial for **completion** & Mapping the Seafloor: New technologies crucial for **climate change** \\ 4) & Clarifying & Forest Service employees made several mistakes & Forest Service mistakes when planning a controlled burn in New Mexico, resulting in the largest fire in **the state’s** history \\ \hline \hline \end{tabular} \end{table} Table 3: Examples of humans’ post-editing on AI-generated headlines. First, both the _selection_ and _guidance_ interactions provide a set of guardrails toward better output, as opposed to freeform prompting of the LLM. Next, as participants can guide the model via theme or keyword inputs, they are also afforded a level of _transparency_ to comprehend the influence they may exert on LLMs to yield headlines with varying focuses. These approaches, which combine control and transparency, seem critical to successful co-creation settings (Liao and Vaughan, 2023). We did not evaluate a setting where users prompted the LLM themselves (this is a suggestion for future work); instead, our settings--which incorporated optimized prompts (Appendix A)--are meant to provide a more _directed_ scenario, without requiring the user to come up with possible keywords from scratch. This scenario progressively exposed users to how the model works and how they can control it. Future iterations of the _guidance_ interaction could benefit from additionally offering users a choice between system-suggested and user-defined keywords to flexibly steer the model. ## 6 Conclusion We explore human-AI text co-creation through the context of news headline generation: how can humans best harness the power of LLMs in writing tasks, in terms of both quality and efficiency? And, how does interacting with these models affect trust and feelings of ownership of the produced text? Forty participants with editing backgrounds wrote news headlines--either manually or assisted by a variant of our LLM-powered headline generation system--utilizing some combination of three common interaction types (guiding model outputs, selecting from model outputs, and post-editing). Following this, 20 expert evaluators rated the headlines generated by five methods (human-only, AI-only, and the three assisted conditions). While the LLM on its own can generate satisfactory news headlines on average, human intervention is necessary to correct undesirable model outputs. Among the interaction methods, guiding model output provided the most benefit with the least cost (in terms of time and effort). Furthermore, AI assistance did not diminish participants' perception of control compared to freeform editing. Finally, while we focus on a simpler, low-stakes text generation scenario, this work lays the foundation for future research in more complex areas of human-AI co-creation. ## Limitations This study, while comprehensive, acknowledges several limitations that outline areas for potential future research. One of the constraints pertains to the evaluation of different models. Specifically, the research primarily focused on types of human-AI interaction, utilizing GPT-3.5 due to its state-of-the-art performance during the research phase.
The choice was made to avoid complicating the study design by introducing performance as an additional variable, which could have overshadowed the main objective of analyzing three interaction variants. However, understanding the impact of varying models and their performances remains an essential prospect for subsequent studies and system design. Moreover, in terms of evaluation measures, the study employed the Taste-Attractiveness-Clarity-Truth (TACT) framework to assess headline quality. Despite its standardization, this framework may not fully encapsulate all the nuances of a model's performance, such as verbosity, indicating a need for future work to refine these evaluation standards. Additionally, the research's scope was limited in terms of the number and types of articles, as it prioritized the examination of interaction differences over article diversity. This limitation underscores the importance of future studies exploring a broader array of articles and involving more participants for an extended period. Similarly, the study did not evaluate the efficiency of the model in generating diverse perspectives, maintaining consistency in article selection based on length and domain instead. The assessment of content generation difficulty is identified as a crucial element for more in-depth research. Concerning participant expertise, the study conducted was a concise, one-hour session centered on human-AI collaboration in news headline creation, taking into account participant expertise without enforcing strict control. This approach points to the necessity for more extensive research, particularly in more complex and specialized domains. Finally, regarding study design and comparison, the research adopted a between-subjects approach, preventing participants from making direct comparisons of the different human-AI interaction methods. As a result, certain participants might not have fully grasped how AI could potentially enhance the task, as their experience was confined to their specific condition. This highlights an opportunity for future studies to possibly implement a within-subjects design for a more holistic participant experience. ## Ethics Statement Although the performance of headlines generated by _GPT only_ was comparable to the original headlines and those headlines did not demonstrate evident biases or ethical issues, there is an open question of to what extent AI should be relied upon. Shneiderman proposes that "we should reject the idea that autonomous machines can exceed or replace any meaningful notion of human intelligence, creativity, and responsibility" (Shneiderman, 2022). We echo that serious consideration is needed before AI is relied upon for creating text, even for low-stakes tasks. It is important for humans to use their expertise and judgment when working on content generation with AI and ensure that the content produced is fair, ethical, and aligned with societal values.
2307.13818
Gradient-Based Spectral Embeddings of Random Dot Product Graphs
The Random Dot Product Graph (RDPG) is a generative model for relational data, where nodes are represented via latent vectors in low-dimensional Euclidean space. RDPGs crucially postulate that edge formation probabilities are given by the dot product of the corresponding latent positions. Accordingly, the embedding task of estimating these vectors from an observed graph is typically posed as a low-rank matrix factorization problem. The workhorse Adjacency Spectral Embedding (ASE) enjoys solid statistical properties, but it is formally solving a surrogate problem and can be computationally intensive. In this paper, we bring to bear recent advances in non-convex optimization and demonstrate their impact on RDPG inference. We advocate first-order gradient descent methods to better solve the embedding problem, and to organically accommodate broader network embedding applications of practical relevance. Notably, we argue that RDPG embeddings of directed graphs lose interpretability unless the factor matrices are constrained to have orthogonal columns. We thus develop a novel feasible optimization method in the resulting manifold. The effectiveness of the graph representation learning framework is demonstrated on reproducible experiments with both synthetic and real network data. Our open-source algorithm implementations are scalable, and unlike the ASE they are robust to missing edge data and can track slowly-varying latent positions from streaming graphs.
Marcelo Fiori, Bernardo Marenco, Federico Larroca, Paola Bermolen, Gonzalo Mateos
2023-07-25T21:09:55Z
http://arxiv.org/abs/2307.13818v2
# Gradient-Based Spectral Embeddings of Random Dot Product Graphs ###### Abstract The Random Dot Product Graph (RDPG) is a generative model for relational data, where nodes are represented via latent vectors in low-dimensional Euclidean space. RDPGs crucially postulate that edge formation probabilities are given by the dot product of the corresponding latent positions. Accordingly, the _embedding_ task of estimating these vectors from an observed graph is typically posed as a low-rank matrix factorization problem. The workhorse Adjacency Spectral Embedding (ASE) enjoys solid statistical properties, but it is formally solving a surrogate problem and can be computationally intensive. In this paper, we bring to bear recent advances in non-convex optimization and demonstrate their impact on RDPG inference. We advocate first-order gradient descent methods to better solve the embedding problem, and to organically accommodate broader network embedding applications of practical relevance. Notably, we argue that RDPG embeddings of directed graphs lose interpretability unless the factor matrices are constrained to have orthogonal columns. We thus develop a novel feasible optimization method in the resulting manifold. The effectiveness of the graph representation learning framework is demonstrated on reproducible experiments with both synthetic and real network data. Our open-source algorithm implementations are scalable, and unlike the ASE they are robust to missing edge data and can track slowly-varying latent positions from streaming graphs. Graph representation learning, gradient descent, manifold optimization, random dot product graphs. ## I Introduction During the last few years the Random Dot Product Graph (RDPG) model has emerged as a popular generative model for random graphs [3]. This latent position model associates each node \(i\in\{1,\ldots,N\}\) with a vector \(\mathbf{x}_{i}\in\mathcal{X}\subset\mathbb{R}^{d}\); a so-termed node embedding where typically \(d\ll N\). In its simplest rendition for undirected and unweighted graphs without self loops, RDPGs specify an edge exists between nodes \(i\) and \(j\) with probability given by the inner-product of the corresponding embeddings, independently of all other edges. That is to say, entry \(A_{ij}\) of the random adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) has the Bernoulli\((\mathbf{x}_{i}^{\top}\mathbf{x}_{j})\) distribution. This apparent simplicity does not compromise expressive power. Indeed, one can verify that Erdos-Renyi (ER) graphs or Stochastic Block Models (SBMs) with a positive semi-definite (PSD) probability matrix are particular cases of an RDPG. Several other more sophisticated models may be included as particular cases of RDPG [3], this expressiveness being one of the main reasons behind its popularity. A second reason is the model's interpretability. Since the connection probability is given by the dot product of the embeddings, the affinity between the corresponding nodes is directly captured by their alignment. We may for instance rely on visual inspection of the nodes' vector representations (possibly after further dimensionality reduction if \(d>3\)) to screen for community structure, or carry out angle-based clustering of nodes [4, 5]. The restriction to undirected graphs is overcome by considering the directed version of RDPG, where each node is assigned _two_ vectors \(\mathbf{x}_{i}^{l},\mathbf{x}_{i}^{r}\in\mathcal{X}\subset\mathbb{R}^{d}\).
A directed edge from node \(i\) to \(j\) exists with probability \((\mathbf{x}_{i}^{l})^{\top}\mathbf{x}_{j}^{r}\)[3]. The interpretation is analogous to the undirected case, with \(\mathbf{x}_{i}^{l}\) and \(\mathbf{x}_{i}^{r}\) now representing the propensity of node \(i\) towards establishing outgoing and accepting incoming directed edges, respectively. For extensions to weighted graphs, see e.g., [6]. Rather than generating random graphs from vectors, our focus in this paper is on the inverse _embedding problem_ at the heart of graph representation learning (GRL) [7]: given the adjacency matrix \(\mathbf{A}\) of a graph adhering to an RDPG model, estimate the latent position vectors for each node. Because of the RDPG definition in terms of dot products, the latent position vectors are only identifiable up to a common rotation. For both undirected and directed graphs (digraphs), the workhorse approach is based on a spectral decomposition of \(\mathbf{A}\) - an estimator known as Adjacency Spectral Embedding (ASE) [3]. ### _Challenges facing the ASE_ Although the ASE is widely adopted and its statistical properties (such as consistency and its limiting distribution as \(N\rightarrow\infty\)) are well documented [3], it does present drawbacks which we seek to overcome. **Large data.** The first challenge pertains to scalability. Computing the spectrum of a large adjacency matrix \(\mathbf{A}\), even only the \(d\) dominant components, is computationally intensive and constitutes a bottleneck of state-of-the-art ASE implementations [8], especially when multiple graphs are to be embedded. Recent work explicitly comments on the difficulty of scaling spectral-based inference of RDPGs to large graph settings [9]. **Missing data.** A second drawback of ASE is its inability to properly account for missing data, meaning unobserved entries in \(\mathbf{A}\). On a related note, the ASE neglects the all-zeros diagonal in the adjacency matrix. These limitations were recognized more than a decade ago [4], yet to the best of our knowledge they have not been satisfactorily addressed in the RDPG literature. Indeed, repeated ASE computation to iteratively impute the unknown entries of \(\mathbf{A}\) using the inner-product of the embeddings estimated in the previous step lacks convergence guarantees, and multiplies the ASE complexity by the number of iterations [4]. **Streaming data.** A third scenario that ASE cannot address satisfactorily arises with streaming data from a dynamic network; i.e., when we observe a sequence of graphs over time and would like to track the evolution of the corresponding embeddings, ideally without having to store past observations. Network dynamics may include changes in the edges between a fixed set of nodes (e.g., monitoring a wireless network), the addition of new information (e.g., a user that ranks an item in a recommender system), or the deletion/addition of nodes (e.g., a new user in a social network). Especially for large graphs, re-computing the ASE from scratch at each time step is computationally demanding. Given the rotational ambiguity inherent to RDPGs, independently obtaining the ASE after each modification to the graph will likely result in misaligned embeddings that can hinder the assessment of changes.
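Before moving on to our contributions, the RDPG generative mechanism just reviewed can be made concrete with a short NumPy sketch. This is our own illustrative code (not the implementation released with this paper), and the function and variable names are ours; latent positions must be chosen so that all inner products lie in \([0,1]\).

```python
import numpy as np

def sample_rdpg(X, Xr=None, rng=None):
    """Sample an adjacency matrix from an RDPG with latent positions X (N x d).

    For a digraph, pass outgoing positions as X and incoming ones as Xr, so that
    P[i, j] = X[i] @ Xr[j]; all inner products must lie in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    P = X @ (X if Xr is None else Xr).T          # edge probability matrix
    A = (rng.random(P.shape) < P).astype(int)    # independent Bernoulli trials
    if Xr is None:                               # undirected: one trial per pair
        A = np.triu(A, k=1)
        A = A + A.T
    np.fill_diagonal(A, 0)                       # no self-loops
    return A

# Example: a 2-block SBM (a particular case of an RDPG) with d = 2.
X = np.vstack([np.tile([0.7, 0.1], (50, 1)), np.tile([0.1, 0.7], (100, 1))])
A = sample_rdpg(X)
```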
### _Contributions and paper outline_ We seek to address these limitations by (i) re-considering the underlying optimization problem of which ASE is a solution (Section II); and (ii) developing iterative embedding algorithms for the refined formulations (Sections III and IV). Unlike the ASE recipe of performing a spectral decomposition of \(\mathbf{A}\), borrowing recent low-rank matrix-factorization advances we advocate solving the non-convex embedding problem using gradient descent (GD) [10, 11]. Explicitly solving the optimization problem facilitates more precise and flexible GRL; e.g., it is straightforward to accommodate unobserved edges by including a mask matrix. Given the iterative nature of GD, warm-restarting the estimates of either new or existing nodes allows embedding multiple (possibly streaming) graphs, while preserving the alignment of consecutive embeddings as a byproduct. Discarding the residuals corresponding to the diagonal of \(\mathbf{A}\) offers better nodal representations as well as favorable problem structure, which we leverage in Section III-B to derive block-coordinate descent (BCD) iterations for efficient undirected RDPG inference. Applying GD to embed digraph nodes requires special care. As we argue in Section IV-A, inherent ambiguities in the directed RDPG model extend beyond a global rotation, and they may compromise representation quality and the interpretability benefits discussed earlier. We show that an effective way of retaining these desirable features is to impose orthogonality constraints on the matrix factors in the decomposition of \(\mathbf{A}\) - a novel GRL formulation for digraphs. This constraint in turn defines a smooth manifold, over which we optimize using a custom-made feasible method. We stress this is not the well-known Stiefel manifold, where matrices are constrained to be _orthonormal_ (and not just orthogonal as here1). This is no minor point. Algorithm construction thus requires careful definition of the tangent space, the Riemannian gradient and the retraction [12, 13], all of which we derive in Section IV-B. Comprehensive synthetic and real-world (wireless network and United Nations voting) data experiments in Section V demonstrate the interpretability, robustness, and versatility of the novel GRL framework. In the interest of reproducible research, the code and datasets used to generate all figures in this paper are publicly available at [https://github.com/marfiori/efficient-ASE](https://github.com/marfiori/efficient-ASE). Concluding remarks are in Section VI. Non-essential mathematical details are deferred to the Appendix. Footnote 1: We will henceforth use the term _orthonormal matrix_ to refer to any matrix \(\mathbf{T}\) such that \(\mathbf{T}^{\top}\mathbf{T}=\mathbf{I}\) (i.e., the columns of \(\mathbf{T}\) are orthonormal vectors). The term _orthogonal matrix_ will be reserved for those matrices whose columns are mutually orthogonal, but not necessarily of unit norm. All in all, relative to prior art our RDPG embedding framework offers a _better_ representation at a _competitive_ computational cost, and it is applicable to _more general_ settings. This full-blown journal paper extends our preliminary results [1, 2] in multiple significant ways. In addition to a more thorough treatment with extended discussions, unpublished proofs, accompanying software, and expanded numerical studies with new datasets, the BCD algorithm for undirected graphs as well as the treatment of digraphs in Section IV are completely new.
## II Problem statement and related work Let us formulate the embedding problem, beginning with undirected graphs. Consider stacking all the nodes' latent position vectors in the matrix \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{N}]^{\top}\in\mathbb{R}^{N\times d}\). Given an observed graph \(\mathbf{A}\) and a prescribed embedding dimension \(d\) (typically obtained using an elbow rule on \(\mathbf{A}\)'s eigenvalue scree plot [8]), the goal is to estimate \(\mathbf{X}\). Recalling the definition of the RDPG model, the edge-wise formation probabilities are the entries \(P_{ij}=\mathbf{x}_{i}^{\top}\mathbf{x}_{j}\) of the rank-\(d\), PSD matrix \(\mathbf{P}=\mathbf{X}\mathbf{X}^{\top}\). Since we do not allow for self-loops, the diagonal entries in \(\mathbf{A}\) should be zero and we thus have \(\mathbb{E}\left[\mathbf{A}\,\middle|\,\mathbf{X}\right]=\mathbf{M}\circ\mathbf{P}\), where \(\circ\) is the entry-wise or Hadamard product and \(\mathbf{M}=\mathbf{1}_{N}\mathbf{1}_{N}^{\top}-\mathbf{I}_{N}\) is a mask matrix with ones everywhere except on the diagonal, where it is zero. In lieu of a maximum likelihood estimator that is computationally infeasible beyond uninterestingly simple problems, here we advocate a least-squares (LS) approach [4] to obtain \[\hat{\mathbf{X}}\in\operatorname*{argmin}_{\mathbf{X}\in\mathbb{R}^{N\times d}}\left\|\mathbf{M}\circ(\mathbf{A}-\mathbf{X}\mathbf{X}^{\top})\right\|_{F}^{2}. \tag{1}\] In words, \(\hat{\mathbf{P}}=\hat{\mathbf{X}}\hat{\mathbf{X}}^{\top}\) is the best rank-\(d\) PSD approximation to the off-diagonal entries of the adjacency matrix \(\mathbf{A}\), in the Frobenius-norm sense. The RDPG model is only identifiable up to rotations, and accordingly the solution of (1) is not unique. Indeed, for any orthonormal matrix \(\mathbf{T}\in\mathbb{R}^{d\times d}\) we have that \(\mathbf{X}\mathbf{T}(\mathbf{X}\mathbf{T})^{\top}=\mathbf{X}\mathbf{X}^{\top}=\mathbf{P}\). Entrywise multiplication with \(\mathbf{M}=\mathbf{1}_{N}\mathbf{1}_{N}^{\top}-\mathbf{I}_{N}\) effectively discards the residuals corresponding to the diagonal entries of \(\mathbf{A}\). If suitably redefined, the binary mask \(\mathbf{M}\) may be used for other purposes, such as modeling unknown edges if data are missing. For instance, in a recommender system we typically have the rating of each user over a very limited number of items. This (dis)information may be captured in (1) by zeroing out the entries of \(\mathbf{M}\) corresponding to the unknown edges. **Remark 1** (Adjacency Spectral Embedding): Typically the mask \(\mathbf{M}\) is ignored (and sometimes non-zero values are iteratively imputed to the diagonal of \(\mathbf{A}\) [4]), which results in a closed-form solution for \(\hat{\mathbf{X}}\). Indeed, if we let \(\mathbf{M}=\mathbf{1}_{N}\mathbf{1}_{N}^{\top}\) in (1), we have that \(\hat{\mathbf{X}}=\hat{\mathbf{V}}\hat{\boldsymbol{\Lambda}}^{1/2}\), where \(\mathbf{A}=\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{\top}\) is the eigendecomposition of \(\mathbf{A}\), \(\hat{\boldsymbol{\Lambda}}\in\mathbb{R}^{d\times d}\) is a diagonal matrix with the \(d\) largest-magnitude eigenvalues of \(\mathbf{A}\), and \(\hat{\mathbf{V}}\in\mathbb{R}^{N\times d}\) are the associated eigenvectors. This workhorse estimator is known as the Adjacency Spectral Embedding (ASE); see also [3] for consistency and asymptotic normality results, as well as applications of statistical inference with RDPGs.
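As a concrete illustration of Remark 1, a minimal sketch of the ASE computation is given below, assuming a symmetric adjacency matrix stored as a NumPy array. This is illustrative code only; it is not the reference implementation in Graspologic nor the one accompanying this paper.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def ase(A, d):
    """Adjacency Spectral Embedding: keep the d largest-magnitude eigenpairs of A
    and return V_hat * |Lambda_hat|**(1/2). Absolute values guard against small
    negative eigenvalues of a noisy adjacency matrix."""
    eigvals, eigvecs = eigsh(A.astype(float), k=d, which="LM")  # largest magnitude
    order = np.argsort(-np.abs(eigvals))                        # sort by |eigenvalue|
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs * np.sqrt(np.abs(eigvals))                   # column-wise scaling
```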
Given the ASE limitations outlined in Section I-A, we develop efficient gradient-based iterative solvers for the embedding problem (1). Beyond scalability, the algorithmic framework offers a natural pathway to facilitate embedding graph sequences. In the applications we study in Section V, said dynamic network data may be only partially observed, they could be acquired in a streaming fashion, while both the number of nodes and edges are allowed to vary across time. **Embedding digraphs.** Moving on to digraphs [14], recall that we now embed each node with two vectors, \(\mathbf{x}_{i}^{l},\mathbf{x}_{i}^{r}\in\mathcal{X}\subset\mathbb{R}^{d}\). Existence of a directed edge from node \(i\) to \(j\) is modeled as the outcome of a Bernoulli trial with success probability \((\mathbf{x}_{i}^{l})^{\top}\mathbf{x}_{j}^{r}\)[3]. Again, vertically stacking the embeddings into two matrices \(\mathbf{X}^{l},\mathbf{X}^{r}\in\mathbb{R}^{N\times d}\), we introduce the probability matrix \(\mathbf{P}=\mathbf{X}^{l}(\mathbf{X}^{r})^{\top}\) such that the expected value of the random adjacency matrix is \(\mathbb{E}\left[\mathbf{A}\mid\mathbf{X}^{l},\mathbf{X}^{r}\right]=\mathbf{M}\circ\mathbf{P}\). If we ignore the mask \(\mathbf{M}\), the embedding problem boils down to finding the best rank-\(d\) approximation to the (possibly asymmetric) adjacency matrix. One such solution may be obtained via the singular value decomposition (SVD) of \(\mathbf{A}\); i.e., \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\). We thus have that \(\hat{\mathbf{X}}^{l}=\hat{\mathbf{U}}\hat{\mathbf{\Sigma}}^{1/2}\) and \(\hat{\mathbf{X}}^{r}=\hat{\mathbf{V}}\hat{\mathbf{\Sigma}}^{1/2}\), with \(\hat{\mathbf{\Sigma}}\) containing only the \(d\) largest singular values, and \(\hat{\mathbf{U}}\) and \(\hat{\mathbf{V}}\) the associated singular vectors. This procedure is also known as ASE. As noted in [6], ambiguities with directed RDPGs can be more problematic than in the undirected case. Now given _any_ invertible matrix \(\mathbf{T}\) (not necessarily orthonormal), the embeddings \(\mathbf{Y}^{l}=\mathbf{X}^{l}\mathbf{T}\) and \(\mathbf{Y}^{r}=\mathbf{X}^{r}\mathbf{T}^{-\top}\) generate the same probability matrix as before; i.e., \(\mathbf{Y}^{l}(\mathbf{Y}^{r})^{\top}=\mathbf{X}^{l}\mathbf{T}(\mathbf{X}^{r}\mathbf{T}^{-\top})^{\top}=\mathbf{X}^{l}(\mathbf{X}^{r})^{\top}=\mathbf{P}\). As we show in Section IV-A, constraining matrices \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\) to be orthogonal and to have the same column-wise norms2 effectively limits this ambiguity without compromising the model's expressivity (now an admissible \(\mathbf{T}\) may only be orthonormal; see Proposition 2), all while preserving its interpretability. Given these considerations, our approach to embedding digraphs is to solve the following manifold-constrained optimization problem Footnote 2: Let \(\mathbf{x}_{i}^{l},\mathbf{x}_{i}^{r}\in\mathbb{R}^{N}\) be the \(i\)-th columns of \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\), respectively. When we say \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\) have the same column-wise norms we mean that \(\|\mathbf{x}_{i}^{l}\|_{2}=\|\mathbf{x}_{i}^{r}\|_{2}\) holds for all \(i=1,\ldots,d\). \[\{\hat{\mathbf{X}}^{l},\hat{\mathbf{X}}^{r}\}\in\operatorname*{argmin}_{\{\mathbf{X}^{l},\mathbf{X}^{r}\}\in\mathbb{R}^{N\times d}}\left\|\mathbf{M}\circ(\mathbf{A}-\mathbf{X}^{l}(\mathbf{X}^{r})^{\top})\right\|_{F}^{2}\] \[\text{s. 
to }(\mathbf{X}^{l})^{\top}\mathbf{X}^{l}=(\mathbf{X}^{r} )^{\top}\mathbf{X}^{r}\text{ diagonal.} \tag{2}\] In the absence of a mask, a solution of (2) is the legacy ASE. Indeed, \(\hat{\mathbf{X}}^{l}\) and \(\hat{\mathbf{X}}^{r}\) are obtained from orthonormal singular vectors and have the same column-wise norms because both \(\hat{\mathbf{U}}\) and \(\hat{\mathbf{V}}\) are right-multiplied by \(\hat{\mathbf{\Sigma}}^{1/2}\). To tackle the general case, a novel Riemannian GD method over the manifold of matrices with orthogonal columns is developed in Section IV-B. ### _Related work_ The low-rank matrix factorization problem (1) has a long history, with applications to recommender systems (where the objective is to complete a matrix of user-item ratings which is assumed to have low rank) [15]; or, in sensor localization from pairwise distances (the so-called Euclidean distance matrix) [16], just to name a couple of examples. Solution methods typically rely on spectral decomposition of the full data matrix (as in ASE), or by considering a convex relaxation via nuclear-norm minimization [17]. The latter is not best suited for our problem, where we are interested in the actual factors (not in \(\mathbf{P}\)), and their dimensionality could change with time due to e.g., node additions. Alternatively, over the last few years we have witnessed increased interest in non-convex optimization approaches for matrix factorization problems [10]. Our work may be seen as an effort in this direction. In particular, we bring to bear recent advances in first-order methods for matrix factorization problems and demonstrate impact to GRL (specifically, RDPG inference). The formulation (2) is novel to the best of our knowledge. To solve it we derive GD iterations over the manifold of orthogonal matrices, which is different from the Stiefel manifold and thus requires careful treatment given the unique geometric properties of our problem. The scalability of ASE, or any other spectral embedding method for that matter, has long been considered an issue [18]. This challenge is compounded when multiple graphs are to be embedded, especially in _batch_ settings where all graphs in the sequence are stored in memory [9]. Existing approaches seeking aligned embeddings rely on the spectral decomposition of some matrix whose dimension(s) grow linearly with the number of graphs [9, 19, 20]. In addition to increasing the computation cost of ASE, these methods are not applicable in streaming scenarios, where a possibly infinite sequence of graphs \(\{\mathbf{A}_{t}\}\) is observed and we want to recursively update the embeddings 'on-the-fly' as new graphs are acquired. There is an extensive collection of numerical linear algebra approaches to recursively update an eigendecomposition or SVD when the (adjacency) matrix is perturbed; e.g., [18]. However, these do not offer major computational savings except for specific types of changes (e.g., rank-\(1\) updates), and they may be prone to error accumulation as \(t\) increases [21]. Moreover, they can yield misaligned embeddings due to the rotational ambiguity of RDPGs. The sketching literature offers highly-scalable randomized algorithms [22]. Other than to initialize our iterative methods we do not consider those here, because we are interested in exact solutions to (1) and (2). In dynamic environments, not only does \(\mathbf{A}_{t}\) change over time, but new nodes may be added to the graph and current ones removed. 
Embedding previously unseen nodes corresponds to an inductive learning setting, generally regarded as beyond the capabilities of shallow embeddings as the one we are discussing here [7, Ch. 3.4], [23]. Previous efforts in this direction (that avoid re-computing eigendecompositions from scratch) either assume that the connections between the existing nodes, or, their current embeddings, do not change [24, 25]. In the latter case, a projection scheme onto the space of current embeddings produces an asymptotically \((N\rightarrow\infty)\) consistent ASE for the new node [25]. However, even if latent positions were time invariant, the estimation error of current nodes' embeddings propagates to the new estimates. We will use the projection-based estimate in [25] as initialization for new nodes in our GD algorithms, demonstrating benefits in accuracy and stability especially as several nodes are added, while at the same time refining previous nodes' embeddings. ## III Embedding Algorithms for Undirected Graphs We start with the embedding problem for undirected graphs. Recognizing limitations of state-of-the-art ASE implementations, here we first review a GD algorithm with well-documented merits for symmetric matrix completion, yet so far unexplored in RDPG inference. GD offers flexible computation of embeddings and a pathway towards tracking nodal representations in a streaming graph setting. We then show that the particular structure of the problem lends itself naturally to more efficient BCD iterations, and discuss the relative merits of the different approaches in terms of convergence properties, complexity, and empirical execution time. ### _Back to basics: Estimation via gradient descent_ Recall the embedding problem for undirected graphs in (1), and denote by \(f:\mathbb{R}^{N\times d}\to\mathbb{R}\) its smooth objective function \(f(\mathbf{X})=\|\mathbf{M}\circ(\mathbf{A}-\mathbf{X}\mathbf{X}^{\top})\|_{F}^ {2}\). Although the problem is not convex with respect to \(\mathbf{X}\), it is convex with respect to \(\mathbf{P}=\mathbf{X}\mathbf{X}^{\top}\). In the broad context of matrix factorization problems where the objective function depends on the product \(\mathbf{X}\mathbf{X}^{\top}\), the GD approach is often referred to as _factored GD_[26]. The workhorse GD algorithm generates embedding updates \[\mathbf{X}_{k+1}=\mathbf{X}_{k}-\alpha\nabla f(\mathbf{X}_{k}),\quad k=0,1,2,\ldots \tag{3}\] where \(\alpha>0\) is the stepsize, and the gradient of \(f\) is \(\nabla f(\mathbf{X})=4\left[\mathbf{M}\circ(\mathbf{X}\mathbf{X}^{\top}- \mathbf{A})\right]\mathbf{X}\). Recall that \(\mathbf{A}\) and \(\mathbf{M}\) are known symmetric matrices that specify the problem instance. There have been several noteworthy advances in the study of GD's convergence (including rates) for this non-convex setting, as well as accelerated variants [10, 26, 27, 28, 29, 11, 12]. For the RDPG embedding problem dealt with here, it follows that if the initial condition \(\mathbf{X}_{0}\) is close to the solution of (1), the GD iteration (3) converges linearly to \(\hat{\mathbf{X}}\)[10, 26, 11]. **Proposition 1**: _Let \(\hat{\mathbf{X}}\) be a solution of (1). 
Then there exist \(\delta>0\) and \(0<\kappa<1\) such that, if \(d(\mathbf{X}_{0},\hat{\mathbf{X}})\leq\delta\), we have_ \[d(\mathbf{X}_{k},\hat{\mathbf{X}})\leq\delta\kappa^{k},\quad\forall\;k>0 \tag{4}\] _where \(\{\mathbf{X}_{k}\}\) are GD iterates (3) with appropriate constant stepsize, and \(d(\mathbf{X},\hat{\mathbf{X}}):=\min_{\mathbf{W}}\|\mathbf{X}\mathbf{W}-\hat {\mathbf{X}}\|_{F}^{2}\) s. to \(\mathbf{W}^{\top}\mathbf{W}=\mathbf{W}\mathbf{W}^{\top}=\mathbf{I}_{d}\) is a matrix distance accounting for the rotational invariance._ Although there are specific \(\mathbf{X}_{0}\) which correspond to sub-optimal stationary points, in our experience GD converges to the global optimum when initialized randomly. **Remark 2** (Warm restarts to embed multiple graphs): On top of being flexible to handle missing data encoded in \(\mathbf{M}\), this approach also allows to track the latent positions of a graph sequence \(\{\mathbf{A}_{t}\}\) using warm restarts. That is, after computing the embeddings \(\mathbf{X}_{t}\) of the graph with adjacency matrix \(\mathbf{A}_{t}\), we initialize the GD iterations (3) with \(\mathbf{X}_{t}\) to compute \(\mathbf{X}_{t+1}\) corresponding to \(\mathbf{A}_{t+1}\). This way, unless the graphs in times \(t\) and \(t+1\) are uncorrelated, the embedding of each node will transition smoothly to its new position (up to noise). Experiments demonstrating this longitudinal stability property [9] are presented in Sections V-B and V-C. ### _Block coordinate descent_ Here we develop a BCD method for solving (1), which turns out to be quite efficient. The algorithm updates one row of \(\mathbf{X}\) at a time in a cyclic fashion, by minimizing the objective function \(f\) with respect to the corresponding row (treating all other entries of \(\mathbf{X}\) as constants, evaluated at their most recent updates). In general, this row-wise sub-problem may not admit a simple solution; however, we show that due to the structure of the mask matrix \(\mathbf{M}\) the updates are given in closed form. Let \(f(\mathbf{X})=\|\mathbf{M}\circ(\mathbf{A}-\mathbf{X}\mathbf{X}^{\top})\|_{F }^{2}\) and recall \(\mathbf{x}_{i}^{\top}\) is the \(i\)-th row of \(\mathbf{X}\). The gradient \(\nabla_{i}f\) of \(f\) with respect to \(\mathbf{x}_{i}\) is \[\nabla_{i}f(\mathbf{X}) =\left(-(\mathbf{M}\circ\mathbf{A})_{i}\mathbf{X}+((\mathbf{M} \circ\mathbf{X}\mathbf{X}^{\top})\mathbf{X})_{i}\right)^{\top}\] \[=-\mathbf{X}^{\top}(\mathbf{M}\circ\mathbf{A})_{i}^{\top}+ \mathbf{X}^{\top}(\mathbf{M}\circ\mathbf{X}\mathbf{X}^{\top})_{i}^{\top}, \tag{5}\] where \((\cdot)_{i}\) stands for the \(i\)-th row of the matrix argument. Note that since the graph has no self-loops (i.e., the diagonal entries of \(\mathbf{A}\) are zero), then the entry-wise product of \(\mathbf{A}\) with \(\mathbf{M}\) is vacuous over the diagonal. Also because \(A_{ii}=0\), the term \(\mathbf{X}^{\top}(\mathbf{M}\circ\mathbf{A})_{i}^{\top}=\mathbf{X}^{\top}( \mathbf{A})_{i}^{\top}\) in (5) does not depend on \(\mathbf{x}_{i}\). More importantly, \(\mathbf{X}\mathbf{X}^{\top}\) clearly depends on \(\mathbf{x}_{i}\), and this would challenge solving \(\nabla_{i}f(\mathbf{X})=\mathbf{0}_{d}\) to obtain a minimizer [due to the trilinear form of the second term in (5)]. However, a close re-examination of \(\mathbf{X}^{\top}(\mathbf{M}\circ\mathbf{X}\mathbf{X}^{\top})_{i}^{\top}\) suggests this purported challenge can be overcome. 
First, observe that \[(\mathbf{M}\circ\mathbf{X}\mathbf{X}^{\top})_{i}=\mathbf{x}_{i}^{\top}\mathbf{X}^{\top}-\mathbf{r}_{i}^{\top},\] where \(\mathbf{r}_{i}\in\mathbb{R}^{N}\) is a column vector with zeros everywhere except in entry \(i\), where it takes the value \(\mathbf{x}_{i}^{\top}\mathbf{x}_{i}\). Hence, \[\mathbf{X}^{\top}(\mathbf{M}\circ\mathbf{X}\mathbf{X}^{\top})_{i}^{\top}=\mathbf{X}^{\top}(\mathbf{X}\mathbf{x}_{i}-\mathbf{r}_{i})=(\mathbf{X}^{\top}\mathbf{X}-\mathbf{x}_{i}\mathbf{x}_{i}^{\top})\mathbf{x}_{i}.\] All in all, we have a simplified expression for the gradient \[\nabla_{i}f(\mathbf{X})=-\mathbf{X}^{\top}(\mathbf{A}_{i})^{\top}+(\mathbf{X}^{\top}\mathbf{X}-\mathbf{x}_{i}\mathbf{x}_{i}^{\top})\mathbf{x}_{i}. \tag{6}\] Now, define \(\mathbf{R}=\mathbf{X}^{\top}\mathbf{X}-\mathbf{x}_{i}\mathbf{x}_{i}^{\top}\) and notice this matrix does not depend on \(\mathbf{x}_{i}\). Therefore, from (6) it follows that the equation \(\nabla_{i}f(\mathbf{x}_{i})=\mathbf{0}_{d}\) is _linear_ in \(\mathbf{x}_{i}\), namely \(\mathbf{R}\mathbf{x}_{i}=\mathbf{X}^{\top}(\mathbf{A}_{i})^{\top}\). The pseudo-code of the algorithm is tabulated under Algorithm 1. The \(d\times d\) matrix \(\mathbf{R}\) is invertible provided that \(\mathbf{X}\) has rank \(d\). This also implies that the row-wise sub-problem has a unique minimizer, which is key to establishing convergence of BCD to a stationary point [30, Prop. 2.7.1]. It is worth reiterating that this favorable linear structure is lost in the absence of a mask matrix \(\mathbf{M}\) (cf. ASE in Remark 1). Since in RDPG embeddings we typically have \(d\ll N\), solving multiple \(d\times d\) linear systems is affordable; especially when compared to matrix-vector operations of order \(\Theta(N^{\gamma})\), \(\gamma>1\), like in GD. ### _Complexity and execution time analyses_ We compare four computational methods to obtain RDPG embeddings of undirected graphs: the ASE based on (i) full eigendecomposition, and (ii) truncated SVD as implemented in Graspologic [8]; (iii) GD initialized with the randomized-SVD (RSVD) [22] (we account for the RSVD in the execution time); and (iv) randomly initialized BCD as in Algorithm 1. The full eigendecomposition of \(\mathbf{A}\) has worst-case \(\Theta(N^{3})\) complexity, while for sparse graphs the \(d\) dominant components can be obtained with \(\Theta(Nd)\) per-iteration cost. For GD, the per-iteration computational cost incurred to evaluate \(\nabla f(\mathbf{X})\) is dominated by the matrix multiplications, which is \(\Theta(N^{2}d)\) for a naive implementation. The number of iterations depends on \(\mathbf{X}_{0}\), but even with a favorable initialization the runtime is still higher than the \(\Theta(Nd)\) of truncated SVD-based ASE. A refined convergence-rate analysis of GD for the symmetric matrix completion problem is presented in [11]. Although in general it is tricky to compare the complexity of GD against BCD approaches, we can evaluate the per-iteration computational cost of both methods (for BCD this means a whole cycle over the rows of \(\mathbf{X}\) is completed). In both cases, each entry of the matrix \(\mathbf{X}\) is updated exactly once. Each cycle consists of \(N\) instances of \(d\times d\) linear systems, so this is \(\Theta(Nd^{3})\) in the worst case. In addition, in our experience Algorithm 1 converges in fewer iterations than the GD method.
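For concreteness, minimal NumPy renderings of the two updates discussed in this section, namely the masked gradient step (3) and one cycle of the row-wise BCD scheme just derived (tabulated as Algorithm 1 below), are sketched next. This is our own illustrative code with a fixed stepsize and no stopping criterion, not the implementation released with the paper.

```python
import numpy as np

def gd_step(X, A, M, alpha=1e-3):
    """One factored gradient descent step (3) on f(X) = ||M o (A - X X^T)||_F^2."""
    grad = 4.0 * (M * (X @ X.T - A)) @ X
    return X - alpha * grad

def bcd_cycle(X, A):
    """One full cycle of the row-wise BCD updates (Algorithm 1)."""
    R = X.T @ X
    for i in range(X.shape[0]):
        R -= np.outer(X[i], X[i])        # remove row i's contribution to X^T X
        b = X.T @ A[i]                   # right-hand side X^T (A_i)^T
        X[i] = np.linalg.solve(R, b)     # d x d linear system for the new row
        R += np.outer(X[i], X[i])        # add the updated row back
    return X
```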
```
0: Initial \(\mathbf{X}\leftarrow\mathbf{X}_{0}\)
1: Compute \(\mathbf{R}=\mathbf{X}^{\top}\mathbf{X}\)
2: repeat
3:   for \(i=1:N\) do
4:     \(\mathbf{R}\leftarrow\mathbf{R}-\mathbf{x}_{i}\mathbf{x}_{i}^{\top}\)
5:     \(\mathbf{b}\leftarrow\mathbf{X}^{\top}(\mathbf{A}_{i})^{\top}\)
6:     \(\mathbf{x}_{i}\leftarrow\) solution of \(\mathbf{R}\mathbf{x}=\mathbf{b}\)
7:     \(\mathbf{R}\leftarrow\mathbf{R}+\mathbf{x}_{i}\mathbf{x}_{i}^{\top}\)
8:   end for
9: until convergence
10: return \(\mathbf{X}\)
```
**Algorithm 1** Block coordinate descent (BCD) In Fig. 1 we compare the execution times of the aforementioned methods (i)-(iv) as a function of \(N\). For ASE, we use the SciPy optimized implementation of the eigendecomposition in Python, as in state-of-the-art RDPG inference packages such as Graspologic [8]. Our GD and BCD implementations are in pure Python, not optimized for performance. For each \(N\), we sampled several 2-block SBM graphs, with connection probabilities of \(p=0.5\) (within block) and \(q=0.2\) (across blocks). Community sizes are \(N/3\) and \(2N/3\). We let \(d=2\) in all cases. Results are averaged over \(10\) Monte Carlo replicates, and corresponding standard deviations are depicted in Fig. 1. In all cases, the methods converge to a solution of (1). The obtained cost function is very similar for each run, with slightly lower values for the GD and BCD methods because they are solving the problem with the zero-diagonal restriction. As expected, BCD exhibits competitive scaling with the truncated SVD-based ASE, and can embed graphs with \(N=20000\) nodes in just over a minute. ## IV Embedding Algorithms for Digraphs Shifting gears to embedding nodes in digraphs, we start with a close examination of the ambiguities inherent to the directed RDPG model and justify the need for orthogonality constraints on the factors' columns. To compute the desired nodal representations, we then develop a first-order feasible optimization method in the manifold of matrices with orthogonal columns. ### _On the interpretability of the directed RDPG_ Recall that if \(\{\hat{\mathbf{X}}^{l},\hat{\mathbf{X}}^{r}\}\) is a minimizer of \(f(\mathbf{X}^{l},\mathbf{X}^{r})=\|\mathbf{M}\circ(\mathbf{A}-\mathbf{X}^{l}(\mathbf{X}^{r})^{\top})\|_{F}^{2}\), then so is \(\{\hat{\mathbf{X}}^{l}\mathbf{T},\hat{\mathbf{X}}^{r}\mathbf{T}^{-\top}\}\) for any invertible matrix \(\mathbf{T}\). Let us now discuss why \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\) should be constrained to be orthogonal and have the same column-wise norms. In other words, why do we need the constraints in (2) to obtain useful embeddings when the graph is directed? To gain some insights, suppose we ignore these constraints altogether and use GD to minimize \(f(\mathbf{X}^{l},\mathbf{X}^{r})\). Similar to (3), at iteration \(k+1\) we update \(\{\mathbf{X}^{l}_{k+1},\mathbf{X}^{r}_{k+1}\}\) as follows \[\mathbf{X}^{l}_{k+1} =\mathbf{X}^{l}_{k}-\alpha\nabla_{\mathbf{X}^{l}}f(\mathbf{X}^{l}_{k},\mathbf{X}^{r}_{k}), \tag{7}\] \[\mathbf{X}^{r}_{k+1} =\mathbf{X}^{r}_{k}-\alpha\nabla_{\mathbf{X}^{r}}f(\mathbf{X}^{l}_{k},\mathbf{X}^{r}_{k}), \tag{8}\] where \(\nabla_{\mathbf{X}^{l}}f(\mathbf{X}^{l},\mathbf{X}^{r})=2\left[\mathbf{M}\circ(\mathbf{X}^{l}(\mathbf{X}^{r})^{\top}-\mathbf{A})\right]\mathbf{X}^{r}\) and a similar expression holds for \(\nabla_{\mathbf{X}^{r}}f(\mathbf{X}^{l},\mathbf{X}^{r})\). The ASE offers an alternative baseline, which requires discarding the mask \(\mathbf{M}\).
ASE estimates \(\{\hat{\mathbf{X}}^{l},\hat{\mathbf{X}}^{r}\}\) have orthogonal columns because they are derived from the SVD of \(\mathbf{A}\). Same index columns in \(\hat{\mathbf{X}}^{l}\) and \(\hat{\mathbf{X}}^{r}\) have the same norm as well, since the orthonormal matrices \(\hat{\mathbf{U}}\) and \(\hat{\mathbf{V}}\) are _both_ right-multiplied by \(\hat{\mathbf{\Sigma}}^{1/2}\). However, if we minimize \(f(\mathbf{X}^{l},\mathbf{X}^{r})\) iteratively as in (7)-(8) to accommodate missing and streaming data, we may lose column-wise orthogonality with detrimental consequences we illustrate in the following example. **Example 1 (Bipartisan senate)** Consider a synthetic political dataset of votes cast by senators in support of laws, over a certain period of time. This may be regarded as a bipartite directed graph where nodes correspond to senators and laws, and the fact that senator \(i\) has voted affirmatively for law \(j\) is indicated by the existence of edge \((i,j)\). We examine a polarized scenario, where two political parties put forth laws for voting. Affirmative votes are more likely for senators from the party that introduced the law, and less likely for senators from the opposing party. There are also a few bipartisan laws, for which affirmative votes tend to be more balanced across parties. We simulated such a graph with 50 senators of each party and a total of 290 proposed laws (\(N=340\)). We compare the embeddings estimated through ASE and by GD [i.e., iterating (7) and (8) until convergence]. ASE results using \(d=2\) are shown in Figure 2 (left). As expected, the outward embeddings for laws and inward embeddings for senators are zero (since the former do not vote and the latter are not voted). Furthermore, the outward embeddings corresponding to senators of each party are close to each other and roughly orthogonal to those of the other party, reflecting the polarized landscape. Finally, the inward embeddings of laws submitted by each party are aligned with the corresponding cluster of senators, whereas bipartisan laws lie somewhere between both groups. Fig. 1: Execution time for embedding SBM graphs with up to \(N=24000\) nodes. As \(N\) grows, BCD exhibits competitive scaling to the state-of-the-art ASE algorithm implemented in the Graspologic package. On the other hand, inspection of Fig. 2 (right) reveals that GD converges to a solution where laws are not aligned with the corresponding party senators. Accordingly, the affinity of parties to their laws is less evident than before. In fact, it appears as if Party 1 is not as supportive of its laws as in the ASE-based visualization. While the input graph is the same for both methods and the total cost \(f(\hat{\mathbf{X}}^{l},\hat{\mathbf{X}}^{r})\) is smaller for the GD method, interpretability is hindered because of the larger ambiguity set in the absence of additional constraints. **Example 2** (Digraph with symmetric expectation): To further justify why orthogonality constraints are essential, consider a digraph sampled from a symmetric \(\mathbf{P}\) (i.e., the probability of both edge directions is the same, but each arc is independently sampled). It would be desirable that in this case the model enforced \(\mathbf{X}^{l}=\mathbf{X}^{r}\), since the outgoing and incoming behaviour of the nodes is the same. The general directed model should recover the subsumed undirected one and, naturally, the same should hold for the embedding method. However, these desiderata are not necessarily met.
As stated earlier, given an invertible matrix \(\mathbf{T}\), the embeddings \(\mathbf{Y}^{l}=\mathbf{X}^{l}\mathbf{T}\) and \(\mathbf{Y}^{r}=\mathbf{X}^{r}\mathbf{T}^{-\top}\) yield the same probability matrix \(\mathbf{P}=\mathbf{X}^{l}(\mathbf{X}^{r})^{\top}\). This implies that unless \(\mathbf{T}=\mathbf{T}^{-\top}\) (meaning \(\mathbf{T}\) is an orthonormal matrix corresponding to a rotation), we could apply _different_ transformations to the inward and outward embeddings and still obtain the same RDPG. Given these observations, consider a directed RDPG model where the embedding matrices \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\) are constrained to be orthogonal and of the same column-wise norm. The following result asserts that this suffices to ensure an admissible \(\mathbf{T}\) is orthonormal, hence reducing the model's ambiguity to a global rotation - just like in the undirected case. **Proposition 2**: _Let \(\mathbf{P}=\mathbf{X}^{l}(\mathbf{X}^{r})^{\top}\) be the probability matrix of a directed RDPG model, where \(\{\mathbf{X}^{l},\mathbf{X}^{r}\}\) are \(N\times d\) matrices with rank \(d\) such that \((\mathbf{X}^{l})^{\top}\mathbf{X}^{l}=(\mathbf{X}^{r})^{\top}\mathbf{X}^{r}=\mathbf{D}_{X}\) is diagonal. Let \(\mathbf{T}\in\mathbb{R}^{d\times d}\) be an invertible matrix such that \(\mathbf{Y}^{l}=\mathbf{X}^{l}\mathbf{T}\) and \(\mathbf{Y}^{r}=\mathbf{X}^{r}\mathbf{T}^{-\top}\) are also orthogonal with the same column-wise norms; i.e., \((\mathbf{Y}^{l})^{\top}\mathbf{Y}^{l}=(\mathbf{Y}^{r})^{\top}\mathbf{Y}^{r}=\mathbf{D}_{Y}\) is diagonal. Then, \(\mathbf{T}\) is an orthonormal matrix._ **Proof :** Combining \(\mathbf{Y}^{r}=\mathbf{X}^{r}\mathbf{T}^{-\top}\) with \((\mathbf{X}^{r})^{\top}\mathbf{X}^{r}=\mathbf{D}_{X}\) and \((\mathbf{Y}^{r})^{\top}\mathbf{Y}^{r}=\mathbf{D}_{Y}\) we find that \(\mathbf{D}_{X}=\mathbf{T}\mathbf{D}_{Y}\mathbf{T}^{\top}\). Proceeding analogously with \(\mathbf{Y}^{l}\) we further obtain that \(\mathbf{D}_{X}=\mathbf{T}^{-\top}\mathbf{D}_{Y}\mathbf{T}^{-1}\). Multiplying both identities results in \(\mathbf{D}_{X}^{2}=\mathbf{T}\mathbf{D}_{Y}^{2}\mathbf{T}^{-1}\). Thus, the columns of \(\mathbf{T}\) are linearly independent eigenvectors of a diagonal matrix. Furthermore, given \(\mathbf{D}_{X}=\mathbf{T}\mathbf{D}_{Y}\mathbf{T}^{\top}\) it follows that the above eigendecomposition is necessarily one with orthonormal eigenvectors. The constraints in (2) do not limit the expressiveness of the model, since they are compatible with those ASE implicitly imposes. Next, we develop a feasible first-order method that enforces the orthogonality constraint at all iterations. After convergence it is straightforward to equalize the resulting column-wise norms so that they are the same for both \(\hat{\mathbf{X}}^{l}\) and \(\hat{\mathbf{X}}^{r}\), without affecting the generated \(\mathbf{P}\); see Remark 3. ### _Optimizing on a manifold_ We have concluded that for the sake of interpretability and quality of the representation, it is prudent to impose that the matrices \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\) have orthogonal columns. One classical way to tackle this is by adding these constraints to the optimization problem as in (2), and solving it via Lagrangian-based methods. For some constraints with geometric properties, a more suitable and timely approach is to pose the optimization problem on a smooth manifold.
One can then resort to feasible methods that search exclusively over the manifold, i.e., the constraints are satisfied from the start and throughout the entire iterative minimization process [12, 13]. This way, we can think of the optimization as being _unconstrained_ because the manifold is all there is. In the sequel we explore this last idea. Interestingly, the space of matrices having orthogonal columns does not correspond to any well-known and well-studied manifold. Yet, we show the required geometric structure is present in our problem, and thus we have to define several objects as well as compute various operators to facilitate optimization [12, 13]. The conceptual roadmap is as follows. Recall that a smooth manifold \(\mathcal{M}\) can be locally approximated by a linear space, the so-called _tangent space_. If we consider the objective function \(f:\mathcal{M}\mapsto\mathbb{R}\) defined from the (Riemannian) manifold to \(\mathbb{R}\), then the Riemannian gradient of the function is an element of the tangent space. This _Riemannian gradient_, which will be denoted as \(\mathrm{grad}\,f\), can be computed as the projection of the Euclidean gradient \(\nabla f\) onto the tangent space. Having computed the gradient, a classical descent method consists of taking a certain step in the opposite direction. However, this step likely results in a point outside of the manifold, so we have to project it back to \(\mathcal{M}\). This projection might be computationally intensive, so the _retraction_ alternative is used instead. Next, we define more precisely our manifold, and derive the tangent space, the projection and finally the retraction. The manifold that most closely resembles ours is the so-called Stiefel manifold, which consists of matrices with orthogonal and unit-norm (i.e., orthonormal) columns \[St(d,N):=\{\mathbf{X}\in\mathbb{R}^{N\times d}:\mathbf{X}^{\top}\mathbf{X}=\mathbf{I}_{d}\}. \tag{9}\] But here we do not require unit-norm columns. Thus, let \(\mathbb{R}_{\star}^{N}=\mathbb{R}^{N}\setminus\{\mathbf{0}_{N}\}\) be the set of \(N\)-dimensional vectors without the null vector, and let \(\mathbb{R}_{\star}^{N\times d}\) be the product of \(d\) copies of \(\mathbb{R}_{\star}^{N}\). This open set is the set of \(N\times d\) matrices without null columns.
Fig. 2: Bipartisan senate example. ASE (left) and GD (right). Since ASE implicitly imposes equally normed orthogonal columns (as it is derived from an SVD), it produces interpretable embeddings. On the other hand, GD may result in laws and parties that are not aligned, and thus loses interpretability if no further constraints are imposed in the formulation.
We are interested in matrices with orthogonal columns, namely \[\mathcal{M}(d,N) :=\{\mathbf{X}\in\mathbb{R}_{\star}^{N\times d}:\mathbf{X}^{\top}\mathbf{X}\text{ is diagonal}\} \tag{10}\] \[=\{\mathbf{X}\in\mathbb{R}_{\star}^{N\times d}:\mathbf{M}\circ(\mathbf{X}^{\top}\mathbf{X})=\mathbf{0}_{d\times d}\},\] where once more \(\mathbf{M}=\mathbf{1}_{d}\mathbf{1}_{d}^{\top}-\mathbf{I}_{d}\) is a particular mask matrix, with zeros in the diagonal and ones everywhere else. The following proposition establishes that \(\mathcal{M}\) is actually a manifold (for notational convenience, we henceforth use \(\mathcal{M}\) instead of \(\mathcal{M}(d,N)\) since both \(d\) and \(N\) are fixed throughout). Moreover, \(\mathcal{M}\) is a Riemannian manifold since it is embedded in the vector space \(\mathbb{R}^{N\times d}\) equipped with the usual trace inner product.
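A simple way to produce elements of \(\mathcal{M}(d,N)\), and to check the defining property in (10), is to rescale the columns of an orthonormal matrix. The NumPy snippet below is only meant to make the definition concrete; note that the \(d\times d\) mask here is the one appearing in (10), not the \(N\times N\) data mask.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 4

# Orthonormal columns rescaled by arbitrary nonzero factors: a member of M(d, N).
Q, _ = np.linalg.qr(rng.normal(size=(N, d)))
X = Q * rng.uniform(0.5, 3.0, size=d)

mask = np.ones((d, d)) - np.eye(d)        # M = 1 1^T - I from (10)
print(np.linalg.norm(mask * (X.T @ X)))   # ~0: X^T X is diagonal
```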
The proofs of subsequent results can be found in the Appendix. **Proposition 3**: _The set \(\mathcal{M}\) in (10) is a differential manifold and its dimension is \(Nd-d(d-1)/2\). Furthermore, the tangent space at \(\mathbf{X}\in\mathcal{M}\) is given by_ \[\mathrm{T}_{\mathbf{X}}\mathcal{M}=\{\boldsymbol{\zeta}\in\mathbb{R}^{N\times d}:\mathbf{M}\circ\left(\boldsymbol{\zeta}^{\top}\mathbf{X}+\mathbf{X}^{\top}\boldsymbol{\zeta}\right)=\mathbf{0}_{d\times d}\}.\] To perform a manifold GD step, one needs to compute the Riemannian gradient of the function defined on \(\mathcal{M}\). We obtain \(\mathrm{grad}\,f\) as the projection of the Euclidean gradient [cf. (7)-(8)] onto the tangent space. A natural way to compute said projection is to first characterize and compute the projection to the normal space. Given \(\mathbf{X}\in\mathcal{M}\), the normal space at \(\mathbf{X}\) is \(T_{\mathbf{X}}\mathcal{M}^{\perp}=\{\mathbf{N}\in\mathbb{R}^{N\times d}:\langle\mathbf{N},\boldsymbol{\zeta}\rangle=\text{tr}(\mathbf{N}^{\top}\boldsymbol{\zeta})=0,\forall\,\boldsymbol{\zeta}\in T_{\mathbf{X}}\mathcal{M}\}\). A useful alternative characterization is given next. **Lemma 1**: _The normal space at \(\mathbf{X}\) is_ \[T_{\mathbf{X}}\mathcal{M}^{\perp}=\{\mathbf{X}\boldsymbol{\Lambda}\in\mathbb{R}^{N\times d}:\boldsymbol{\Lambda}\in\mathcal{S}_{d}\},\] _where \(\mathcal{S}_{d}=\{\mathbf{X}\in\mathbb{R}^{d\times d}:\mathbf{X}=\mathbf{X}^{\top},\text{diag}(\mathbf{X})=\mathbf{0}_{d\times d}\}\) is the set of \(d\times d\) symmetric matrices with null diagonal._ Computing the projection to the normal space requires some work. The result is given in the next lemma. **Lemma 2**: _Let \(\mathbf{X}\in\mathcal{M}\) and let \(\pi_{\mathbf{X}}^{\perp}:\mathbb{R}^{N\times d}\mapsto T_{\mathbf{X}}\mathcal{M}^{\perp}\) be the projection to the normal space. Then_ \[\pi_{\mathbf{X}}^{\perp}(\mathbf{Z})=\mathbf{X}s(2\mathbf{D}\mathbf{L}), \tag{11}\] _where \(s:\mathbb{R}^{d\times d}\mapsto\mathcal{S}_{d}\) is a symmetrizing function \(s(\mathbf{Z})=\frac{\mathbf{Z}+\mathbf{Z}^{\top}}{2}-\text{diag}(\mathbf{Z})\), \(\mathbf{D}=\left(\mathbf{X}^{\top}\mathbf{X}\right)^{1/2}\) and \(\mathbf{L}=\left(\mathbf{D}^{-1}\mathbf{X}^{\top}\mathbf{Z}\right)\circ\mathbf{F}\), where \(\mathbf{E}=\mathbf{1}_{d}\mathbf{1}_{d}^{\top}\mathbf{D}^{2}+\mathbf{D}^{2}\mathbf{1}_{d}\mathbf{1}_{d}^{\top}\) and \(\mathbf{F}\) has entries \(F_{ij}=E_{ij}^{-1}\)._ Note that (11) is of the form \(\mathbf{X}\boldsymbol{\Lambda}\), with \(\boldsymbol{\Lambda}\in\mathcal{S}_{d}\). It thus belongs to the normal space by virtue of the characterization in Lemma 1. The calculations to show it is indeed the projection are detailed in the Appendix, and boil down to proving that \(\mathbf{Z}-\pi_{\mathbf{X}}^{\perp}(\mathbf{Z})\) lives in the tangent space. Specifically, to establish (11) we take \(\mathbf{X}\boldsymbol{\Lambda}\) and derive conditions that \(\boldsymbol{\Lambda}\) must verify when imposing that \(\mathbf{Z}-\mathbf{X}\boldsymbol{\Lambda}\in\mathrm{T}_{\mathbf{X}}\mathcal{M}\). After some derivations, we find \(\boldsymbol{\Lambda}=s(2\mathbf{D}\mathbf{L})\), with the auxiliary matrices \(\mathbf{D},\mathbf{E},\mathbf{F}\) and \(\mathbf{L}\) defined in Lemma 2. Finally, the desired projection to the tangent space is given as follows. **Proposition 4**: _Let \(\mathbf{X}\in\mathcal{M}\)._
_The projection to the tangent space \(\pi_{\mathbf{X}}:\mathbb{R}^{N\times d}\mapsto T_{\mathbf{X}}\mathcal{M}\) can be computed as:_ \[\pi_{\mathbf{X}}(\mathbf{Z})=\mathbf{Z}-\pi_{\mathbf{X}}^{\perp}(\mathbf{Z})=\mathbf{Z}-\mathbf{X}s(2\mathbf{D}\mathbf{L}).\]
**Algorithm 2** Riemannian Gradient Descent (GD) on \(\mathcal{M}\)
When we take a small step in the opposite direction of \(\mathrm{grad}\,f\), in general we fall outside \(\mathcal{M}\) and we have to project back to it. We need a projection from the tangent bundle to the manifold, or a retraction, which is more efficient in general. Given a full-rank matrix \(\mathbf{Z}\in\mathbb{R}^{N\times d}\), consider its decomposition \(\mathbf{Z}=\widetilde{\mathbf{Q}}\widetilde{\mathbf{R}}\), where \(\widetilde{\mathbf{Q}}\) is a matrix with orthogonal columns and \(\widetilde{\mathbf{R}}\) is upper triangular with ones in the diagonal. This decomposition is unique. Indeed, one may obtain \(\widetilde{\mathbf{Q}}\) by a Gram-Schmidt process, but skipping the normalization steps. A more efficient approach is to consider the classical QR decomposition (\(\mathbf{Z}=\mathbf{Q}\mathbf{R}\), with \(\mathbf{Q}\) orthonormal and \(\mathbf{R}\) upper triangular), and compute \(\widetilde{\mathbf{Q}}=\mathbf{Q}\mathbf{D}_{R}\), where \(\mathbf{D}_{R}=\text{diag}(\mathbf{R})\) is the diagonal matrix with the diagonal entries of \(\mathbf{R}\). In a way, this modification of the QR decomposition shifts the "normalization" of the columns from the upper triangular factor towards the orthogonal factor. Note that \(\widetilde{\mathbf{Q}}\in\mathcal{M}\), and this decomposition will serve to define a retraction to the manifold in the next proposition. **Proposition 5**: _Let \(\mathbf{X}\in\mathcal{M}\) and \(\boldsymbol{\zeta}\in T_{\mathbf{X}}\mathcal{M}\) a tangent vector. Then, the mapping_ \[R_{\mathbf{X}}(\boldsymbol{\zeta})=\widetilde{\mathrm{qf}}(\mathbf{X}+\boldsymbol{\zeta})\] _is a retraction, where \(\widetilde{\mathrm{qf}}(\mathbf{A})\) denotes the \(\widetilde{\mathbf{Q}}\) factor of the modified QR decomposition described above, and the sum \(\mathbf{X}+\boldsymbol{\zeta}\) stands for the usual abuse of notation for embedded manifolds on vector spaces._ We now have all the ingredients for the GD method to minimize \(f(\mathbf{X}^{l},\mathbf{X}^{r})=\|\mathbf{M}\circ(\mathbf{A}-\mathbf{X}^{l}(\mathbf{X}^{r})^{\top})\|_{F}^{2}\) over \(\mathcal{M}\), which is tabulated under Algorithm 2. The convergence rate of Riemannian GD is the same as that of the unconstrained counterpart (i.e., producing points with \(\|\mathrm{grad}\,f\|\) smaller than \(\varepsilon\) in \(\mathcal{O}(1/\varepsilon^{2})\) iterations) [31]. The computational complexity of each iteration is dominated by the QR decomposition in the retraction. We extended the Pymanopt package [32] with a class for the manifold \(\mathcal{M}\) defined in Proposition 3, which forms part of the code available for this paper. **Remark 3** (Rescaling the factors' columns): Algorithm 2 does not quite solve (2). While both \(\{\mathbf{X}_{k}^{l},\mathbf{X}_{k}^{r}\}\) belong to \(\mathcal{M}\), the constraint \((\mathbf{X}_{k}^{l})^{\top}\mathbf{X}_{k}^{l}=(\mathbf{X}_{k}^{r})^{\top}\mathbf{X}_{k}^{r}\) will in general not be satisfied upon convergence. Dropping the iteration index for simplicity, let \(\bar{\mathbf{x}}_{i}^{l},\bar{\mathbf{x}}_{i}^{r}\in\mathbb{R}^{N}\) be the \(i\)-th columns of \(\mathbf{X}^{l}\) and \(\mathbf{X}^{r}\), respectively.
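Since the pseudocode table for Algorithm 2 is not reproduced here, the following NumPy sketch spells out one possible reading of the iteration: Euclidean gradients of the masked cost, projection onto the tangent space via Lemma 2 and Proposition 4, and the modified-QR retraction of Proposition 5. A fixed step size is used instead of the Armijo rule, and the initialization and stopping rule are illustrative rather than the authors' implementation.

```python
import numpy as np

def sym0(S):
    """The map s(.) of Lemma 2: symmetric part with the diagonal nulled."""
    S = (S + S.T) / 2
    return S - np.diag(np.diag(S))

def proj_tangent(X, Z):
    """pi_X(Z) = Z - X s(2 D L), cf. Lemma 2 and Proposition 4."""
    d2 = np.diag(X.T @ X)                        # squared column norms (D^2)
    E = d2[None, :] + d2[:, None]                # E_ij = d_i^2 + d_j^2
    L = ((X.T @ Z) / np.sqrt(d2)[:, None]) / E   # (D^{-1} X^T Z) o F
    return Z - X @ sym0(2 * np.sqrt(d2)[:, None] * L)

def retract(Y):
    """Modified QR retraction: the Q factor rescaled by diag(R) (Prop. 5)."""
    Q, R = np.linalg.qr(Y)
    return Q * np.diag(R)

def riemannian_gd(A, M, d, steps=500, alpha=0.01, seed=0):
    """Riemannian GD for ||M o (A - Xl Xr^T)||_F^2 with both factors on M."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    Xl, Xr = retract(rng.normal(size=(N, d))), retract(rng.normal(size=(N, d)))
    for _ in range(steps):
        Res = M * (A - Xl @ Xr.T)                # masked residual
        Gl, Gr = -2 * Res @ Xr, -2 * Res.T @ Xl  # Euclidean gradients
        Xl = retract(Xl - alpha * proj_tangent(Xl, Gl))
        Xr = retract(Xr - alpha * proj_tangent(Xr, Gr))
    return Xl, Xr
```

The column norms of the two factors can then be equalized as in Remark 3 without changing the product \(\hat{\mathbf{X}}^{l}(\hat{\mathbf{X}}^{r})^{\top}\).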
To obtain a feasible solution from the output of Algorithm 2, for each dimension \(i=1,\ldots,d\) we define scaling factors \(s_{i}=\|\bar{\mathbf{x}}_{i}^{l}\|_{2}/\|\bar{\mathbf{x}}_{i}^{r}\|_{2}\) and collect them in the diagonal matrix \(\mathbf{S}=\text{diag}(s_{1},\ldots,s_{d})\). We then rescale the columns of the embedding matrices via \(\mathbf{X}_{k}^{l}\leftarrow\mathbf{X}_{k}^{l}\mathbf{S}^{-1/2}\) and \(\mathbf{X}_{k}^{r}\leftarrow\mathbf{X}_{k}^{r}\mathbf{S}^{1/2}\), without affecting the value of the objective function but now satisfying the constraint in (2). **Example 3** (Bipartisan senate revisited): Going back to the bipartisan senate from Example 1, Fig. 3 depicts the solution of (2) for the same simulated bipartite senator-law digraph (imposing the orthogonality constraints and rescaling in Remark 3). Unlike in Example 1, the Riemannian GD algorithm on the manifold \(\mathcal{M}\) is able to recover the same structure as the ASE. Laws are now correctly aligned with their corresponding party, thus faithfully revealing the structure in the data. ## V Numerical Experiments and Applications In this section we illustrate our embedding algorithms' ability to produce accurate and informative estimates of nodal latent position vectors. We explore a variety of GRL applications and consider synthetic and real network data. Our test cases are designed to target ASE challenges outlined in Section I-A, namely: i) missing data; ii) embedding multiple networks; and iii) graph streams (with fixed and varying number of nodes). For each case we assess the results with respect to estimation accuracy, interpretability, and stability/alignment in dynamic environments. The code for all these experiments is available at [https://github.com/marfiori/efficient-ASE](https://github.com/marfiori/efficient-ASE). ### _Inference with missing data_ First we illustrate how GD-based inference can be useful for GRL with missing data. The setup is similar to that of Example 1, but here we rely on real United Nations (UN) General Assembly voting data [33]. For each roll call and country, the dataset includes if the country was present and if so the corresponding vote (either 'Yes', 'No', or 'Abstain') for each proposal. We analyze the associated bipartite digraph pertaining to a particular year, where nodes correspond to countries and roll calls, and an edge from a country to a roll call exists if it voted affirmatively. If the country was absent or abstained, we will tag that edge as unknown (\(M_{ij}=0\)). Fig. 4 depicts the node embeddings (\(d=2\)) of the graph from 1955, estimated by ASE (naively assuming unknown edges do not exist, \(A_{ij}=0\)) and Riemannian GD. Consider the countries, which are displayed as circles. We highlight four interesting cases: Russia, USA, France, and South Africa. At the time, the first two represented two poles of the world, and are naturally almost orthogonal to each other for both methods. Note furthermore how the ASE seems to indicate that South Africa is less likely to vote in agreement with Russia than (even) the USA, whereas the opposite is true for France. The problem comes from equating an absence or abstention to a negative vote. For instance, South Africa was only present in roughly one third of the roll calls, and it voted almost identically to the USA. The Riemannian GD method, which acknowledges unknown edges via the mask \(\mathbf{M}\), provides an embedding that reflects this agreement. 
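For concreteness, the masking rule just described (affirmative votes become edges, while absences and abstentions are tagged as unknown) can be coded up as follows; the encoding of the vote labels is an assumption for illustration, not the format of the dataset in [33].

```python
import numpy as np

def country_rollcall_block(votes):
    """votes[i, j] in {'yes', 'no', 'abstain', 'absent'} (illustrative labels).
    Returns the country-to-roll-call block of (A, M): an affirmative vote is an
    edge, while abstentions and absences are flagged as unknown (M_ij = 0)."""
    votes = np.asarray(votes)
    A = (votes == 'yes').astype(float)
    M = np.isin(votes, ['yes', 'no']).astype(float)
    return A, M

A_block, M_block = country_rollcall_block([['yes', 'absent'], ['no', 'abstain']])
print(A_block)   # [[1. 0.] [0. 0.]]
print(M_block)   # [[1. 0.] [1. 0.]]
```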
Something similar happens with France, which differed from the USA only in six roll calls. Four correspond to USA abstentions and France voting 'Yes', another one where the opposite happened (and thus both cases should not be accounted for in the optimization problem), and finally the roll call 10036 where France was one of only two countries to vote 'No' (the USA voted 'Yes'). Regarding the embeddings of roll calls marked with a cross in Fig. 4, note how 10036 is aligned with the Russian block of countries by ASE, but it is better placed as an intermediate proposal in Fig. 4 (right) - equally likely to be voted by all countries. Something similar occurs with roll call 10035, which dealt with the same subject of 10036, but met resistance from more countries (roughly a dozen, including the USA and France). In both cases several countries were not present or abstained during the voting. Incorrectly assuming these votes as negative by ASE leads to biased results. Much more can be said about the roll calls and their associated UN resolutions, but let us conclude the discussion by noting that roll call embeddings generated by Algorithm 2 form three clusters reflecting the geopolitical landscape at the time. There is a cluster for each pole (American and Russian), plus an intermediate one where both poles tend to vote similarly. On the other hand, ASE generates roll call embeddings that are incorrectly aligned (e.g., 10036), and a loose grouping of intermediate roll calls with shared voting from both poles. Fig. 4: UN General Assembly voting data for 1955. ASE (left) and Riemannian GD with mask matrix encoding present and absent (or abstained) voters (right). Our approach is able to assign the absent voters to the correct group (e.g., South Africa) and offers a more clear clustering of roll calls. Fig. 3: Solution to the embedding problem (2) for the bipartisan senate example. ASE (left) and Riemannian GD (right). Notice how both solutions are nearly identical [cf. unconstrained GD in Fig. 2 (right)], underscoring the importance of the orthogonality constraints. ### _Embedding multiple graphs: the batch case_ Suppose now that we observe \(m>1\) graphs \(\{\mathbf{A}_{t}\}_{t=1}^{m}\) and we are interested, for instance, in testing whether they are drawn from the same RDPG model, or, in tracking the embeddings over time. Assume that we can identify nodes across different observations; e.g., they correspond to labeled users in a social network and so a matching algorithm is not needed. Independently obtaining the ASE for each graph is undesirable because it yields arbitrarily rotated embeddings, a challenge that has motivated several recent research efforts. Indeed, a hypothesis test which involves solving a Procrustes problem to align the embeddings was put forth in [34]. An alignment alternative is to jointly embed all \(m\) graphs via a single'super-matrix' decomposition. The so-called _Omnibus_ embedding first forms an \(mN\times mN\) matrix derived from all \(\{\mathbf{A}_{t}\}_{t=1}^{m}\), and then computes its ASE which enjoys asymptotic normality [19]. The Unfolded ASE (UASE) also constructs an auxiliary matrix, but by horizontally stacking all \(\{\mathbf{A}_{t}\}_{t=1}^{m}\)[9, 20]. Nodal representations are then extracted from the SVD of this \(N\times mN\) matrix. Under some technical assumptions, the UASE provably offers desirable longitudinal and cross-sectional stability [9]. 
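As a rough sketch of the UASE construction just described (details such as scaling conventions may differ from those in [9, 20]; this is only meant to illustrate the stacking-plus-SVD idea):

```python
import numpy as np

def uase(adjacencies, d):
    """Unfolded ASE: horizontally stack the m adjacency matrices, take a
    rank-d SVD, and split the right factor into m time-specific embeddings."""
    N = adjacencies[0].shape[0]
    A_unf = np.hstack(adjacencies)                        # N x mN
    U, s, Vt = np.linalg.svd(A_unf, full_matrices=False)
    U, s, Vt = U[:, :d], s[:d], Vt[:d, :]
    X_anchor = U * np.sqrt(s)                             # shared across time
    X_t = [Vt[:, t * N:(t + 1) * N].T * np.sqrt(s)
           for t in range(len(adjacencies))]              # one block per graph
    return X_anchor, X_t
```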
However, the complexity and memory footprint of these _batch_ approaches grow linearly with \(m\), and they are only applicable to undirected graphs. In the context of the algorithms proposed in this paper, we may leverage their iterative nature and initialize them using the estimated embeddings of another related (e.g., contiguous in time) graph. Unless radical changes take place from one graph to the other, this so-called warm restart is expected to produce embeddings that are closely aligned, with the added benefit of converging in few iterations. **Stability of GD estimates.** Let us illustrate this (desirable) behaviour through a numerical example. We borrow the setting and code from [9]. Consider two graph samples drawn from a dynamic SBM with inter-community probability matrices \[\mathbf{P}_{1}=\left(\begin{smallmatrix}0.08&0.02&0.18&0.10\\ 0.02&0.20&0.04&0.10\\ 0.18&0.04&0.02&0.06\\ 0.10&0.10&0.02&0.06\\ \end{smallmatrix}\right),\ \mathbf{P}_{2}=\left(\begin{smallmatrix}0.16&0.16&0.04&0.10\\ 0.16&0.16&0.04&0.10\\ 0.04&0.04&0.09&0.02\\ 0.10&0.10&0.02&0.06\\ \end{smallmatrix}\right).\] Initially there are four communities. At time \(2\), the first two communities merge, community \(3\) moves, and community \(4\) has its connection probabilities unchanged. Ideally, when embedding both graphs: i) the representations of nodes in community \(4\) should not change (longitudinal stability); and ii) the time \(2\) embeddings of members of communities \(1\) and \(2\) should be similar, up to noise (cross-sectional stability). Fig. 5 displays the results for UASE [9], Omnibus embedding [19], independent ASE for each graph, and GD (warm-restarted at time \(2\)). As expected, independent ASE lacks longitudinal stability, and the Omnibus embedding fails to exhibit cross-sectional stability. Note how the time 2 Omnibus estimates of communities 1 and 2 remain separate, due to time 1 'interference' affecting this joint embedding. UASE and GD produce embeddings that fulfill both stability requirements i) and ii). However, GD yields a better overall representation for both graphs. This is quantified via the cost function (1) evaluated at each solution; see above each plot for the numerical values. Unlike the batch UASE, GD offers a pathway towards tracking nodal representations in a streaming graph setting - the subject dealt with next. ### _Model tracking for graph streams_ Consider now a monitoring scenario, where we observe a stream of time-indexed graphs \(\{\mathbf{A}_{t}\}\) and the goal is to track the underlying model. Different from the batch setting of the previous section, we are now unable to jointly process the entire graph sequence. This may be due to memory constraints or stringent delay requirements. We will still assume that nodes are identifiable across time, but the algorithm's computational cost and memory footprint may not increase with \(t\). #### V-C1 Fixed vertex set We first consider the setting where \(N\) is fixed and we would like to track the latent vectors \(\mathbf{X}_{t}\in\mathbb{R}^{N\times d}\).3 Previous efforts in this direction have been mainly motivated by the change-point detection problem; i.e., detecting if and when the generative model of the observed graph sequence changes [35, 36, 6, 37]. Our focus is on the related problem of estimating the embeddings' evolution. A couple noteworthy applications include recommender systems (where rankings are revealed, or even change, over time) [38] or, as we discuss below, monitoring wireless networks [39]. 
Footnote 3: We stick to undirected graphs for ease of exposition, but extensions to digraphs are straightforward and presented in the numerical experiments.
Fig. 5: Embeddings of two SBM graph realizations, where communities \(1\) and \(2\) merge, while community \(4\) keeps the connection probabilities with other groups. Observe how the GD approach (far right) manages to capture this behaviour, while providing the best representation for each graph individually (quantified by the smallest cost function values). Example adapted from [9].
Independent ASE computation for each \(\mathbf{A}_{t}\) suffers from the alignment issue already discussed. Instead, and supposing for now that \(\mathbf{M}\) can be ignored, it may well be the case that recursive methods to update the SVD of a perturbed matrix \(\mathbf{A}_{t}=\mathbf{A}_{t-1}+\mathbf{\Delta}_{t}\) suffice [18]. However, as we show in the following synthetic example, these approaches may also produce arbitrarily rotated estimates from one time-step to the next, and suffer from catastrophic error accumulation [21]. Our idea is instead to proceed as in Remark 2, and warm-restart the GD iterations with the previous time-step's estimate \(\hat{\mathbf{X}}_{t-1}\). We may further leverage the fact that \(\mathbf{X}_{t}\) is typically correlated with \(\mathbf{X}_{t-1}\) in order to improve the embeddings' accuracy over time. For example, suppose \(\mathbf{X}_{t-m}=\ldots=\mathbf{X}_{t}=\mathbf{X}\) over some interval of length \(m\). It is then prudent to estimate \(\mathbf{X}\) by solving (1), but using the average \(\tilde{\mathbf{A}}_{t}=1/m\sum_{k=t-m}^{t}\mathbf{A}_{k}\) instead [40]. Note that \(\tilde{\mathbf{A}}_{t}\) may be interpreted as the adjacency matrix of a weighted graph. Edge weights can also be modeled by an RDPG, where now the embeddings are such that \(\mathbf{X}\mathbf{X}^{\top}=\mathbb{E}\left[\mathbf{A}\right]\). Unlike the unweighted case, \(\mathbb{E}\left[\mathbf{A}\right]\) are not probabilities. Still, under mild assumptions on the weights' distribution, the solution of (1) for weighted \(\mathbf{A}\) is a consistent estimator of \(\mathbf{X}\) as \(N\rightarrow\infty\) [6]. These observations motivate well the two-stage tracking system depicted in Fig. 6. The stream of adjacency matrices \(\{\mathbf{A}_{t}\}\) is fed to the entry-wise filter \(\mathbf{F}(z)\), which outputs \(\{\mathbf{B}_{t}\}\). For instance, \(\mathbf{F}(z)\) may be a moving average of fixed length \(m\) as before. If memory is at a premium, we may use a single-pole IIR filter instead so that \(\{\mathbf{B}_{t}\}\) is an exponentially-weighted moving average of the input adjacency matrices. We may even drop the filtering stage altogether (setting \(m=1\)) to yield a least mean squares (LMS)-type online GD algorithm. We empirically demonstrate that this simple tracking system yields accurate and stable embeddings of dynamic RDPG graphs. **Tracking of a dynamic SBM.** Consider a dynamic SBM graph with \(N=200\) nodes and two communities. At each time-step \(t=0,1,2,\ldots\) a single randomly chosen node changes its community affiliation. We compare the tracking performance of (warm-restarted) GD and the fast, recursive SVD algorithm in [18]. The nodal embeddings for \(t=0\) and \(1\) (i.e., a single node changed affiliation) are depicted in Fig. 7 (top). Notice how online GD produces stable results, with a single vector moving from one cluster to the other. The rest of the nodes' embeddings remain virtually unchanged.
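For reference, a compact sketch of the tracking system of Fig. 6 that produces these online estimates, in its undirected form: an entry-wise single-pole IIR filter (exponentially weighted moving average) followed by a few warm-restarted gradient steps per time instant. The mask simply excludes the diagonal here, and the pole, step size and number of inner iterations are illustrative choices.

```python
import numpy as np

def track_embeddings(adjacency_stream, d, pole=0.9, inner_steps=20,
                     alpha=1e-3, seed=0):
    """Entry-wise IIR filter F(z) followed by warm-restarted GD on (1)."""
    rng = np.random.default_rng(seed)
    X, B = None, None
    for A in adjacency_stream:
        B = A.astype(float) if B is None else pole * B + (1 - pole) * A
        if X is None:
            X = rng.normal(scale=1.0 / np.sqrt(d), size=(A.shape[0], d))
        M = 1.0 - np.eye(A.shape[0])          # ignore the (hollow) diagonal
        for _ in range(inner_steps):          # warm restart: reuse previous X
            R = M * (B - X @ X.T)
            X = X + alpha * 4 * R @ X         # descent on ||M o (B - XX^T)||_F^2
        yield X.copy()
```

Replacing the inner loop with the Riemannian update of Algorithm 2 (and a retraction after each step) gives the online variant used for the RSSI digraphs below.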
On the other hand, the recursive SVD in [18] fails to preserve a common angular reference for \(\hat{\mathbf{X}}_{0}\) and \(\hat{\mathbf{X}}_{1}\). Another well-documented drawback of these incremental SVD methods is that, since they update only the \(d\) most significant components, the error \(\|\hat{\mathbf{X}}_{t}\hat{\mathbf{X}}_{t}^{\top}-\mathbf{P}_{t}\|_{F}\) increases with \(t\) [21]. Fig. 7 (bottom) illustrates this error-accumulation behavior, to be contrasted with online GD that keeps the error in check for all \(t\geq 0\).
Fig. 7: Two-block dynamic SBM in which a single node changes affiliation at each \(t\). Comparison between online GD and recursive SVD [18]. (top) Embeddings for the first two time-steps (\(d=2\)); the node that changed communities is highlighted in green. Best viewed in a color display. Note how the change of a single node produces markedly different results for [18], whereas online GD offers stable estimates. (bottom) Evolution of \(\|\hat{\mathbf{X}}_{t}\hat{\mathbf{X}}_{t}^{\top}-\mathbf{P}_{t}\|_{F}\). Solid line indicates median across ten realizations, with the range between first and third quartiles shown in a lighter color. Online GD exhibits uniformly bounded error, whereas [18] accumulates error as \(t\) grows.
**Wireless network monitoring.** To showcase an application of the proposed tracking algorithms, consider a Wi-Fi network from which a monitoring system periodically acquires the Received Signal Strength Indicator (RSSI) between Access Points (APs) - a feature typically available in enterprise-level deployments. We will use our GRL framework to flag network changes and eventually diagnose them. We analyze graphs \(\mathbf{A}_{t}\) whose nodes are the APs and the edge weights are the measured RSSI values (plus a constant offset so that all values are positive). Since these measurements are typically not symmetric, we have a digraph sequence. We rely on the dataset described in [41], which consists of hourly measurements between \(N=6\) APs at a Uruguayan school, over almost four weeks (\(m=655\) graphs). During the monitoring period, the network administrator moved an AP (\(i=4\)) at \(t\approx 310\). To track the AP embeddings, we run an online version of Algorithm 2. This entails a retraction after the Riemannian GD step, not shown in the diagram of Fig. 6. We use an IIR filter \(\mathbf{F}(z)\) with a pole at \(0.9\). Furthermore, we adopt a fixed stepsize \(\alpha=0.01\) instead of choosing it via the Armijo rule. The evolution of the online Riemannian GD estimates \(\hat{\mathbf{X}}_{t}^{l}\) and \(\hat{\mathbf{X}}_{t}^{r}\) for \(d=2\) is shown in Fig. 8. Different color palettes are used to distinguish the nodes, and as \(t\) increases the colors become lighter. Note how at all times there are two (almost) orthogonal clusters of nodes: c1) APs \(1\) and \(2\) (in the lower part of the plots); and c2) APs \(3\), \(5\) and (to a lesser extent) \(6\). AP \(4\) is embedded between both communities for all \(t\). Moreover, note how the _trajectory of each AP_ can be split into a couple of clear states, discernible as the colors transition from darker to lighter. This is indicative of the change in AP \(4\)'s position at roughly the middle of the monitoring period. Finally, movement within both AP clusters is mostly radial, hence dot products between cluster members are preserved. On the other hand, AP \(4\) moves transversally closer to c2, consistent with the information provided by the network administrator. It appears as if it was moved closer to AP \(5\), and AP \(1\) remains its closest member from c1.
Fig. 6: A diagram of the proposed tracking system. The entry-wise filter \(\mathbf{F}(z)\) implements an averaging operator, e.g., a fixed-length moving average.
#### V-C2 Time-varying node set
In dynamic environments it is not uncommon for nodes to join or leave the network. Going back to the wireless network test case, the question remains of how to proceed should an AP fail, or if the administrator decides to add a new one to improve coverage. Dealing with the former case is straightforward; if a node leaves the network at time \(t\), we simply drop the corresponding row in \(\hat{\mathbf{X}}_{t-1}\) and re-run the GD algorithm (warm-restarted from there). Node additions require more thought. Suppose that a single node \(i=N+1\) joins the network at time \(t\). Let \(\mathbf{a}_{N+1}=[A_{1,N+1},\ldots,A_{N,N+1}]^{\top}\in\{0,1\}^{N}\) be the \((N+1)\)-th column of \(\mathbf{A}_{t}\in\{0,1\}^{(N+1)\times(N+1)}\), excluding \(A_{N+1,N+1}=0\) and dropping the subindex \(t\) for notational convenience. Then, given \(\hat{\mathbf{X}}_{t-1}\in\mathbb{R}^{N\times d}\), we can embed node \(i\) by solving \[\hat{\mathbf{x}}_{N+1}=\operatorname*{argmin}_{\mathbf{\theta}\in\mathbb{R}^{d}}\|\mathbf{a}_{N+1}-\hat{\mathbf{X}}_{t-1}\mathbf{\theta}\|_{2}^{2}. \tag{12}\] This simple but intuitive out-of-sample embedding procedure was studied in [25], and shown to recover the true latent positions as \(N\to\infty\). If several nodes are added at a given time-step, they can all be embedded by solving multiple LS problems like (12). However, this procedure disregards the information from the connections between new nodes. Furthermore, if the embeddings of existing nodes are not updated, their growing inaccuracies as \(\mathbf{A}_{t}\) evolves will negatively impact future nodes' representations. As we show in the following numerical experiments, these drawbacks can be overcome by running our online GD-based algorithms to update _all embeddings_ \(\hat{\mathbf{X}}_{t}\), initializing existing nodes with \(\hat{\mathbf{X}}_{t-1}\) and new one(s) with \(\hat{\mathbf{x}}_{N+1}\) as in (12). **Dynamic random graph with growing vertex set.** Consider an Erdős-Rényi graph with a fixed connection probability \(p=0.1\), and initial number of nodes \(N_{0}=100\). At each time-step \(t\) we add a single node so that \(N_{t}=N_{t-1}+1\). The evolution of the error \(\|\hat{\mathbf{X}}_{t}\hat{\mathbf{X}}_{t}^{\top}-\mathbf{P}_{t}\|_{F}/\sqrt{N_{t}}\) is shown in Fig. 9. Note how (carefully warm-restarted) online GD exhibits bounded error behavior, in stark contrast with repeated LS-based embeddings as in [25]. Admittedly, this gain in accuracy comes with a modest increase in computation (a few GD steps), and an identical memory footprint (i.e., storing the current embeddings and the new adjacency matrix) as the baseline in [25]. **Tracking international relations from UN voting data.** Here we revisit the UN General Assembly voting data from Section V-A. Following the same bipartite digraph construction procedure, we study all yearly graphs from 1955 to 2015. In this dynamic network we have a time-varying node set. Roll calls change from one year to the next, and also several countries joined the UN later (while others have ceased to exist). We embed the first graph from 1955 using Riemannian GD initialized at random (as before, using \(d=2\)).
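The per-year update used here, and spelled out in the next paragraph, combines the out-of-sample step (12) for newly appearing nodes with a warm-restarted refinement of all embeddings. A minimal sketch, with the refinement routine left abstract since it is simply whichever (Riemannian) GD variant is being tracked:

```python
import numpy as np

def embed_new_node(X_prev, a_new):
    """Least-squares out-of-sample embedding, cf. (12)."""
    theta, *_ = np.linalg.lstsq(X_prev, a_new, rcond=None)
    return theta

def grow_embeddings(X_prev, a_new, refine):
    """Append the LS estimate as a new row, then refine all embeddings
    (e.g., a few warm-restarted GD steps on the enlarged graph)."""
    X0 = np.vstack([X_prev, embed_new_node(X_prev, a_new)])
    return refine(X0)
```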
For each successive year, we warm-restart Algorithm 2 with the embeddings from the previous year, while new nodes are initialized using the LS solution (12). Fig. 10 depicts the embeddings of four countries: USA, Israel, Cuba, and the USSR (later, the Russian Federation). We use a similar visualization style as in Fig. 8, with different color palettes used to distinguish among countries, and lighter tones indicating more recent years. Observe how the representations for the USA and Israel remain strongly aligned over the entire time horizon, which is consistent with their longstanding agreement on UN resolution matters. The embedding for the USSR is initially (nearly) orthogonal to the USA and Israel, with Cuba initially showing a greater affinity to the USA/Israel block. This is consistent with Cold War geopolitics of the time. Then, after 1959, Cuba's position shifts to the lower half-plane, becoming more aligned with the USSR. This is expected given Cuba's sharp shift in foreign policy as a result of the Cuban revolution, with its ideology being in agreement with that of the USSR. This polarized scenario remained unchanged until 1991. That year the embedding for the USSR (now the Russian Federation) moves closer to the USA/Israel block, which reflects the politics of the Russian Federation in the aftermath of USSR's dissolution. Cuba remains at an (almost) orthogonal position from the USA/Israel block, with Russia eventually shifting to a middle ground after the mid-2000's.
Fig. 8: Embeddings \(\hat{\mathbf{X}}_{t}^{l}\) (left) and \(\hat{\mathbf{X}}_{t}^{r}\) (right) for the RSSI digraph (\(d=2\)). Color palettes distinguish the APs and a lighter tone indicates larger values of \(t\). Best viewed in a color display. The network's change at \(t\approx 310\) is apparent. AP \(4\) was moved (\(i=4\)) closer to the upper cluster of APs.
Fig. 9: Dynamic Erdős-Rényi graph in which a single node is added at each \(t\). Comparison between online GD and out-of-sample LS embedding [25]. Evolution of \(\|\hat{\mathbf{X}}_{t}\hat{\mathbf{X}}_{t}^{\top}-\mathbf{P}_{t}\|_{F}/\sqrt{N_{t}}\). Solid line indicates median across ten realizations, with range between first and third quartiles shown in a lighter color. Once more, online GD exhibits uniformly bounded error, whereas the baseline method [25] accumulates error as \(t\) grows.
## VI Concluding Remarks
We developed a gradient-based spectral-embedding framework to estimate latent positions of RDPGs. Relative to prior art our algorithmic approaches offer better representation at a competitive computational cost, and they are more broadly applicable to settings with incomplete, dynamic, and directed network data. We motivated and proposed a novel manifold-constrained formulation to embed directed RDPGs, and developed novel Riemannian GD iterations to estimate interpretable latent nodal positions. The effectiveness of the GRL framework is demonstrated via reproducible experiments with both synthetic and real (wireless network and United Nations voting) data. We made all our codes publicly available. This work and its current limitations open up several exciting and challenging directions for future research. With regards to the streaming scenario, it would be of interest to develop lightweight online rules to adaptively determine the embedding dimension. Performing dynamic regret analyses of the online GD methods would be a valuable contribution, since such guarantees in non-convex settings are so far quite rare.
2308.05627
CoBaIR: A Python Library for Context-Based Intention Recognition in Human-Robot-Interaction
Human-Robot Interaction (HRI) becomes more and more important in a world where robots integrate fast in all aspects of our lives but HRI applications depend massively on the utilized robotic system as well as the deployment environment and cultural differences. Because of these variable dependencies it is often not feasible to use a data-driven approach to train a model for human intent recognition. Expert systems have been proven to close this gap very efficiently. Furthermore, it is important to support understandability in HRI systems to establish trust in the system. To address the above-mentioned challenges in HRI we present an adaptable python library in which current state-of-the-art Models for context recognition can be integrated. For Context-Based Intention Recognition a two-layer Bayesian Network (BN) is used. The bayesian approach offers explainability and clarity in the creation of scenarios and is easily extendable with more modalities. Additionally, it can be used as an expert system if no data is available but can as well be fine-tuned when data becomes available.
Adrian Lubitz, Lisa Gutzeit, Frank Kirchner
2023-08-10T15:15:26Z
http://arxiv.org/abs/2308.05627v1
# CoBaIR: A Python Library for Context-Based Intention Recognition in Human-Robot-Interaction ###### Abstract Human-Robot Interaction (HRI) becomes more and more important in a world where robots integrate fast in all aspects of our lives but HRI applications depend massively on the utilized robotic system as well as the deployment environment and cultural differences. Because of these variable dependencies it is often not feasible to use a data-driven approach to train a model for human intent recognition. Expert systems have been proven to close this gap very efficiently. Furthermore, it is important to support understandability in HRI systems to establish trust in the system. To address the above-mentioned challenges in HRI we present an adaptable python library in which current state-of-the-art Models for context recognition can be integrated. For Context-Based Intention Recognition a two-layer Bayesian Network (BN) is used. The bayesian approach offers explainability and clarity in the creation of scenarios and is easily extendable with more modalities. Additionally, it can be used as an expert system if no data is available but can as well be fine-tuned when data becomes available. ## I Introduction Our day-to-day lives are becoming increasingly involved with robotic devices. The industry is currently changing from static robotic environments to dynamic environments where humans collaborate with robots instead of operating robots. In private homes, robotic applications like robot vacuums and digital assistants for home automation are becoming regular household items [1]. Although this movement is drastically changing our society, interactions between humans and robots are very command-driven and unnatural in contrast to Human-Human Interaction (HHI) [2, 3]. While Large Language Models (LLM) like _GPT-3_[4], _BERT_[5], _LLaMA_[6], and _LaMDA_[7] show very promising results in general language understanding, they have problems with biases, alignment, uncertainty estimation, and most importantly they lack multimodal understanding of their surroundings which is a key feature for Intention Recognition needed in modern HRI applications [8]. In general, data-driven Intention Recognition systems [9, 10] have the drawback of relying on large and complex (high dimensional) human-robot interaction data. Because in most scenarios data is not available or impractical to record, data-driven approaches are not suitable for this problem. Bayesian context-based Intention Recognition is an approach to overcome those limitations and offer an expert system that can be fine-tuned with data when it becomes available. Existing research in this direction [11, 12, 13, 14, 15, 16] is promising but introduces very complex network structures which induce the need for the designer of an HRI scenario to have a profound knowledge of Bayesian probability theory. Furthermore, it makes adaptations and re-configuration of the network cumbersome. To overcome the aforementioned limitations of current systems we propose a two-layer BN for context-based Intention Recognition. The simple structure enables us to make several optimizations that allow the designer to concentrate on the HRI scenario instead of Bayesian probability theory. In [17] we proposed a first concept on how the design for context-based Bayesian Intention Recognition in HRI scenarios can be described in a more compact and intuitive way. 
In this paper, we introduce **Co**ntext-**B**ased **I**ntention **R**ecognition (_CoBaI_R), a python software library that comes with the power to infer intentions from the current context -- context describes every observable aspect in an HRI scenario. Furthermore, _CoBaI_R pays great attention to the design process of HRI scenarios. It provides a configuration format that decreases the number of values that need to be set during the design process of the Bayesian Network from an exponential to a linear scale. Additionally, it provides a Graphical User Interface (GUI) which visualizes the two-layer BN with its weights and offers an intuitive way of configuring it. This paper starts by pointing out the key challenges in Intention Recognition for HRI in Section II-A. In Section III we provide a detailed view of the proposed two-layer BN structure and highlight how this structure can make the process of designing HRI scenarios easier and faster. In Section IV we highlight the advantages of the proposed approach. We point out the most important features of the python implementation in Section V. In Section VI we show an example of how the described python library was used in a research project. Finally, in Section VII we conclude with the interpretation of the results and an outlook on future work. ## II Related Work During the extensive literature review conducted for this paper, it was observed that existing intention recognition implementations were often tightly coupled with specific modalities [18, 19] or designed exclusively for particular scenarios [20, 21]. In some cases, these limitations were found to coexist [22], further hindering the applicability of such implementations. However, as part of our objective to provide a generic framework for HRI within the KiMMI Project, we recognized the need for a solution that could exhibit high flexibility and adaptability across various modalities and scenarios. ### _Challenges in Intention Recognition_ HRI depends in many ways on the scenario at hand. For Intention Recognition we define the following challenges that need to be addressed in order to implement natural and meaningful HRI systems: _Hardware constraints:_ Just like humans, robots come in all shapes and colors. More precisely, social robots for HRI have different sensing modalities to perceive the human and its surrounding. While some robots are equipped with stereovision or RGBD-camera systems and multiple microphones for echolocation, as well as LiDAR for navigation, a simple digital assistant may only be equipped with one microphone. The designer of the HRI scenario needs to know which sensory modalities are available for the system in question. [23, 24, 25] _Application specifics:_ Furthermore, the designer needs to know about the application scenario which can vary from space applications over industrial to domestic applications. In all of these different application scenarios gestures, voice commands, etc. can mean different things. [23, 25] _Cultural differences:_ Research on cultural differences in social robotics is an often neglected topic although social robots will be deployed in multi-cultural place e.g. airports in the future in a more and more globalized world. One behavior may have different meanings in different cultures. Therefore the designer must be aware of the cultural differences, and for different cultures, different HRI scenarios must be designed. 
[25, 26] _Individual differences:_ When we think about developing robots that interact with humans in a very intuitive way we need to ask ourselves what is intuitive for us. Intuitive may be slightly different from person to person even within one cultural group. Humans interact slightly different on an individual level based on their knowledge about the interaction partner. If the interaction partner is not known a default is chosen which allows for adaptation in the future. While this behavior is very subtle and unconscious in HHI it is an important factor while designing HRI scenarios with the possibility for inter-personal adaptation. [24, 25, 26] _Trust & Acceptance:_ Trust in a robotic system is less scenario specific than the aforementioned challenges and can therefore not as obviously be integrated into the design process of an HRI scenario. Trust is primarily connected with the human's expectation of the behavior of a robot. If the robot behaves accordingly to the human's expectations the human can foresee the behavior and build a model of trust for the robot's abilities. Secondarily, it is connected with the explainability of a behavior. If the human is not able to foresee the robot's behavior because it is, e.g. not completely deterministic or too complex to foresee, the human will seek reasons for the observed behavior. If reasons can be found trust can still be maintained, while if no reason can be found the trust will decrease immensely. This model of trust can be very complex in a way that trust exists for specific abilities but not for others. The trust for specific abilities weighted with their specific importance for the human determines the acceptance in the system. [24, 23, 25] In Section III - VI we illustrate how we address these challenges, which advantages arise from the proposed approach, how it is implemented, and how it can be used to model an HRI scenario. ## III Architecture of the two-layer Bayesian Network for Context-based Intention Recognition In Section II-A we highlighted some of the key challenges in Intention Recognition in HRI. We propose a two-layer BN to address these challenges in a computational and data efficient as well as an intuitive way. The general structure of the BN for context-based Intention Recognition is depicted in Figure 1. In [12] and [15] Bayesian Networks (BNs) with 3 or more layers are proposed to additionally model actions. We believe the two-layered structure comes with several advantages over using three or more layers. From a usability point of view, it allows us to assume that the designer of the HRI scenario has no prior knowledge about Bayesian probability in general and BNs in specific. In this way every contexts can be treated in the same way, as an observable phenomenon. Using further layers would give actions, as suggested by [12], a special meaning. This special meaning however is not always valid and furthermore, it introduces a bias towards actions in the design of HRI scenarios. In cases where external context is of more importance than the performed action, the introduced bias towards actions can distort the design of the scenario. Fig. 1: The architecture of the two-layer Bayesian network allows for a high degree of flexibility In cases where external factors like the time have an influence on the intention the action itself should not play a predominant role. 
The action of grasping a mug could lead to the intention _make coffee_, but at night this intention becomes more unrealistic and a higher probability should be given to the intention _store mug_. This is a very simple example of how the action bias could lead to the intention _make coffee_ at night where the correct intention should be _store mug_. Modeling relevant observable phenomena as context reduces the action bias and the complexity of the BN and allows the designer of the scenario to concentrate on the specifics of the scenario without the need for in-depth knowledge about the underlying probabilistic modeling. The final step we took to uncouple probabilistic modeling from the intuitive design of HRI scenarios is to make some assumptions over the given two-layer structure. These assumptions reduce the exposed probabilistic notation and drastically reduce the number of values that need to be set by a human expert to describe a scenario.
### Independent contexts and intentions
The basic assumption is that all intentions are independent of each other, and likewise that all contexts are independent of each other. This allows for the strict two-layer structure which has only connections between contexts and intentions as depicted in Figure 1.
### Binary intentions
We consider all intentions as binary -- an intention is either present or not. This allows us to concentrate on the positive (an intention is present) case while designing the scenario and consider the negative (an intention is not present) case as its complement while calculating the probabilistic model. This small constraint already cuts the number of values that need to be set by the human expert in half. Contexts on the other hand can have as many discrete instantiations as needed to create a meaningful HRI scenario.
### Single Condition Assumption
We make the Single Condition Assumption (SCA) which implies that every context has an individual and independent influence on a specific intention. Using this assumption, we can approximate the conditional probability \(P(I_{m}|C_{1,l},C_{2,l},...,C_{k,l})\) as the average over all single condition probabilities \(P(I_{m}|C_{k,l})\), where \(I_{m}\) is the _m_-th intention and \(C_{k,l}\) is the _l_-th instantiation of the _k_-th context. This allows us to calculate the conditional probability of the _m_-th intention given one instantiation of every context in the following way: \[P(I_{m}|C_{1,l},C_{2,l},...,C_{k,l})=\frac{\sum_{\tilde{k}=1}^{k}P(I_{m}|C_{\tilde{k},l})}{k} \tag{1}\]
### Influence values on a Likert scale
We generalize the single condition probability \(P(I_{m}|C_{k,l})\) as an influence value \(v_{k,l,m}\) on a six-point Likert scale [27] for every context-intention-tuple \((C_{k,l},I_{m})\) to make them more manageable. The scale is mapped in the following fashion: \(0\mapsto 0\%;1\mapsto 5\%;2\mapsto 25\%;3\mapsto 50\%;4\mapsto 75\%;5\mapsto 95\%\). The above-mentioned assumptions reduce the number of values that need to be set by a human expert from an exponential growth given through \[V(i,j,c,n)=\sum_{j=1}^{j}c_{j}+i\times\prod_{i=1}^{i}n_{i}\times\prod_{j=1}^{j}c_{j} \tag{2}\] to a linear growth given through \[V(i,j,c)=(i+1)\times\sum_{j=1}^{j}c_{j} \tag{3}\] where \(V\) is the number of values to be set, \(i\) is the number of intentions, \(j\) is the number of contexts, \(c_{j}\) is the number of context instantiations for the _j_-th context, and \(n_{i}\) is the number of intention instantiations for the _i_-th intention.
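A small sketch of how Eq. (1) combines with the Likert mapping above; the dictionary layout and the example influence values are illustrative and not CoBaIR's internal representation.

```python
LIKERT = {0: 0.00, 1: 0.05, 2: 0.25, 3: 0.50, 4: 0.75, 5: 0.95}

def intention_probability(influences, observed):
    """P(I | C_1, ..., C_k) approximated as the average of the
    single-condition probabilities P(I | C_k), cf. Eq. (1)."""
    probs = [LIKERT[influences[(ctx, inst)]] for ctx, inst in observed.items()]
    return sum(probs) / len(probs)

# Example: the 'make coffee' intention given two observed contexts.
influences = {('action', 'grasp mug'): 4, ('time of day', 'night'): 1}
observed = {'action': 'grasp mug', 'time of day': 'night'}
print(intention_probability(influences, observed))   # (0.75 + 0.05) / 2 = 0.4
```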
The \(\sum^{j}c_{j}\) in Equations 2 and 3 describes the amount of _a priori_ probabilities for all context instantiations. The remaining term describes the values needed to fill the Conditional Probability Tables (CPTs) manually for Equation 2 and automatically for Equation 3. The SCA contributes in a huge way to the ease of designing HRI scenarios but ignores cases in which joint probabilities are necessary. An example could be the handling of voice commands through a robot that is able to estimate directed speech over Visual Voice Activity Detection (VVAD) as shown in [28]. The robot should infer the intention pick up tool with a higher probability if the speech command for picking up a tool was emitted **AND** the speech was directed towards the robot than one of them individually. Only the speech command should have a high probability to infer pick up tool but there is still a chance that the robot was picking up noise or the speech was not directed towards the robot. The context of directed speech on the other hand does not have a high probability for any intention individually. For those special cases, we provide the possibility to set (partially) conditioned influence values containing multiple contexts that provide more information when combined. [17] While the two-layer BN was originally not designed to handle temporal dependencies, it is possible to model the previously inferred intention as context. Using a recursive pattern like this, it is possible to model a temporal dependency under the Markov assumption [29]. ## IV Advantages of a two-layer Bayesian Network for Context-Based Intention Recognition _CoBaIR_ uses a two-layer BN to represent the dependencies between contexts and intentions. In this section we want to highlight the key advantages of this structure: ### Flexibility The biggest advantage of the architecture is its flexibility. It allows for the usage of any algorithm for context creation, whether it be probabilistic, heuristic, data-driven or any other approach. The generated contexts will be used as the input to the two-layer BN which fuses the context information to jointly infer a probability distribution over all possible intentions. On the one hand, using _CoBaIR_ as an expert system the simplifications explained in Section III allow the human designer to create a scenario in a fast and intuitive manner. Fast design and adaptation of HRI scenarios helps researchers to concentrate on the specifics of an experiment and therefore reach results faster and more reliably without any deeper knowledge about Bayesian probability and how to configure BNs. On the other hand the two-layer BN can be trained or fine-tuned with data, which allows to gradually shift from a system trained by an expert to a data-driven approach. Furthermore, the simple structure of the BN allows for the easy removal and addition of contexts. This makes an iterative prototyping approach, where the HRI scenario is build up over time, possible. ### Uncertainty Quantification and Explainability Uncertainty Quantification (UQ) is an often neglected topic, especially in the field vision based tasks [30]. The Bayesian approach for Intention Recognition offers the implicit advantage that it comes with a inherently good UQ due to the probabilistic nature of the model. Additionally, the compact structure of the two-layer BN allows users to easily identify the contexts that played the predominant role in the decision making. 
Using this interpretable compact structure we are able to generate explanations to understand the decisions made by a robot using _CoBaIR_ as its Intention Recognition system. Explainability and UQ in a system strongly increases the trust in the system and therefore the acceptance to use the system in general [31]. ### Modularity Another advantage of the described architecture is its modularity. Using a two-layer BN to fuse the output of different modules that provide contexts makes it possible to use existing solutions for context creation, like PAZ [32] which provides a large variety of models for visual perception, as well as use case specific models that need to be trained from scratch. Furthermore, it is possible to switch seamlessly between different models on the fly for evaluation and optimization of HRI scenarios. ### Handling missing input While most data-driven approaches have problems handling missing input [33] and additional data needs to be recorded or artificially generated, BNs provide the advantage of defining _a priori_ probabilities for the input. The _a priori_ probabilities for the context can be estimated by an expert or calculated from a few observations. In this way knowledge about missing input can be incorporated and during inference time the missing inputs will handled accordingly. The above-mentioned advantages of the two-layer BN highlight why we think a two-layer BN is suitable to enhance the design, inference and explainability of Intention Recognition in HRI scenarios. ## V CoBaIR: a Python Library _CoBaIR_ is a python library for **C**ontext-**B**ased **I**ntention **R**ecognition. The library allows to create complex HRI scenarios in a fast and intuitive manner. Furthermore, it provides a GUI which visualizes the underlying two-layer BN and guides through the configuration procedure. _CoBaIR_ is divided into two parts: ### _Core library_ The core library provides all the key features described in Section III to make the design of HRI scenarios intuitive and fast. It mainly provides the class BayesNet which handles the creation of the two-layer BN from a given configuration. The configuration is provided in YAML and its fields and format is depicted in Listing 1. The format contains the fields _contexts_, _instantiations_, _intentions_ and _decision_threshold_. _contexts_ gives names to the observable phenomena, like _weather_. _instantiations_ are the discrete instantiations of that phenomenon, like _cloudy, rainy, sunny_. The binary _intentions_ are the inferable intentions in the scenario, like _turn on sprinkler_. All _instantiations_ have an a priori probability which needs to be set and furthermore the _instantiations_ have an influence value for each _intention_. Additionally, there is a field _decision_threshold_ which can be a value between 0 and 1 and denotes the threshold an intention's likelihood needs to surpass during inference to be considered as the inferred intention. If the likelihood of the most likely intention is below the threshold, None is returned as the inferred intention. ``` context1: instantiating1:float instantiationm_j:float contextn: instantiating1:float. instantiatingm_j:float. instantiating1:float. instantiatingm_j:float. instantiating1:int#oneoutof[5,4,3,2,1,0]. instantiatingm_j:int#oneoutof[5,4,3,2,1,0]. instantiating1:int#oneoutof[5,4,3,2,1,0]. instantiatingm_j:int#oneoutof[5,4,3,2,1,0]. instantiatingm_j:int#oneoutof[5,4,3,2,1,0]. instantiatingm_j:int#oneoutof[5,4,3,2,1,0]. instantiatingm_j:int#oneoutof[5,4,3,2,1,0]. 
The core library provides the means to validate, load and save the configuration. With the information from the configuration a fully defined two-layer BN will be created. bnlearn [34] is utilized as a backend to handle the two-layer BNs and do inference on them. The Application Programming Interface (API) is fully documented and publicly available under [https://dfki-ric.github.io/CoBaIR/API](https://dfki-ric.github.io/CoBaIR/API). ### Graphical User Interface On top of the optimizations we described in Section III, we provide researchers and practitioners with a helpful GUI depicted in Figure 2. The GUI supports the designer in creating a configuration, makes sure it is always valid, and provides helpful insights in the case of an invalid configuration. Furthermore, it visualizes the two-layer BN in a live view which helps to keep a good overview of the configuration. The complete Python software package is available on PyPI. Additionally, the code is open sourced on GitHub and open for contributions. ## VI Application of CoBaIR We applied _CoBaIR_ in an interaction scenario in the project KiMMI SF1 to infer the intentions of a human operator. This user controlled a simulated human during the interaction with a simulated robotic arm, a Universal Robots UR5, mounted on a movable base. The simulated environment is depicted in Figure 3. The human can be navigated through the environment, which contains shelves in which different tools are stored, a working station with an inactive robotic system MANTIS2 that should be repaired, and on the left back side a dark area. Based on the human behavior the robotic system should infer the human's intention in order to react accordingly. For example, if the human wants to repair the inactive MANTIS robot, the supporting robot should bring the tool which is stored in the shelf, which is shown on the right side in Figure 3. Footnote 1: [https://robotik.dfki-bremen.de/en/research/projects/kiMimi-sf/](https://robotik.dfki-bremen.de/en/research/projects/kiMimi-sf/) Footnote 2: [https://robotik.dfki-bremen.de/en/research/robot-systems/mantils/](https://robotik.dfki-bremen.de/en/research/robot-systems/mantils/) The BN used to infer the human intentions is shown in Figure 4. The human behavior is measured based on four contexts: _hand opening_, _human pose_, _location of interest_, and _speech commands_, where the first three are determined in the simulation and the speech commands are captured from the operator of the simulation with a microphone. Each context is discretized to different values, e.g., _hand opening_ can be either _open_ or _closed_. This is shown on the left side of Figure 4 for each context. In the presented scenario, based on the given contexts, the following five intentions should be inferred: 1. _go work station_, i.e., the human wants to go to the work station; 2. _go dark space_, i.e., the human wants to go to the dark area; 3. _robot bring tool_, i.e., the human wants the robot to bring the tool stored in the shelf; 4. _robot stop_, i.e., the human wants the robot to stop its current action; 5. _robot store tool_, i.e., the human wants the robot to store the tool back to the shelf.
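To illustrate the kind of fusion the two-layer BN performs for this scenario, the following is a deliberately simplified toy sketch that combines normalized influence values for two of the intentions above, marginalizing unobserved contexts over their a priori probabilities. It is not the _CoBaIR_ implementation: _CoBaIR_ builds full CPTs from the influence values via Equations 2 and 3 and performs exact inference through bnlearn. All numbers, the instantiations of _speech commands_, and the scoring rule itself are illustrative assumptions.
```
# Toy sketch only: score-based fusion of context observations for two intentions.
# Not the CoBaIR implementation; priors, influence values and the discretization
# of "speech commands" are made up for illustration.

priors = {  # a priori probabilities per context instantiation
    "hand opening": {"open": 0.5, "closed": 0.5},
    "speech commands": {"bring tool": 0.2, "stop": 0.2, "store tool": 0.2, "none": 0.4},
}

influence = {  # influence values on a 0..5 scale
    "robot bring tool": {
        "hand opening": {"open": 4, "closed": 1},
        "speech commands": {"bring tool": 5, "stop": 0, "store tool": 0, "none": 1},
    },
    "robot stop": {
        "hand opening": {"open": 1, "closed": 3},
        "speech commands": {"bring tool": 0, "stop": 5, "store tool": 0, "none": 1},
    },
}

def infer(observed, decision_threshold=0.6):
    """Return the most likely intention, or None if it stays below the threshold."""
    scores = {}
    for intention, per_context in influence.items():
        score = 1.0
        for context, values in per_context.items():
            if context in observed:  # context was observed
                score *= values[observed[context]] / 5.0
            else:  # marginalize the unobserved context over its prior
                score *= sum(priors[context][inst] * val / 5.0
                             for inst, val in values.items())
        scores[intention] = score
    total = sum(scores.values())
    if total == 0:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] / total >= decision_threshold else None

print(infer({"speech commands": "bring tool", "hand opening": "open"}))
```
Running the sketch with an observed _bring tool_ speech command and an _open_ hand yields _robot bring tool_, mirroring the behavior described above.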
Using _CoBaIR_ and the included GUI shown in Figure 2, the BN shown in Figure 4 could easily be designed and optimized for the described scenario. With the resulting BN all human intentions could be reliably inferred and the correct reactions of the UR5 could be triggered to realize a successful interaction between the human and the robotic system. During the KiMMI SF project we followed an agile development process which led to multiple incremental as well as complete changes in the design of the HRI scenario. _CoBaIR_ enabled us to incorporate these changes quickly and effectively in the development process. ## VII Conclusion and future work We presented the Python library _CoBaIR_ in this paper. We demonstrated that the concept from [17] using a two-layer BN with the assumptions highlighted in Section III can be effectively implemented. Additionally, we provided a GUI to visualize and guide the design process for HRI scenarios. In Section VI we showed that _CoBaIR_ was successfully used in the KiMMI SF project. We advise practitioners to use the state-of-the-art models for context recognition from the open source library PAZ [32], which is constantly updated with perception models for autonomous systems. In the future we plan on providing tutorials and examples on how to use _CoBaIR_ with PAZ. While it is theoretically possible to train or fine-tune the two-layer BN with data from the scenario, so far fine-tuning and data-driven training from scratch have not been tested. In future works we want to investigate the capability of data-driven training and fine-tuning within _CoBaIR_. Additionally, we want to quantify the effect in terms of quality and speed which _CoBaIR_ has on the design of HRI scenarios, conducting user studies comparing the time and cognitive load to create an HRI scenario using our solution in contrast to creating an HRI scenario using CPTs for the creation of the BN. Fig. 2: The GUI of _CoBaIR_ supports the designer of an HRI scenario. We will be incorporating _CoBaIR_ in future projects, and by making _CoBaIR_ open source we hope to provide a helpful tool for researchers and practitioners in HRI all over the world. Everyone is welcome to use, give feedback and contribute to _CoBaIR_ through GitHub. ## Acknowledgment This work was supported through a grant of the German Federal Ministry of Economic Affairs and Climate Action (BMWi, FKZ 50 RA 2022).
2301.11359
Discrete multilinear maximal operators and pinned simplices
We prove that any given subset of $\mathbb{Z}^d$ of upper density $\delta>0$ will necessarily contain, in an appropriate sense depending on $\delta$, an isometric copy of all large dilates of any given non-degenerate $k$-simplex, provided $d\geq 2k+3$. This provides an improvement in dimension, from $d\geq 2k+5$, on earlier work of Magyar. We in fact establish a stronger pinned variant. Key to our approach are new $\ell^2$ estimates for certain discrete multilinear maximal operators associated to simplices. These operators are generalizations of the discrete spherical maximal operator and may be of independent interest.
Neil Lyall, Akos Magyar, Alex Newman, Peter Woolfitt
2023-01-26T19:12:51Z
http://arxiv.org/abs/2301.11359v1
# Discrete multilinear maximal operators and pinned simplices ###### Abstract. We prove that any given subset of \(\mathbb{Z}^{d}\) of upper density \(\delta>0\) will necessarily contain, in an appropriate sense depending on \(\delta\), an isometric copy of all large dilates of any given non-degenerate \(k\)-simplex, provided \(d\geq 2k+3\). This provides an improvement in dimension, from \(d\geq 2k+5\), on earlier work of Magyar. We in fact establish a stronger pinned variant. Key to our approach are new \(\ell^{2}\) estimates for certain discrete multilinear maximal operators associated to simplices. These operators are generalizations of the discrete spherical maximal operator and may be of independent interest. 2010 Mathematics Subject Classification: 11B30 The first and second authors were partially supported by grants NSF-DMS 1702411 and NSF-DMS 1600840, respectively. ## 1. Introduction ### Simplices in dense subsets of \(\mathbb{Z}^{d}\) Recall that the _upper Banach density_ of a set \(A\subseteq\mathbb{Z}^{d}\) is defined by \[\delta^{*}(A)=\lim_{N\to\infty}\sup_{t\in\mathbb{Z}^{d}}\frac{|A\cap(t+Q(N))|} {|Q(N)|},\] where \(|\cdot|\) denotes counting measure on \(\mathbb{Z}^{d}\) and \(Q(N)\) the discrete cube \([-N/2,N/2]^{d}\cap\mathbb{Z}^{d}\). In light of the fact that the square of the distance between any two distinct points in \(\mathbb{Z}^{d}\) is always a positive integer we also introduce the convenient notation \(\sqrt{\mathbb{N}}:=\{\lambda\,:\,\lambda>0\text{ and }\lambda^{2}\in\mathbb{Z}\}\). In [14] the second author established the following result on the existence of unpinned two point configurations (distances) in dense subsets of the integer lattice. **Theorem A** (Magyar [14]).: _Let \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 5\). If \(\delta^{*}(A)>0\), then there exist an integer \(q=q(\delta^{*}(A))\) and \(\lambda_{0}=\lambda_{0}(A)\) such that for all \(\lambda\in\sqrt{\mathbb{N}}\) with \(\lambda\geq\lambda_{0}\) there exist a pair of points \(\{x,x+y\}\subseteq A\) with \(|y|=q\lambda\)._ The approach taken in [14] was an adaptation of Bourgain's in [3] to the analogous problem in the continuous setting of \(\mathbb{R}^{d}\). In [15] the second author adapted this further to establish the following analogous result for non-degenerate \(k\)-simplices. Recall that for any \(1\leq k\leq d\) we refer to a configuration \(\Delta=\{v_{0}=0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) as a non-degenerate \(k\)-simplex if the vectors \(v_{1},\ldots,v_{k}\) are linearly independent. **Theorem B** (Magyar [15]).: _Let \(k\geq 2\), \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 2k+5\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) be a non-degenerate \(k\)-simplex. If \(\delta^{*}(A)>0\), there exists an integer \(q=q(\delta^{*}(A))\) and \(\lambda_{0}=\lambda_{0}(A,\Delta)\) such that for all \(\lambda\in\sqrt{\mathbb{N}}\) with \(\lambda\geq\lambda_{0}\) there exist \(x\in A\) with \(x+\Delta^{\prime}\subseteq A\) for some \(\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda q\Delta\)._ In the theorem above, and throughout this article, we say that two configurations \(\lambda\Delta=\{0,\lambda v_{1},\ldots,\lambda v_{k}\}\) and \(\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\) in \(\mathbb{Z}^{d}\) are _isometric_, and write \(\Delta^{\prime}\simeq\lambda\Delta\), if \(|y_{i}-y_{j}|=\lambda|v_{i}-v_{j}|\) for all \(0\leq i,j\leq k\). 
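For orientation, it may help to spell out the simplest instance of this definition. For \(k=1\) and \(\Delta=\{0,e_{1}\}\) with \(e_{1}=(1,0,\ldots,0)\), the condition \(\Delta^{\prime}=\{0,y\}\simeq\lambda\Delta\) reads \[|y|=\lambda|e_{1}|=\lambda,\] so an isometric copy of \(\lambda\Delta\) pinned at \(x\) is simply a pair of points \(\{x,x+y\}\) with \(|y|=\lambda\); in this sense Theorem B is the simplex analogue of the two point configurations (distances) appearing in Theorem A.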
In this article we establish an improvement on the dimension condition in Theorem B above from \(d\geq 2k+5\) to \(d\geq 2k+3\) and simultaneously establish a _stronger pinned_ variant, namely **Theorem 1**.: _Let \(k\geq 1\), \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 2k+3\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) be a non-degenerate \(k\)-simplex. If \(\delta^{*}(A)>0\), there exists an integer \(q=q(\delta^{*}(A))\) and \(\lambda_{0}=\lambda_{0}(A,\Delta)\) such that for any \(\lambda_{1}\geq\lambda_{0}\) there exists a fixed \(x\in A\) such that for all \(\lambda\in[\lambda_{0},\lambda_{1}]\cap\sqrt{\mathbb{N}}\) one has \(x+\Delta^{\prime}\subseteq A\) for some \(\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda q\Delta\)._ _Remark_.: The threshold \(\lambda_{0}\) in the results above cannot be taken to depend on \(\delta^{*}(A)\) only. Indeed, for any positive integers \(q\) and \(M\) the set \((Q_{qM}\cap\mathbb{Z}^{d})+(4dqM\mathbb{Z})^{d}\) will have density \((4d)^{-d}\) but never contain pairs \(\{x,x+y\}\) with \(|y|=qdM\). Since \(A\) could fall entirely into a fixed congruence class of some integer \(1\leq r\leq\delta^{*}(A)^{-1/d}\), the value of \(q\) in the results above must be divisible by the least common multiple of all integers \(1\leq r\leq\delta^{*}(A)^{-1/d}\). Indeed if \(A=(r\mathbb{Z})^{d}\) with \(1\leq r\leq\delta^{-1/d}\) then \(A\) will have upper Banach density at least \(\delta\), but the distance between any two points \(x,y\in A\) will always take the form \(r\lambda\) for some \(\lambda\in\sqrt{\mathbb{N}}\). The approach in [15] also established a quantitative Szemeredi-type variant of Theorem B, namely **Theorem B\({}^{\prime}\)** (Magyar [15]). _Let \(k\geq 2\), \(d\geq 2k+5\), \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) be a non-degenerate \(k\)-simplex, and \(0<\delta\leq 1\). If \(N\geq\exp(C_{\Delta}\delta^{-Ck})\), then any \(A\subseteq\{1,\ldots,N\}^{d}\) with cardinality \(|A|\geq\delta N^{d}\) will necessarily contain a configuration of the form \(x+\Delta^{\prime}\) with \(\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda q\Delta\) for some \(\lambda\in\sqrt{\mathbb{N}}\)._ For an alternative approach to the proof of Theorem B\({}^{\prime}\) that is more in line with the arguments in this paper, see Section 6.1 in [11]. We note that by combining the main result of this current paper, namely Theorem 2 below and its corollary (Lemma 2 in Section 1), with the arguments and ideas contained in Section 6.1 of [11] one can establish an improvement on the dimension condition above from \(d\geq 2k+5\) to \(d\geq 2k+3\) and also establish the analogous _stronger pinned_ variant. However, for the sake of clarity and brevity we have chosen not to pursue the details of these arguments or statements here and instead focus on just establishing Theorem 1. Let us remark that the dimension bound \(d\geq 2k+3\) seems to be best possible even in the case \(A=\mathbb{Z}^{d}\), i.e. when counting embeddings of isometric copies of \(\lambda\Delta\) in \(\mathbb{Z}^{d}\). Indeed, writing \(T\) for the positive definite integral \(k\times k\) matrix with entries \(t_{ij}=v_{i}\cdot v_{j}\) and \(Y\) for the \(k\times d\) integral matrix with rows \(y_{1},\ldots,y_{k}\), the condition \(\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda\Delta\) translates to the matrix equation \(YY^{t}=\lambda^{2}T\), which has been intensively studied in the past [17, 16, 7].
The best known results are due to Kitaoka [7] in dimensions \(d=2k+3\), who also mentions that this condition is best possible to count solutions to the above equation via analytic means, see the remark after Theorem B in [7]. For \(k=1\) and \(d=4\), it is possible to count embeddings under restrictions of \(\lambda\) (say when \(\lambda^{2}\) is odd) via the so-called Kloosterman refinement [9], however for our pinned results in sets of positive density one needs estimates for the discrete spherical maximal function, which in dimension \(4\) have only been obtained for particular lacunary sequences of the radii \(\lambda\), see [6]. The case \(k=1\) of Theorem 1 was already established by the first two authors in [10]. To the best of our knowledge, there have been no previous results addressing _pinned_ simplices in dense subsets of \(\mathbb{Z}^{d}\) in any dimension when \(k\geq 2\). ### Discrete multilinear maximal averages associated to simplices An important result in the development of discrete harmonic analysis is the \(\ell^{p}\)-boundedness of the so-called discrete spherical maximal function [13]. For any \(\lambda\in\sqrt{\mathbb{N}}\) we let \(S_{\lambda}=\{y\in\mathbb{Z}^{d}:\;|y|=\lambda\}\) denote the discrete sphere of radius \(\lambda\) centered at the origin. For \(f:\mathbb{Z}^{d}\to\mathbb{R}\) we then define the discrete spherical averages \[\mathcal{A}_{\lambda}f(x)=|S_{\lambda}|^{-1}\sum_{y\in S_{\lambda}}f(x+y),\] noting that if \(d\geq 5\), then \(c_{d}\lambda^{d-2}\leq|S_{\lambda}|\leq C_{d}\lambda^{d-2}\) for some constants \(0<c_{d}<C_{d}<\infty\), see [18]. In [13] it was shown that for \(p>d/(d-2)\) one has the following maximal function estimate \[\big{\|}\sup_{\lambda\geq 1}\left|\mathcal{A}_{\lambda}f\right|\big{\|}_{p}\leq C_{p,d}\left\|f\right\|_{p}\] where \(\|f\|_{p}=(\sum_{x}|f(x)|^{p})^{1/p}\) denotes the \(\ell^{p}(\mathbb{Z}^{d})\) norm of the function \(f\). In [12] the authors gave a new direct proof of \(\ell^{2}\)-boundedness of the discrete spherical maximal function that neither relies on abstract transference theorems nor on delicate asymptotics for the Fourier transform of discrete spheres. Implicit in that paper is the fact that \(\ell^{2}\)-boundedness follows as a consequence of stronger refined "mollified" estimates in which one obtains gains in \(\ell^{2}\) over suitably large scales when applied to functions whose Fourier transform is localized away from rational points with small denominators. Recall that for \(f\in\ell^{1}(\mathbb{Z}^{d})\) we define its Fourier transform \(\widehat{f}:\mathbb{T}^{d}\to\mathbb{C}\) by \(\widehat{f}(\xi)=\sum_{x\in\mathbb{Z}^{d}}f(x)e^{-2\pi ix\cdot\xi}.\) For each \(\eta>0\) we define \[q_{\eta}:=\operatorname{lcm}\{1\leq q\leq\eta^{-2}\}\] and for any \(L\geq q_{\eta}\) we let \[\Omega_{\eta,L}=\{\xi\in\mathbb{T}^{d}:\xi\in[-L^{-1},L^{-1}]^{d}+(q_{\eta}^{-1}\mathbb{Z})^{d}\}.\] Key to the proof of Theorem 1 is an extension of the approach from [12] to multilinear maximal operators associated to simplices.
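To fix ideas about the frequency sets just introduced, consider for instance \(\eta=1/2\): then \[q_{\eta}=\operatorname{lcm}\{1,2,3,4\}=12,\] and for any \(L\geq 12\) the set \(\Omega_{\eta,L}\) is the union of the cubes of sidelength \(2L^{-1}\) centered at the points of \((\tfrac{1}{12}\mathbb{Z})^{d}\) in \(\mathbb{T}^{d}\), that is, small neighborhoods of the rational points whose coordinates have denominators dividing \(12\).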
Given a non-degenerate \(k\)-simplex \(\Delta=\{v_{0}=0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) and \(\lambda\in\sqrt{\mathbb{N}}\), we let \[S_{\lambda\Delta}:=\{(y_{1},\ldots,y_{k})\in\mathbb{Z}^{dk}:\;\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda\Delta\}.\] For functions \(f_{1},\ldots,f_{k}:\mathbb{Z}^{d}\to\mathbb{C}\) we then define the multilinear averaging operator \[\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})(x)=|S_{\lambda\Delta}|^{-1}\sum_{(y_{1},\ldots,y_{k})\in S_{\lambda\Delta}}f_{1}(x+y_{1})\cdots f_{k}(x+y_{k})\] noting that if \(d\geq 2k+3\) and \(\lambda\in\sqrt{\mathbb{N}}\), then \[c_{\Delta}\,\lambda^{dk-k(k+1)}\leq\,|S_{\lambda\Delta}|\,\leq C_{\Delta}\,\lambda^{dk-k(k+1)} \tag{1}\] for some constants \(0<c_{\Delta}<C_{\Delta}<\infty\), see [7] or [15]. Note that for \(k=1\) and \(v_{1}=(1,0,\ldots,0)\) we have that \(S_{\lambda\Delta}=S_{\lambda}\) and hence \(\mathcal{A}_{\lambda\Delta}(f)=\mathcal{A}_{\lambda}(f)\). The \(\ell^{p}\) mapping properties of the maximal operators corresponding to these averages were considered in [2] and [4]. Here we establish the following particular \(\ell^{2}\)-estimates, the first non-trivial estimates of any type for such operators in dimensions lower than \(d=2k+5\) when \(k\geq 2\). The stronger refined \(\ell^{2}\) estimate (3), in addition to implying (2), plays a crucial role in our proof of Theorem 1. **Theorem 2**.: _If \(k\geq 1\), \(d\geq 2k+3\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) is a non-degenerate \(k\)-simplex, then_ \[\big{\|}\sup_{\lambda\geq 1}|\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})|\big{\|}_{2}\leq C_{d,\Delta}\|f_{1}\|_{2}\cdots\|f_{k}\|_{2}. \tag{2}\] _In fact, for any \(\eta>0\), and \(L\geq q_{\eta}^{4}\), we have_ \[\Big{\|}\sup_{\lambda\geq\eta^{-2}L}|\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})|\Big{\|}_{2}\leq C_{d,\Delta}\frac{\eta}{\log\eta^{-1}}\,\|f_{1}\|_{2}\cdots\|f_{k}\|_{2} \tag{3}\] _whenever \(\operatorname{supp}\widehat{f}_{j}\subseteq\Omega^{c}_{\eta,L}\) for some \(1\leq j\leq k\), where \(\Omega^{c}_{\eta,L}\) denotes the complement of \(\Omega_{\eta,L}\)._ Estimate (3) in the case \(k=1\) was originally established in joint work with Magyar [10] via an adaptation of the transference methods from [13]. ## 2. Proof of Theorem 1 ### Reduction to uniformly distributed sets In light of the observation made after Theorem 1 above regarding the sensitivity of this problem to the local structure of \(A\), it is natural to first consider the case when \(A\) is, in a suitable sense, well distributed in small congruence classes. In fact, this approach ultimately leads directly to a proof of Theorem 1. Following [10] we define \(A\subseteq\mathbb{Z}^{d}\) to be \(\eta\)_-uniformly distributed (modulo \(q_{\eta}\))_ if, for some \(\eta>0\), its _relative_ upper Banach density on any residue class modulo \(q_{\eta}\) never exceeds \((1+\eta^{4})\) times its density on \(\mathbb{Z}^{d}\), namely if \[\delta^{*}(A\,|\,s+(q_{\eta}\mathbb{Z})^{d})\leq(1+\eta^{4})\,\delta^{*}(A)\] for all \(s\in\{1,\ldots,q_{\eta}\}^{d}\). A straightforward density increment argument allows one to deduce Theorem 1 from the following analogue for \(\eta\)-uniformly distributed subsets of \(\mathbb{Z}^{d}\).
**Proposition 1**.: _Let \(\varepsilon>0\), \(0<\eta\ll\varepsilon^{2}\) and \(k\geq 1\)._ _If \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 2k+3\) is \(\eta\)-uniformly distributed, and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) is a non-degenerate \(k\)-simplex, then there exist \(\lambda_{0}=\lambda_{0}(A,\Delta,\eta)\) such that for any \(\lambda_{1}\geq\lambda_{0}\) there exists a fixed \(x\in A\) such that_ \[\mathcal{A}_{\lambda\Delta}(1_{A},\ldots,1_{A})(x)>\delta^{*}(A)^{k}-\varepsilon\] _for all \(\lambda\in[\lambda_{0},\lambda_{1}]\cap\sqrt{\mathbb{N}}\), noting that_ \[\mathcal{A}_{\lambda\Delta}(1_{A},\ldots,1_{A})(x)=|S_{\lambda\Delta}|^{-1}\, \,|\{(y_{1},\ldots,y_{k})\in\mathbb{Z}^{dk}\,:\,x+\Delta^{\prime}\subseteq A \text{ with }\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda\Delta\}|.\] In Proposition 1 above, and throughout this article, we use the notation \(\alpha\ll\beta\) to denote that \(\alpha\leq c\beta\) for some suitably small constant \(c>0\). Proposition 1 in fact implies the following stronger optimal formulation of Theorem 1. **Corollary 1**.: _Let \(k\geq 1\), \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 2k+3\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) be a non-degenerate \(k\)-simplex. For any \(\varepsilon>0\), there exists an integer \(q=q(\varepsilon,d)\) and \(\lambda_{0}(A,\Delta,\varepsilon)\) such that for any \(\lambda_{1}\geq\lambda_{0}\) there exists a fixed \(x\) such that_ \[|S_{\lambda\Delta}|^{-1}\,\,|\{(y_{1},\ldots,y_{k})\in(q\mathbb{Z})^{dk}\,:\,x+ \Delta^{\prime}\subseteq A\text{ with }\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq\lambda q\Delta\}|>\delta^{*}(A) ^{k}-\varepsilon \tag{4}\] _for all \(\lambda\in[\lambda_{0},\lambda_{1}]\cap\sqrt{\mathbb{N}}\)._ _Remark_.: By considering sets \(A\) of the form \(\bigcup_{s\in\{1,\ldots,q\}^{d}}A_{s}\) with each set \(A_{s}\) a "random" subset of the congruence class \(s+(q\mathbb{Z})^{d}\) one can further easily see that conclusion (4) above is in general best possible. Proof that Proposition 1 implies Corollary 1.: Let \(0<\varepsilon\leq\delta\leq 1\) and \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 2k+3\). To prove Corollary 1 it is enough to prove that if \(\delta^{*}(A)\geq\delta\) then there exists \(\lambda_{0}=\lambda_{0}(A,\Delta,\varepsilon)\) and \(q=q(\varepsilon,d)\) such that for any \(\lambda_{1}\geq\lambda_{0}\) there exists a fixed \(x\in A\) such that (4) holds for all \(\lambda\in\sqrt{\mathbb{N}}\) with \(\lambda_{0}\leq\lambda\leq\lambda_{1}\). Let \(0<\eta\ll\varepsilon^{2}\). We prove the above for \(\delta_{m}:=(1+\eta^{4})^{-m}\) inductively for all \(m\geq 0\), using Proposition 1. For \(m=0\) the statement is trivial as \(\delta^{*}(A)=\delta_{0}=1\) and hence \(A\) contains arbitrarily large cubic grids. Suppose it holds for \(\delta=\delta_{m}\) and assume that \(\delta^{*}(A)\geq\delta_{m+1}\). If \(A\) is \(\eta\)-uniformly distributed then the result holds for \(\delta=\delta_{m+1}\) by Proposition 1. In the opposite case there is an \(s\in\mathbb{Z}^{d}\) so that \(\delta^{*}(A\,|\,s+(q_{\eta}\mathbb{Z})^{d})>(1+\eta^{4})\,\delta\,\). Let \(\phi:s+(q_{\eta}\mathbb{Z})^{d}\to\mathbb{Z}^{d}\) be defined by \(\phi(x):=q_{\eta}^{-1}(x-s)\) and let \(A^{\prime}:=\phi(A)\). Then \(\delta^{*}(A^{\prime})\geq\delta_{m}\) thus (4) holds for \(A^{\prime}\) and \(\delta=\delta_{m}\), with some \(q^{\prime}=q^{\prime}(\varepsilon,d)\) and \(x^{\prime}\in A^{\prime}\). 
Note that \[|\{(y_{1},\ldots,y_{k})\in(q^{\prime}\mathbb{Z})^{dk}\,:\,x^{\prime}+\Delta^{\prime}\subseteq A^{\prime}\text{ with }\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq q^{\prime}\lambda\Delta\}|\] \[=|\{(y_{1},\ldots,y_{k})\in(q_{\eta}q^{\prime}\mathbb{Z})^{dk}\,:\,q_{\eta}x^{\prime}+s+\Delta^{\prime}\subseteq A\text{ with }\Delta^{\prime}=\{0,y_{1},\ldots,y_{k}\}\simeq q_{\eta}q^{\prime}\lambda\Delta\}|\] which implies that (4) holds for \(A\), \(\delta=\delta_{m+1}\) with \(q=q_{\eta}q^{\prime}\) and \(x=q_{\eta}x^{\prime}+s\). ### Proof of Proposition 1 Before proving Proposition 1 we need two preparatory lemmas. We refer to a subset \(Q\subseteq\mathbb{Z}^{d}\) as a cube of sidelength \(\ell(Q)=N\) if \[Q=t_{0}+Q_{N}\] for some \(t_{0}\in\mathbb{Z}^{d}\), where as usual \(Q_{N}=[-N/2,N/2]^{d}\cap\mathbb{Z}^{d}\). **Definition** (\(U_{q,L}^{1}(Q)\)-norm).: For any cube \(Q\subseteq\mathbb{Z}^{d}\), integers \(1\ll q\ll L\ll\ell(Q)\), and functions \(f:Q\to\mathbb{R}\) we define \[\|f\|_{U_{q,L}^{1}(Q)}=\Big{(}\frac{1}{|Q|}\sum_{t\in\mathbb{Z}^{d}}|f*\chi_{q,L}(t)|^{2}\Big{)}^{1/2} \tag{5}\] where \(\chi_{q,L}\) denotes the normalized characteristic function of the cubes \(Q_{q,L}:=Q_{L}\cap(q\mathbb{Z})^{d}\), namely \[\chi_{q,L}(x)=\begin{cases}\big{(}\frac{q}{L}\big{)}^{d}&\text{if }\ x\in(q\mathbb{Z})^{d}\,\cap[-\frac{L}{2},\frac{L}{2}]^{d}\\ 0&\text{otherwise}\end{cases}. \tag{6}\] In (5) above and in the sequel we denote the convolution \(f*g\) of two functions \(f\) and \(g\) by \[f*g(x):=\sum_{y\in\mathbb{Z}^{d}}f(x-y)g(y).\] We note that the \(U_{q,L}^{1}(Q)\)-norm measures the mean square oscillation of a function with respect to cubic grids of size \(L\) and gap \(q\). The first key ingredient in our proof of Proposition 1 is the simple, yet significant, observation from [10] that subsets of \(\mathbb{Z}^{d}\) with positive upper Banach density that are \(\eta\)-uniformly distributed are also, in a precise sense, uniformly distributed at certain scales. **Lemma 1** (Consequence of Lemmas 1 and 2 in [10]).: _Let \(\eta>0\) and \(A\subseteq\mathbb{Z}^{d}\) be \(\eta\)-uniformly distributed with \(\delta:=\delta^{*}(A)>0\). There exists a positive integer \(L=L(A,\eta)\) and cubes \(Q\subseteq\mathbb{Z}^{d}\) of arbitrarily large sidelength \(\ell(Q)\) with \(\ell(Q)\geq\eta^{-4}L\) such that_ \[|A\cap Q|\geq(\delta-O(\eta))|Q| \tag{7}\] _and_ \[\|(1_{A}-\delta)1_{Q}\|_{U_{q_{\eta},L}^{1}(Q)}=O(\eta). \tag{8}\] The second key ingredient in our proof of Proposition 1 is the following _maximal_ variant of a so-called generalized von-Neumann-type inequality, which follows in a straightforward manner from Theorem 2. **Lemma 2** (Corollary of Theorem 2).: _Let \(k\geq 1\), \(d\geq 2k+3\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) be a non-degenerate \(k\)-simplex.
For any \(\eta>0\), positive integer \(L\), cube \(Q\subseteq\mathbb{Z}^{d}\) with sidelength \(N\geq\eta^{-6}L\), and functions \(f_{1},\ldots,f_{k}:Q\to[-1,1]\) we have_ \[\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ \sup_{\eta^{-3}L\leq\lambda\leq\eta^{3}N}\bigl{|}\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})(x)\bigr{|}\leq C_{d,\Delta}\,\Bigl{(}\min_{1\leq j\leq k}\|f_{j}\|_{U_{q_{\eta},L}^{1}(Q)}+O(\eta)\Bigr{)}.\] Proof.: By Cauchy-Schwarz, it suffices to prove the stronger estimate \[\bigg{(}\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ \sup_{\eta^{-3}L\leq\lambda\leq\eta^{3}N}\bigl{|}\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})(x)\bigr{|}^{2}\bigg{)}^{1/2}\leq C_{d,\Delta}\,\Big{(}\min_{1\leq j\leq k}\|f_{j}\|_{U^{1}_{q_{\eta},L}(Q)}+O(\eta)\Big{)}.\] This follows from Theorem 2 by symmetry and sublinearity after decomposing \(f_{k}=f_{k,1}+f_{k,2}+f_{k,3}\) with \[f_{k,1}=f_{k}*\chi_{q_{\eta},L}\] where \(f_{k,2}\) and \(f_{k,3}\) satisfy \[\widehat{f_{k,2}}=\widehat{f_{k}}\,1_{\Omega_{\eta,\eta^{-1}L}}(1-\widehat{\chi_{q_{\eta},L}})\quad\text{and}\quad\widehat{f_{k,3}}=\widehat{f_{k}}\,1_{\Omega^{c}_{\eta,\eta^{-1}L}}(1-\widehat{\chi_{q_{\eta},L}}).\] Indeed, estimate (2) implies that \[\bigg{(}\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ \sup_{\lambda\geq 1}\big{|}\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k-1},g)(x)\big{|}^{2}\bigg{)}^{1/2}\leq C_{d,\Delta}\bigg{(}\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ |g(x)|^{2}\bigg{)}^{1/2}\] for any \(g:\mathbb{Z}^{d}\to\mathbb{C}\). Note that if \(g=f_{k,1}\) then \[\bigg{(}\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ |f_{k,1}(x)|^{2}\bigg{)}^{1/2}=\|f_{k}\|_{U^{1}_{q_{\eta},L}(Q)}.\] In light of the fact that \[\widehat{\chi}_{q_{\eta},L}(\xi)=\frac{q_{\eta}^{d}}{L^{d}}\sum_{x\in[-\frac{L}{2},\frac{L}{2})^{d},\ q_{\eta}|x}e^{-2\pi ix\cdot\xi}\] it is easy to see that \(\widehat{\chi}_{q_{\eta},L}(\ell/q_{\eta})=1\) for all \(\ell\in\mathbb{Z}^{d}\) and that there exists some absolute constant \(C>0\) such that \[0\leq 1-\widehat{\chi}_{q_{\eta},L}(\xi)\leq C\,L\,|\xi-\ell/q_{\eta}| \tag{9}\] for all \(\xi\in\mathbb{T}^{d}\) and \(\ell\in\mathbb{Z}^{d}\), and hence that \(1-\widehat{\chi_{q_{\eta},L}}(\xi)=O(\eta)\) for all \(\xi\in\Omega_{\eta,\eta^{-1}L}\). It thus follows, by Plancherel, that \[\bigg{(}\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ |f_{k,2}(x)|^{2}\bigg{)}^{1/2}=O(\eta).\] Finally, since \(\operatorname{supp}\widehat{f_{k,3}}\subseteq\Omega^{c}_{\eta,\eta^{-1}L}\), it follows from estimate (3) that \[\bigg{(}\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ \sup_{\eta^{-3}L\leq\lambda\leq\eta^{3}N}\bigl{|}\mathcal{A}_{\lambda}(f_{1},\ldots,f_{k-1},f_{k,3})(x)\bigr{|}^{2}\bigg{)}^{1/2}\leq C_{d,\Delta}\,\eta.\qed\] Proof of Proposition 1.: Let \(0<\varepsilon\leq\delta\leq 1\) and \(0<\eta\ll\varepsilon^{2}\). Suppose there exists a set \(A\subseteq\mathbb{Z}^{d}\) with \(d\geq 2k+3\) with \(\delta=\delta^{*}(A)>0\) that is \(\eta\)-uniformly distributed but for which the conclusion of Proposition 1 fails, namely that there exist arbitrarily large pairs \((\lambda_{0},\lambda_{1})\) such that for every \(x\in A\) one has \[\mathcal{A}_{\lambda\Delta}(1_{A},\ldots,1_{A})(x)\leq\delta^{k}-\varepsilon\] for some \(\lambda\in[\lambda_{0},\lambda_{1}]\cap\sqrt{\mathbb{N}}\).
Combining this with Lemma 1 we can conclude that there exists a positive integer \(L\) and a cube \(Q\subseteq\mathbb{Z}^{d}\) with sidelength \(N\) sufficiently large so that in addition to the properties (7) and (8) we also have the property that \[\mathcal{A}_{\lambda\Delta}(1_{A\cap Q},\ldots,1_{A\cap Q})(x)\leq\delta^{k}-\varepsilon\] for every \(x\in A\) for some \(\lambda\in[\eta^{-3}L,\eta^{3}N]\cap\sqrt{\mathbb{N}}\). We now let \(A^{\prime}:=A\cap Q^{\prime}\), where \(Q^{\prime}\) denotes the cube of sidelength \((1-\eta^{3})N\) with the same center as \(Q\). It then follows, provided that \(N\) was chosen sufficiently large, that \[\mathcal{A}_{\lambda\Delta}(\delta 1_{Q},\ldots,\delta 1_{Q})(x)=\delta^{k}\] for every \(x\in A^{\prime}\) and hence that for each such \(x\) one has \[\sum_{j=0}^{k-1}\ \mathcal{A}_{\lambda\Delta}(\underbrace{1_{A\cap Q},\ldots 1_{A\cap Q}}_{j\text{ copies}},(1_{A}-\delta)1_{Q},\delta 1_{Q},\ldots,\delta 1_{Q})(x)\leq-\varepsilon\] for some \(\lambda\in[\eta^{-3}L,\eta^{3}N]\cap\sqrt{\mathbb{N}}\). Consequently, we have that \[\sum_{j=0}^{k-1}\ \sup_{\eta^{-3}L\leq\lambda\leq\eta^{3}N}\biggl{|}\mathcal{A}_{\lambda}(\underbrace{1_{A\cap Q},\ldots 1_{A\cap Q}}_{j\text{ copies}},(1_{A}-\delta)1_{Q},\delta 1_{Q},\ldots,\delta 1_{Q})(x)\biggr{|}\geq\varepsilon \tag{10}\] for every \(x\in A^{\prime}\). Since \(\eta\ll\delta\) and \(|A^{\prime}|\geq|A\cap Q|-\eta^{3}|Q|\) it follows from (7) that \(|A^{\prime}|/|Q|\geq\delta/2\). Combining this observation with (10) we obtain \[\sum_{j=0}^{k-1}\ \frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ \sup_{\eta^{-3}L\leq\lambda\leq\eta^{3}N}\biggl{|}\mathcal{A}_{\lambda}(\underbrace{1_{A\cap Q},\ldots 1_{A\cap Q}}_{j\text{ copies}},(1_{A}-\delta)1_{Q},\delta 1_{Q},\ldots,\delta 1_{Q})(x)\biggr{|}\geq\varepsilon\delta/2. \tag{11}\] However, Lemma 2 and (8) clearly imply that for each \(0\leq j\leq k-1\) one has \[\frac{1}{|Q|}\sum_{x\in\mathbb{Z}^{d}}\ \sup_{\eta^{-3}L\leq\lambda\leq\eta^{3}N}\biggl{|}\mathcal{A}_{\lambda}(\underbrace{1_{A\cap Q},\ldots 1_{A\cap Q}}_{j\text{ copies}},(1_{A}-\delta)1_{Q},\delta 1_{Q},\ldots,\delta 1_{Q})(x)\biggr{|}=O(\eta)\] which leads to a contradiction if \(\eta\) is chosen sufficiently small with respect to \(\varepsilon^{2}\). ## 3. Proof of Theorem 2 Following the approach in [12] we will deduce Theorem 2 from refined estimates for our maximal operators at a single dyadic scale, namely Proposition 2 below. We first need to introduce some notation closely related to that in Section 1.2. For any integer \(j\geq 0\) we let \[q_{j}=\operatorname{lcm}\{1,2,\ldots,2^{j}\}\] noting that \(q_{j}\asymp e^{2^{j}}\), and for any non-negative integers \(j\) and \(l\) that satisfy \(2^{j}\leq l\), we let \[\Omega_{j,l}:=\{\xi\in\mathbb{T}^{d}:\xi\in[-2^{j-l},2^{j-l}]^{d}+(q_{j}^{-1}\mathbb{Z})^{d}\}. \tag{12}\] **Proposition 2**.: _If \(k\geq 1\), \(d\geq 2k+3\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) is a non-degenerate \(k\)-simplex, then_ \[\bigl{\|}\sup_{2^{l}\leq\lambda\leq 2^{l+1}}|\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})|\bigr{\|}_{2}\leq C_{d,\Delta}\,2^{-j/2}j^{-1}\,\|f_{1}\|_{2}\cdots\|f_{k}\|_{2} \tag{13}\] _whenever \(\operatorname{supp}\widehat{f}_{i}\subseteq\Omega_{j,l}^{c}\) for some \(1\leq i\leq k\), where \(\Omega_{j,l}^{c}\) denotes the complement of \(\Omega_{j,l}\)._ It is easy to see that Proposition 2 is equivalent to estimate (3) of Theorem 2.
Indeed, note that in proving (3) one may restrict the sup to \(\eta^{-2}L\leq\lambda\leq 2\eta^{-2}L\). Choosing \(l,j\in\mathbb{N}\) such that \(2^{l}\leq\eta^{-2}L\leq 2^{l+1}\) and \(2^{j}\geq\eta^{-2}\) we have that \(2^{l-j}\leq L\) and hence \(\Omega_{j,l}\subseteq\Omega_{\eta,L}\). Applying Proposition 2 with \(j\) and \(l\) chosen as above implies estimate (3) of Theorem 2, while applying estimate (3) of Theorem 2 with \(L=2^{l-j}\) and \(\eta=2^{-j/2}\) immediately implies Proposition 2. We are left with establishing that Proposition 2 implies estimate (2) of Theorem 2. Following the approach in [12] we start by introducing a smooth sampling function supported on \(\Omega_{j,l}\). ### A smooth sampling function supported on \(\Omega_{j,l}\) Let \(\psi\in\mathcal{S}(\mathbb{R}^{d})\) be a Schwartz function satisfying \[1_{Q}(\xi)\leq\widetilde{\psi}(\xi)\leq 1_{2Q}(\xi)\] where \(Q=[-1/2,1/2]^{d}\) and \[\widetilde{\psi}(\xi):=\int_{\mathbb{R}^{d}}\psi(x)e^{-2\pi ix\cdot\xi}dx\] denotes the Fourier transform of \(\psi\) on \(\mathbb{R}^{d}\). For a given \(q\in\mathbb{N}\) and \(L>q\) we define \(\psi_{q,L}:\mathbb{Z}^{d}\to\mathbb{R}\) as \[\psi_{q,L}(x)=\begin{cases}\left(\frac{q}{L}\right)^{d}\psi\left(\frac{x}{L}\right)&\text{if }x\in(q\mathbb{Z})^{d}\\ 0&\text{otherwise}\end{cases}\] Writing \(x=qr+s\) with \(r\in\mathbb{Z}^{d}\) and \(s\in\mathbb{Z}^{d}/q\mathbb{Z}^{d}\), it follows from Poisson summation that \[\widehat{\psi}_{q,L}(\xi)=\sum_{x\in\mathbb{Z}^{d}}\psi_{q,L}(x)e^{-2\pi ix\cdot\xi}\] is a \(q^{-1}\)-periodic function on \(\mathbb{T}^{d}\) that satisfies \[\widehat{\psi}_{q,L}(\xi)=\sum_{\ell\in\mathbb{Z}^{d}}\widetilde{\psi}(L(\xi-\ell/q)).\] For a given \(l\in\mathbb{N}\) and \(0\leq j\leq J_{l}:=[\log_{2}(l)]-2\), we now define the sampling function \[\Psi_{l,j}=\psi_{q_{j},2^{l-j}} \tag{14}\] and note that \(\operatorname{supp}\widehat{\Psi}_{l,j}\subseteq\Omega_{j,l}\). Finally we define \(\Delta\Psi_{l,j}=\Psi_{l,j+1}-\Psi_{l,j}\) and note the important almost orthogonality property they enjoy. **Lemma 3** (Lemma 1 in [12]).: _There exists a constant \(C=C_{\Psi}>0\) such that_ \[\sum_{l\geq 2^{j}}|\widehat{\Delta\Psi}_{l,j}(\xi)|^{2}\leq C\] _uniformly in \(j\in\mathbb{N}\) and \(\xi\in\mathbb{T}^{d}\)._ ### Proof that Proposition 2 implies estimate (2) of Theorem 2 Let \(k\geq 1\), \(d\geq 2k+3\), and \(\Delta=\{0,v_{1},\ldots,v_{k}\}\subseteq\mathbb{Z}^{d}\) be a non-degenerate \(k\)-simplex. In [12] the authors gave a direct proof of estimate (2) of Theorem 2 when \(k=1\), the \(\ell^{2}\)-boundedness of the discrete spherical maximal function. We may thus, without loss of generality, assume that \(k\geq 2\), \(\operatorname{supp}\widehat{f}_{k}\subseteq\Omega_{j,l}^{c}\), and that \[\big{\|}\sup_{\lambda\geq 1}|\mathcal{A}_{\lambda\widetilde{\Delta}}(f_{1},\ldots,f_{k-1})|\big{\|}_{2}\leq C_{d,\widetilde{\Delta}}\|f_{1}\|_{2}\cdots\|f_{k-1}\|_{2} \tag{15}\] where \(\widetilde{\Delta}=\{0,v_{1},\ldots,v_{k-1}\}\subseteq\mathbb{Z}^{d}\). Let \[\mathcal{M}_{l}(f_{1},\ldots,f_{k}):=\sup_{2^{l}\leq\lambda\leq 2^{l+1}}|\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})|. \tag{16}\] Writing \[f_{k}=f_{k}*\Psi_{l,0}+\sum_{j=0}^{J_{l}-1}f_{k}*\Delta\Psi_{l,j}+(f_{k}-f_{k}*\Psi_{l,J_{l}})\] it follows by subadditivity that \[\mathcal{M}_{l}(f_{1},\ldots,f_{k})\leq\mathcal{M}_{l}(f_{1},\ldots,f_{k}*\Psi_{l,0})+\sum_{j=0}^{J_{l}-1}\mathcal{M}_{l}(f_{1},\ldots,f_{k}*\Delta\Psi_{l,j})+\mathcal{M}_{l}(f_{1},\ldots,f_{k}-f_{k}*\Psi_{l,J_{l}}).
\tag{17}\] Estimate (2) of Theorem 2 will now follow from a few observations and applications of Proposition 2, in light of the fact that \[\sup_{\lambda\geq 1}|\mathcal{A}_{\lambda\Delta}(f_{1},\ldots,f_{k})|=\sup_{l}\mathcal{M}_{l}(f_{1},\ldots,f_{k}).\] We first observe that the first term on the right in (17) above satisfies \[\mathcal{M}_{l}(f_{1},\ldots,f_{k}*\Psi_{l,0})\leq C_{\Psi}\mathcal{H}(f_{k})\ \sup_{\lambda\geq 1}|\mathcal{A}_{\lambda\widetilde{\Delta}}(f_{1},\ldots,f_{k-1})|\] uniformly in \(l\), where \[\mathcal{H}(f)(x)=\sup_{N>0}\frac{1}{|Q_{N}|}\Big{|}\sum_{y\in Q_{N}}f(x-y)\Big{|},\] with \(Q_{N}\) the discrete cube \([-N/2,N/2]^{d}\cap\mathbb{Z}^{d}\), denotes the discrete Hardy-Littlewood maximal operator, which trivially satisfies \(\|\mathcal{H}f\|_{\infty}\leq\|f\|_{\infty}\leq\|f\|_{2}\) by the nesting of discrete \(\ell^{p}\) spaces. It therefore follows from the inductive hypothesis (15) that \[\sup_{l}\|\mathcal{M}_{l}(f_{1},\dots,f_{k}*\Psi_{l,0})\|_{2}\leq C\|f_{1}\|_{2}\cdots\|f_{k}\|_{2}.\] For the middle terms in (17) we first note that \[\sup_{l}\sum_{j=0}^{J_{l}-1}\mathcal{M}_{l}(f_{1},\dots,f_{k}*\Delta\Psi_{l,j})\ll\Bigl{(}\sum_{l=0}^{\infty}\Bigl{|}\sum_{j=0}^{J_{l}-1}\mathcal{M}_{l}(f_{1},\dots,f_{k}*\Delta\Psi_{l,j})\Bigr{|}^{2}\Bigr{)}^{1/2}.\] Taking \(\ell^{2}\) norms of both sides of the inequality above and applying Minkowski's inequality, followed by an application of Proposition 2, gives \[\Bigl{\|}\sup_{l}\sum_{0\leq j\leq J_{l}}\mathcal{M}_{l}(f_{1},\dots,f_{k}*\Delta\Psi_{l,j})\Bigr{\|}_{2}\leq\sum_{j}\Bigl{(}\sum_{l\geq 2^{j}}\|\mathcal{M}_{l}(f_{1},\dots,f_{k}*\Delta\Psi_{l,j})\|_{2}^{2}\Bigr{)}^{1/2}\leq C\|f_{1}\|_{2}\cdots\|f_{k-1}\|_{2}\ \sum_{j}2^{-j/2}\Bigl{(}\sum_{l\geq 2^{j}}\|f_{k}*\Delta\Psi_{l,j}\|_{2}^{2}\Bigr{)}^{1/2}\leq C\|f_{1}\|_{2}\cdots\|f_{k}\|_{2}\] where the last inequality above follows from Lemma 3. One more application of Proposition 2 with \(j=[\log_{2}l]-2\) to the last term in (17) gives \[\Bigl{\|}\sup_{l}\mathcal{M}_{l}(f_{1},\dots,f_{k}-f_{k}*\Psi_{l,J_{l}})\Bigr{\|}_{2}\leq\Bigl{(}\sum_{l=1}^{\infty}\|\mathcal{M}_{l}(f_{1},\dots,f_{k}-f_{k}*\Psi_{l,J_{l}})\|_{2}^{2}\Bigr{)}^{1/2}\leq C\Bigl{(}\sum_{l=1}^{\infty}l^{-1}(\log_{2}l)^{-2}\Bigr{)}^{1/2}\ \|f_{1}\|_{2}\cdots\|f_{k}\|_{2}\leq C\|f_{1}\|_{2}\cdots\|f_{k}\|_{2}.\qed\] ## 4. Proof of Proposition 2 Given any simplex \(\Delta=\{v_{0}=0,v_{1},\dots,v_{k}\}\subseteq\mathbb{R}^{d}\), we introduce the associated _inner product matrix_ \(T=T_{\Delta}=(t_{ij})_{1\leq i,j\leq k}\) with entries \(t_{ij}:=v_{i}\cdot v_{j}\), where "\(\cdot\)" stands for the dot product in \(\mathbb{R}^{d}\). Note that \(T\) is a positive semi-definite matrix with integer entries and \(T\) is positive definite if and only if \(\Delta\) is non-degenerate. It is easy to see that \(\Delta^{\prime}\simeq\lambda\Delta\), with \(\Delta^{\prime}=\{y_{0}=0,y_{1},\dots,y_{k}\}\), if and only if \[y_{i}\cdot y_{j}=\lambda^{2}t_{ij}\quad\text{for all}\quad 1\leq i,j\leq k. \tag{18}\] If we let \(M\in\mathbb{Z}^{d\times k}\) be a matrix with column vectors \(y_{1},\dots,y_{k}\in\mathbb{Z}^{d}\), then the system of equations above can be written as the matrix equation \[M^{t}M=\lambda^{2}T, \tag{19}\] where \(M^{t}\) is the transpose of the matrix \(M\).
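In the simplest case \(k=1\) with \(v_{1}=(1,0,\ldots,0)\) the inner product matrix is \(T=(1)\) and \(M\) consists of the single column \(y_{1}\), so that (19) reduces to \[M^{t}M=|y_{1}|^{2}=\lambda^{2},\] which is precisely the condition defining the discrete sphere \(S_{\lambda}\) introduced earlier; the general case replaces this single quadratic equation by the system (18).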
It therefore follows that \[\mathcal{A}_{\lambda\Delta}(f_{1},\dots,f_{k})(x)=|S_{\lambda\Delta}|^{-1}\sum_{y_{1},\dots,y_{k}\in\mathbb{Z}^{d}}f_{1}(x+y_{1})\cdots f_{k}(x+y_{k})S_{\lambda^{2}T}(M)\] if we use \(S_{\lambda^{2}T}(M)\) to denote the indicator function of relation (19). Let \(I_{k}=[0,2]^{k(k+1)/2}\) denote the space of symmetric \(k\times k\) matrices with entries in the interval \([0,2]\). Using the fact that \[\operatorname{tr}(X^{t}Y)=\operatorname{tr}(YX^{t})=\sum_{i=1}^{k}\sum_{j=1}^{k}x_{ij}y_{ij},\] for any \(k\times k\) matrices \(X=(x_{ij})\), \(Y=(y_{ij})\), one has \[S_{\lambda^{2}T}(M)=2^{-k}\int_{I_{k}}e^{\pi i\ \operatorname{tr}[(M^{t}M-\lambda^{2}T)X]}\,dX \tag{20}\] where \(dX=\prod_{1\leq i\leq j\leq k}dx_{ij}\). Moreover, if \(M^{t}M=\lambda^{2}T\) then \[\operatorname{tr}(T^{-1}M^{t}M)=\operatorname{tr}(MT^{-1}M^{t})=\operatorname{tr}(\lambda^{2}I)=k\lambda^{2}.\] Given \(l\in\mathbb{N}\) write \(\Lambda=2^{l}\) and \(\varepsilon=2^{-2l}\). We have \[S_{\lambda^{2}T}(M)=2^{-k}e^{k\varepsilon\lambda^{2}}\,\int_{I_{k}}e^{-\pi i\lambda^{2}\operatorname{tr}(TX)}\,e^{\pi i\,\operatorname{tr}(M(X+i\varepsilon T^{-1})M^{t})}dX. \tag{21}\] Let \[G_{X,\varepsilon}(M)=G_{X,\varepsilon}(y_{1},\dots,y_{k})=e^{\pi i\,\operatorname{tr}(M(X+i\varepsilon T^{-1})M^{t})}\] be the Gaussian function, where \(y_{1},\dots,y_{k}\in\mathbb{Z}^{d}\) are the column vectors of the matrix \(M\), and define the corresponding multi-linear operator \[B_{X,\varepsilon}(f_{1},\dots,f_{k})(x):=\sum_{y_{1},\dots,y_{k}\in\mathbb{Z}^{d}}f_{1}(x+y_{1})\dots f_{k}(x+y_{k})\,G_{X,\varepsilon}(y_{1},\dots,y_{k}). \tag{22}\] It follows that \[\mathcal{A}_{\lambda\Delta}(f_{1},\dots,f_{k})(x)=2^{-k}e^{k\varepsilon\lambda^{2}}|S_{\lambda\Delta}|^{-1}\int_{I_{k}}e^{-\pi i\lambda^{2}\operatorname{tr}(TX)}\,B_{X,\varepsilon}(f_{1},\dots,f_{k})(x)\,dX.\] Thus for the maximal function \[\mathcal{M}_{l}(f_{1},\dots,f_{k}):=\sup_{2^{l}\leq\lambda\leq 2^{l+1}}|\mathcal{A}_{\lambda\Delta}(f_{1},\dots,f_{k})|\] we have the pointwise estimate \[\mathcal{M}_{l}(f_{1},\dots,f_{k})(x)\leq C_{d,\Delta}\,\Lambda^{-k(d-k-1)}\,\int_{I_{k}}|B_{X,\varepsilon}(f_{1},\dots,f_{k})(x)|\,dX, \tag{23}\] as \(\varepsilon=\Lambda^{-2}=2^{-2l}\) and \(\Lambda\leq\lambda\leq 2\Lambda\). Finally, by Minkowski's inequality \[\|\mathcal{M}_{l}(f_{1},\dots,f_{k})\|_{2}\leq C_{d,\Delta}\,\Lambda^{-k(d-k-1)}\,\int_{I_{k}}\|B_{X,\varepsilon}(f_{1},\dots,f_{k})\|_{2}\,dX. \tag{24}\] Taking the Fourier transform of the expression in (22) we obtain that \(\widehat{B_{X,\varepsilon}}(f_{1},\dots,f_{k})(\xi)\) equals \[\int_{\mathbb{T}^{d(k-1)}}\widehat{f}_{1}(\xi_{1})\cdots\widehat{f}_{k-1}(\xi_{k-1})\widehat{f}_{k}(\xi-\xi_{1}-\cdots-\xi_{k-1})\,\widehat{G_{X,\varepsilon}}(\xi_{1},\dots,\xi_{k-1},\xi-\xi_{1}-\cdots-\xi_{k-1})\,d\xi_{1}\cdots d\xi_{k-1}. \tag{25}\] Thus by the Cauchy-Schwarz inequality and Plancherel's identity, one has \[\|B_{X,\varepsilon}(f_{1},\dots,f_{k})\|_{2}^{2}\leq\,\|\widehat{G_{X,\varepsilon}}\|_{\infty}^{2}\,\prod_{i=1}^{k}\|f_{i}\|_{2}^{2}. \tag{26}\] Thus, the \(\ell^{2}\times\cdots\times\ell^{2}\to\ell^{2}\) boundedness of the dyadic maximal operator \(\mathcal{M}_{l}(f_{1},\dots,f_{k})\) follows from the estimate \[\int_{I_{k}}\|\widehat{G_{X,\varepsilon}}\|_{\infty}\,dX\leq C_{d,\Delta}\,\Lambda^{k(d-k-1)} \tag{27}\] with \(\Lambda=2^{l}\). For the mollified estimate assume that \(\operatorname{supp}\widehat{f}_{i}\subseteq\Omega_{j,l}^{c}\), i.e. \(\widehat{f}_{i}=1_{\Omega_{j,l}^{c}}\widehat{f}_{i}\) for some \(1\leq i\leq k\).
By symmetry of the expression in (22) we may assume without loss of generality that \(i=1\). In this case, in equality (25), the function \(\widehat{G_{X,\varepsilon}}(\xi_{1},\dots,\xi_{k-1},\xi-\xi_{1}-\cdots-\xi_{k-1})\) can be replaced by \(1_{\Omega_{j,l}^{c}}(\xi_{1})\,\widehat{G_{X,\varepsilon}}(\xi_{1},\dots,\xi_{k-1},\xi-\xi_{1}-\cdots-\xi_{k-1})\), thus to prove Theorem 2, it is enough to show that for \(j,l\in\mathbb{N}\) with \(2^{j+2}\leq l\), one has \[\int_{I_{k}}\|1_{\Omega_{j,l}^{c}}(\xi_{1})\,\widehat{G_{X,\varepsilon}}(\xi_{1},\dots,\xi_{k})\|_{\infty}\,dX\leq C_{d,\Delta}\,2^{-j/2}j^{-1}\,\Lambda^{k(d-k-1)} \tag{28}\] with \(\Lambda=2^{l}\). ## 5. Estimates for theta functions on the Siegel upper half space. To prove estimates (27) and (28) we will follow the approach given in Section 5 of [15]. For the sake of completeness we recall below some of the basic notions and constructs. If \(M=[m_{1},\ldots,m_{k}]\in\mathbb{Z}^{d\times k}\) and \(\mathcal{X}=[\xi_{1},\ldots,\xi_{k}]\in\mathbb{R}^{d\times k}\) are \(d\times k\) matrices then one has that \(\operatorname{tr}(M^{t}\mathcal{X})=m_{1}\cdot\xi_{1}+\ldots+m_{k}\cdot\xi_{k}\) where \(\cdot\) denotes the usual dot product. Thus the Fourier transform of a function \(f(m_{1},\ldots,m_{k})=f(M)\) may be written \[\widehat{f}(\mathcal{X})=\widehat{f}(\xi_{1},\ldots,\xi_{k})=\sum_{M\in\mathbb{Z}^{d\times k}}f(M)e^{-2\pi i\operatorname{tr}(M^{t}\mathcal{X})}.\] This implies that \[\widehat{G}_{X,\varepsilon}(\mathcal{X})=\sum_{M\in\mathbb{Z}^{d\times k}}e^{\pi i\ \operatorname{tr}[M(X+i\varepsilon T^{-1})M^{t}-2M^{t}\mathcal{X}]}=\theta_{d,k}(X+i\varepsilon T^{-1},-\mathcal{X},0) \tag{29}\] is the theta-function \(\theta_{d,k}:\mathbb{H}_{k}\times\mathbb{R}^{d\times k}\times\mathbb{R}^{d\times k}\rightarrow\mathbb{C}\) defined by \[\theta_{d,k}(Z,\mathcal{X},\mathcal{E})=\sum_{M\in\mathbb{Z}^{d\times k}}e^{\pi i\ \operatorname{tr}[(M-\mathcal{E})Z(M-\mathcal{E})^{t}+2M^{t}\mathcal{X}-\mathcal{E}^{t}\mathcal{X}]} \tag{30}\] for \(Z=X+iY\in\mathbb{H}_{k}\), \(\mathbb{H}_{k}\) being the Siegel upper half space, see (5.1)-(5.3) in [15]. We partition the range of integration \(I_{k}\) and estimate the theta function separately on each part by exploiting its transformation properties. This may be viewed as the extension of the classical Farey arcs decomposition to \(k>1\). Recall the integral symplectic group \[\Gamma_{k}=\left\{\gamma=\left(\begin{array}{cc}A&B\\ C&D\end{array}\right);\ AB^{t}=BA^{t},\ CD^{t}=DC^{t},\ AD^{t}-BC^{t}=E_{k}\right\} \tag{31}\] which acts on the Siegel upper-half space \(\,\mathcal{H}_{k}=\{Z=X+iY:\ X\in\mathcal{M}_{k},Y\in\mathcal{P}_{k}\}\,\) as a group of analytic automorphisms; the action is defined by \(\gamma\langle Z\rangle=(AZ+B)(CZ+D)^{-1}\) for \(\gamma\in\Gamma_{k}\), \(Z\in\mathcal{H}_{k}\), see [15] and also [7]. Let us recall also the subgroup of integral modular substitutions: \[\Gamma_{k,\infty}=\left\{\gamma=\left(\begin{array}{cc}A&B\\ 0&D\end{array}\right);\ AB^{t}=BA^{t},\ AD^{t}=E_{k}\right\} \tag{32}\] Writing \(U=A^{t}\) and \(S=AB^{t}\), it is easy to see that \(D=U^{-1}\) and \(B=SU^{-1}\), moreover \(S\) is symmetric and \(U\in GL(k,\mathbb{Z})\), i.e. \(\det(U)=\pm 1\). The action of such \(\gamma\in\Gamma_{k,\infty}\) on \(Z\in\mathcal{H}_{k}\) takes the form: \[\gamma\langle Z\rangle=Z[U]+S \tag{33}\] using the notation \(Z[U]=U^{t}ZU\).
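For orientation, in the case \(k=1\) the group \(\Gamma_{1}\) is the classical modular group \(SL(2,\mathbb{Z})\) acting on the upper half-plane, the subgroup \(\Gamma_{1,\infty}\) consists of the matrices with \(U=\pm 1\) and \(S=s\in\mathbb{Z}\), acting by the integer translations \[\gamma\langle z\rangle=z+s,\] and the dissection described below reduces to the classical Farey arcs decomposition mentioned above.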
The general linear group \(GL(k,\mathbb{Z})\) acts on the space \(\mathcal{P}_{k}\) of positive \(k\times k\) matrices via the action \(Y\to Y[U],\ Y\in\mathcal{P}_{k}\); let \(\mathcal{R}_{k}\) denote the corresponding so-called Minkowski domain, see Definition 1 on p12 of [8]. A matrix \(Y=(y_{ij})\in\mathcal{R}_{k}\) is called reduced. We recall that for a reduced matrix \(Y\) \[Y\approx Y_{D}\,,\ \ \ \ y_{11}\leq y_{22}\leq\cdots\leq y_{kk} \tag{34}\] where \(Y_{D}=\operatorname{diag}(y_{11},\ldots,y_{kk})\) denotes the diagonal part of \(Y\), and \(A\approx B\) means that \(A-c_{k}B>0\), \(B-c_{k}A>0\) for some constant \(c_{k}>0\). For a proof of these facts, see Lemma 2 on p20 in [8]. A fundamental domain \(\mathcal{D}_{k}\) for the action of \(\Gamma_{k}\) on \(\mathcal{H}_{k}\), called the Siegel domain, consists of all matrices \(Z=X+iY\), \((X=(x_{ij}))\), satisfying \[Y\in\mathcal{R}_{k},\ \ \ |x_{ij}|\leq 1/2,\ \ \ \ |\det\left(CZ+D\right)|\geq 1,\ \ \ \forall\ \gamma=\left(\begin{array}{cc}A&B\\ C&D\end{array}\right)\in\Gamma_{k}. \tag{35}\] The second rows of the matrices \(\gamma\in\Gamma_{k}\) are parameterized by the so-called coprime symmetric pairs of integral matrices \((C,D)\), which means that \(CD^{t}\) is symmetric and the matrices \(GC\) and \(GD\) with a matrix \(G\) of order \(k\) are both integral only if \(G\) is integral, see Lemma 2.1.17 in [1]. It is clear from definition (5.6) that if \(\gamma_{2}=\gamma\gamma_{1}\) with second rows \((C_{2},D_{2})\) and \((C_{1},D_{1})\) for some \(\gamma\in\Gamma_{k,\infty}\), then \((C_{2},D_{2})=(UC_{1},UD_{1})\) for some \(U\in GL(k,\mathbb{Z})\). On the other hand, if both \(\gamma_{1}\) and \(\gamma_{2}\) have the same second row \((C,D)\) then \(\gamma_{2}\gamma_{1}^{-1}\in\Gamma_{k,\infty}\). This gives the parametrization of the group \(\Gamma_{k,\infty}\backslash\Gamma_{k}\) by equivalence classes of coprime symmetric pairs \((C,D)\) via the equivalence relation \((C_{2},D_{2})\sim(C_{1},D_{1})\) if \((C_{2},D_{2})=(UC_{1},UD_{1})\) for some \(U\in GL(k,\mathbb{Z})\), see also p.54 in [1]. We will use the notation \([\gamma]=[C,D]\in\Gamma_{k,\infty}\backslash\Gamma_{k}\). If one defines the domain \(\mathbb{F}_{k}=\cup_{\gamma\in\Gamma_{k,\infty}}\gamma{\mathcal{D}}_{k}\), then \(\ {\mathcal{H}}_{k}=\bigcup_{[\gamma]\in\Gamma_{k,\infty}\backslash\Gamma_{k}}\ \gamma^{-1}\mathbb{F}_{k}\) is a non-overlapping cover of the Siegel upper half-plane. Correspondingly, for a given matrix \(T>0\) of order \(k\), define the Farey arc dissection of level \(T\) as the cover \[I_{k}=\bigcup_{[\gamma]\in\Gamma_{k,\infty}\backslash\Gamma_{k}}\ I_{T}[\gamma],\ \ \ \ I_{T}[\gamma]=\{X\in I_{k}:\ X+iT^{-1}\in\gamma^{-1}\mathbb{F}_{k}\} \tag{36}\] We recall the basic estimates (5.14)-(5.16) in [15] whose proofs are based on the transformation property \[|\theta_{d,k}(Z,{\mathcal{X}},0)|=|\det\left(CZ+D\right)|^{-\frac{d}{2}}|\ \theta_{d,k}(\gamma\langle Z\rangle,\,{\mathcal{X}}A^{t}-K_{\gamma}/2,\,{\mathcal{X}}C^{t}-N_{\gamma}/2)|\] for some matrices \(K_{\gamma},N_{\gamma}\in\mathbb{Z}^{d\times k}\), see Proposition 5.2 in [15]. Namely, if \((C,D)\) is a coprime symmetric pair, then for \(Z\in I_{T}[C,D]\) one has \[|\theta_{d,k}(Z,{\mathcal{X}},0)|\leq C_{d,k}\,|\det\left(CZ+D\right)|^{-\frac{d}{2}} \tag{37}\] uniformly for \({\mathcal{X}}\in{\mathcal{M}}_{k}(\mathbb{R})\). Next we describe the "mollified" estimate (5.16) in [15] in a slightly different form.
For \(q\in\mathbb{N}\) and \(\tau>0\) define the region \[\Omega_{q,\tau}=\{{\mathcal{X}}\in\mathbb{R}^{d\times k}:\ |{\mathcal{X}}-P/2q|\leq\tau\ \ \text{for some}\ \ P\in\mathbb{Z}^{d\times k}\}. \tag{38}\] If \([\gamma]=[C,D]\) is a coprime symmetric pair with \(q:=|\det(C)|>0\), then for \(Z\in I_{T}[C,D]\) \[|\theta_{d,k}(Z,{\mathcal{X}},0)|\lesssim|\det\left(CZ+D\right)|^{-\frac{d}{2}}\,\left(e^{-c\,\tau^{2}\mu(C^{t}YC)}\right) \tag{39}\] uniformly for \({\mathcal{X}}\in\Omega_{q,\tau}^{c}\). Here \(Y=\operatorname{Im}\gamma\langle Z\rangle\), \(\min(Y)=\min_{x\in\mathbb{Z}^{k},x\neq 0}|Yx\cdot x|\) and \(\mu(Y)=\min_{x\in\mathbb{R}^{k},\,|x|=1}|Yx\cdot x|\). Define, similarly as in (5.20) in [15], \[J_{T}[C,D]=\int_{I_{T}[C,D]}\,\sup_{{\mathcal{X}}}|\theta_{d,k}(X+iT^{-1},-{\mathcal{X}},0)|\,dX. \tag{40}\] By (37) we have that \[J_{T}[C,D]\leq C_{d,k}\ J_{T}^{0}[C,D], \tag{41}\] where \[J_{T}^{0}[C,D]=\int_{X\in I_{T}[C,D]}|\det(CZ+D)|^{-\frac{d}{2}}\,dX. \tag{42}\] If \(q:=|\det(C)|>0\), then for \(\tau>0\) let \[J_{T,\tau}[C,D]:=\int_{I_{T}[C,D]}\,\sup_{{\mathcal{X}}}{\mathbf{1}}_{\Omega_{q,\tau}^{c}}({\mathcal{X}})\,|\theta_{d,k}(X+iT^{-1},-{\mathcal{X}},0)|\,dX. \tag{43}\] By estimate (39) one has \[J_{T,\tau}[C,D]\leq C_{d,k}\ J_{T}^{1}[C,D]+J_{T,\tau}^{2}[C,D], \tag{44}\] where \[J_{T}^{1}[C,D]=\int_{I_{T}[C,D]}|\det(CZ+D)|^{-\frac{d}{2}}\,e^{-c\,\min(Y)}\,dX \tag{45}\] \[J_{T,\tau}^{2}[C,D]=\int_{I_{T}[C,D]}|\det(CZ+D)|^{-\frac{d}{2}}\,e^{-c\tau^{2}\,\mu(C^{t}YC)}\,dX. \tag{46}\] where \(Y=\operatorname{Im}\,\gamma\langle Z\rangle\) and \(\gamma\in\Gamma_{k}\) such that \([\gamma]=[C,D]\in\Gamma_{k,\infty}\backslash\Gamma_{k}\). Then by inequalities (5.24)-(5.26) given in Propositions 5.3-5.4 in [15], we have \[\sum_{S^{t}=S}J_{T}[C,D+CS]\leq C_{d,k}\ \det(T)^{\frac{d-k-1}{2}}|\det(C)|^{-\frac{d}{2}} \tag{47}\] and \[\sum_{S^{t}=S}J_{T,\tau}[C,D+CS]\leq C_{d,k}\ \det(T)^{\frac{d-k-1}{2}}\big{(}|\det(C)|^{-k}\min(T)^{-\frac{d-2k}{4}}+|\det(C)|^{-\frac{d}{2}}(\tau^{2}\mu(T))^{-\frac{d-2k}{4}}\big{)} \tag{48}\] where the summation is over all symmetric integral matrices \(S\in{\mathcal{M}}_{k}(\mathbb{Z})\). Recall that the map \([C,D]\to C^{-1}D\) provides a one-one and onto correspondence between the classes of coprime symmetric pairs \([C,D]\in\Gamma_{k,\infty}\backslash\Gamma_{k}\), with \(\det(C)\neq 0\), and symmetric rational matrices \(R\) of order \(k\), and the pairs \([C,D+CS]\) correspond to the matrices \(R+S\) with symmetric \(S\in\mathbb{Z}^{k\times k}\). Let us write \(\mathbb{Q}(1)^{k\times k}\) for the space of modulo \(1\) incongruent symmetric rational matrices, where \(\mathbb{Q}(1)=\mathbb{Q}/\mathbb{Z}\), \(\mathbb{Q}\) being the set of rational numbers. If \(R=C^{-1}D\) for a coprime symmetric pair \([C,D]\) then we will write \[J_{T}[R]:=\sum_{S^{t}=S}J_{T}[C,D+CS], \tag{49}\] \[J_{T,\tau}[R]:=\sum_{S^{t}=S}J_{T,\tau}[C,D+CS], \tag{50}\] which is well-defined as it only depends on the equivalence class \([R]\in\mathbb{Q}(1)^{k\times k}\). Finally write \(d(R)=|\det(C)|\) for \(R=C^{-1}D\). Then by (30) and (40), we have with \(\varepsilon=\Lambda^{-2}\) that \[\int_{I_{k}}\sup_{\mathcal{X}}|\theta_{d,k}(X+i\varepsilon T^{-1},-\mathcal{X},0)|\,dX=\sum_{[C,D],\det(C)\neq 0}J_{\Lambda^{2}T}[C,D]+\sum_{[C,D],\det(C)=0}J_{\Lambda^{2}T}[C,D]=:\sum_{1}+\sum_{2}.
\tag{51}\] An estimate for the second sum is given in Corollary 5.1 in [15], namely it is shown that \[\sum_{2}\leq C_{d,k}\;|\Lambda^{2}T|^{(k-1)(d-k)/2}\leq C_{d,k}\Lambda^{(d-k)(k-1)} \tag{52}\] where \(|T|=(\sum_{ij}t_{ij}^{2})^{1/2}\) is the Euclidean norm of the matrix \(T\). For the first sum we use estimate (47) for the matrix \(\Lambda^{2}T\), which implies \[\sum_{1}=\sum_{[R]\in\mathbb{Q}(1)^{k\times k}}J_{\Lambda^{2}T}[R]\leq C_{d,k}\;\Lambda^{k(d-k-1)}\sum_{[R]\in\mathbb{Q}(1)^{k\times k}}d(R)^{-d/2}. \tag{53}\] Recall the following estimate, proved in Lemma 1.4.9 in [7]; for \(u\geq 1\) and \(s>1\) one has \[u^{-s}\sum_{1\leq d(R)\leq u}d(R)^{-k}+\sum_{d(R)\geq u}d(R)^{-k-s}\leq C(2+\frac{1}{s-1})\,u^{1-s} \tag{54}\] where the summation is taken over \([R]\in\mathbb{Q}(1)^{k\times k}\). In particular \(\sum_{R}d(R)^{-d/2}\lesssim 1\) in dimensions \(d>2k+2\), thus estimate (27) follows from (29), (51) and estimates (52)-(53). For the mollified estimate (28), we set \(\tau=2^{j-l}\) in addition to \(\Lambda=2^{l}\) and \(\varepsilon=2^{-2l}\). Again, we note that if \(q=|\det(C)|>0\) and if \(q\mid q_{j}\), i.e. if \(q\) divides \(q_{j}\), then \(\xi_{1}\in\Omega_{j,l}^{c}\) implies that \(\mathcal{X}\in\Omega_{q,\tau}^{c}\) for \(\mathcal{X}=(\xi_{1},\ldots,\xi_{k})\), for the sets \(\Omega_{j,l}\) and \(\Omega_{q,\tau}\) defined in (12) and (38). Using this observation, we have \[\int_{I_{k}}\sup_{\mathcal{X}}\mathbf{1}_{\Omega_{j,l}^{c}}(\xi_{1})|\theta_{d,k}(X+i\varepsilon T^{-1},-\mathcal{X},0)|\,dX\;\lesssim\;\sum_{d(R)\mid q_{j}}J_{\Lambda^{2}T,\tau}[R]+\sum_{d(R)\nmid q_{j}}J_{\Lambda^{2}T}[R]+\sum_{2}. \tag{55}\] In dimensions \(d\geq 2k+3\), using (48) and (54), the first sum on the right side of (55) is crudely estimated by \[\sum_{d(R)\mid q_{j}}J_{\Lambda^{2}T,\tau}[R]\lesssim\;\Lambda^{k(d-k-1)}\sum_{1\leq d(R)\leq q_{j}}\big{(}d(R)^{-k}\Lambda^{-\frac{d-2k}{2}}+d(R)^{-\frac{d}{2}}(\tau\Lambda)^{-\frac{d-2k}{2}}\big{)}\lesssim\;\Lambda^{k(d-k-1)}\big{(}q_{j}2^{-\frac{3l}{2}}+2^{-\frac{3j}{2}}\big{)}\;\lesssim\;\Lambda^{k(d-k-1)}2^{-\frac{3j}{2}}. \tag{56}\] Indeed, \(q_{j}=\operatorname{lcm}\{1\leq q\leq 2^{j}\}\approx e^{2^{j}}\leq 2^{l}\) as \(2^{j+2}\leq l\) by our assumptions. To estimate the second term on the right side of (55), we need the following. **Lemma 4**.: _Let \(j\in\mathbb{N}\) and \(s>1\). Then_ \[\sum_{d(R)\nmid q_{j}}d(R)^{-k-s}\leq C\,2^{j(1-s)}\,j^{-1} \tag{57}\] _where the constant \(C\) may depend on \(d,k\) and \(s\)._ Proof.: Let \[\Psi(s):=\sum_{[R]\in\mathbb{Q}(1)^{k\times k}}d(R)^{-k-s}=\sum_{n\geq 1}a_{k}(n)n^{-s}, \tag{58}\] with \(a_{k}(n)=\sum_{d(R)=n}d(R)^{-k}\). For two Dirichlet series \(\Psi(s)=\sum_{n\geq 1}a(n)n^{-s}\) and \(\Phi(s)=\sum_{n\geq 1}b(n)n^{-s}\) we will write \(\Psi(s)\preceq\Phi(s)\) if \(|a(n)|\leq b(n)\) for all \(n\geq 1\). It is proved in [7], see (34) in Lemma 1.4.9 there, that \[\Psi(s)\preceq\zeta(s+1)^{K}\zeta(s)=:\sum_{n\geq 1}b_{K}(n)n^{-s}, \tag{59}\] with \(K=2^{k}+k-3\). Clearly the coefficients of the Dirichlet series \(\zeta(s+1)^{K}\zeta(s)\) are multiplicative, i.e. \(b_{K}(nm)=b_{K}(n)b_{K}(m)\) if \((n,m)=1\); moreover it is easy to show that \[b_{K}(n)=\sum_{m|n}\frac{d_{K}(m)}{m}, \tag{60}\] where \(d_{K}(m)=|\{m_{1},\ldots,m_{K}\in\mathbb{N}:m_{1}m_{2}\cdots m_{K}=m\}|\). Since \(q_{j}=\operatorname{lcm}\{1\leq q\leq 2^{j}\}\), if \(n\nmid q_{j}\) then either there is a prime \(p>2^{j}\) such that \(p\mid n\) or there is a prime \(p<2^{j}\) such that \(p^{\gamma_{p}}>2^{j}\) but \(p^{\gamma_{p}}\mid n\).
Accordingly, we have the estimate \[\sum_{d(R)\nmid q_{j}}d(R)^{-k-s}=\sum_{n\nmid q_{j}}a_{k}(n)n^{-s}\leq\sum_{p>2^{j}}\sum_{n\geq 1}b_{K}(pn)p^{-s}n^{-s}+\sum_{p<2^{j}}\sum_{n\geq 1}b_{K}(p^{\gamma_{p}}n)p^{-\gamma_{p}s}n^{-s}. \tag{61}\] Writing \(n=p^{r}m\), the first sum on the right side of (61) is estimated by \[\sum_{p>2^{j}}\sum_{n\geq 1}b_{K}(pn)p^{-s}n^{-s}=\sum_{p>2^{j}}\sum_{r=1}^{\infty}\sum_{m\geq 1,\,p\nmid m}b_{K}(p^{r})b_{K}(m)p^{-rs}m^{-s}, \tag{62}\] using the fact that \(b_{K}(p^{r}m)=b_{K}(p^{r})b_{K}(m)\). By (60), we have \[b_{K}(p^{r})=1+\sum_{s=1}^{r}\frac{d_{K}(p^{s})}{p^{s}}\leq 1+\sum_{s=1}^{\infty}\frac{(s+1)^{K}}{2^{s}}\lesssim 1, \tag{63}\] uniformly in \(r\geq 1\). Thus, for \(s>1\), \[\sum_{p>2^{j}}\sum_{r=1}^{\infty}\sum_{m\geq 1,\,p\nmid m}b_{K}(p^{r})b_{K}(m)p^{-rs}m^{-s}\ \lesssim\ \sum_{p>2^{j}}p^{-s}\ \lesssim\ 2^{j(1-s)}j^{-1}, \tag{64}\] using the fact that the number of primes \(2^{J}\leq p<2^{J+1}\) is bounded by \(2^{J}\,J^{-1}\) for all \(J\geq j\). The second term on the right side of (61) is estimated similarly, except that here we use the fact that \(p^{\gamma_{p}}>2^{j}\) for \(p<2^{j}\). We have \[\sum_{p<2^{j}}\sum_{n\geq 1}b_{K}(p^{\gamma_{p}}n)p^{-\gamma_{p}s}n^{-s} =\sum_{p<2^{j}}\sum_{r=\gamma_{p}}^{\infty}\sum_{m\geq 1,\,p\nmid m}b_{K}(p^{r})b_{K}(m)p^{-rs}m^{-s}\] \[\lesssim\ \sum_{p<2^{j}}\sum_{r=\gamma_{p}}^{\infty}p^{-rs}\ \lesssim\ \sum_{p<2^{j}}p^{-\gamma_{p}s}\ \lesssim\ 2^{j(1-s)}j^{-1}, \tag{65}\] as the number of primes \(p<2^{j}\) is bounded by \(2^{j}j^{-1}\). Estimate (57) follows immediately from (64)-(65). In dimensions \(d>2k+2\), Lemma 4 with \(s=d/2-k\geq 3/2\) implies that \[\sum_{d(R)\nmid q_{j}}J_{\Lambda^{2}T}[R]\ \lesssim\ \Lambda^{k(d-k-1)}\sum_{d(R)\nmid q_{j}}d(R)^{-d/2}\ \lesssim\ \Lambda^{k(d-k-1)}2^{-j/2}j^{-1}, \tag{66}\] with \(\Lambda=2^{l}\). Finally, by (52), (55)-(56) and (66) one obtains, in dimensions \(d>2k+2\), \[\int_{I_{k}}\sup_{\mathcal{X}}\mathbf{1}_{\Omega^{c}_{j,k}}(\xi_{1})|\theta_{d,k}(X+i\varepsilon T^{-1},-\mathcal{X},0)|\,dX\ \lesssim\Lambda^{k(d-k-1)}\big{(}2^{-j/2}j^{-1}+2^{-3j/2}+2^{-3l}\big{)}\ \lesssim\ \Lambda^{k(d-k-1)}2^{-j/2}j^{-1}. \tag{67}\] Estimate (28) follows immediately from (29) and (67).
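As a quick numerical sanity check of the divisor-sum formula (60) and the uniform bound (63) used above, the following short Python sketch (purely illustrative; the value of \(K\) and the ranges tested are arbitrary choices, not part of the argument) computes \(d_{K}(m)\) and \(b_{K}(n)\) directly and verifies that \(b_{K}\) is multiplicative and that \(b_{K}(p^{r})\) stays bounded in \(r\).

```python
from fractions import Fraction
from functools import lru_cache
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def d_K(K, m):
    """Number of ordered K-tuples of positive integers whose product is m."""
    if K == 1:
        return 1
    return sum(d_K(K - 1, m // d) for d in divisors(m))

def b_K(K, n):
    """b_K(n) = sum over m | n of d_K(m)/m, as in (60)."""
    return sum(Fraction(d_K(K, m), m) for m in divisors(n))

K = 8  # for k = 3 one has K = 2^k + k - 3 = 8, as in (59)

# Multiplicativity: b_K(nm) = b_K(n) b_K(m) whenever (n, m) = 1.
for n in range(2, 16):
    for m in range(2, 16):
        if gcd(n, m) == 1:
            assert b_K(K, n * m) == b_K(K, n) * b_K(K, m)

# Bound (63): b_K(p^r) is bounded uniformly in r (shown here for p = 2).
print([float(b_K(K, 2 ** r)) for r in range(1, 11)])
```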
2303.07342
Observational Causal Inference in Novel Diseases: A Case Study of COVID-19
A key issue for all observational causal inference is that it relies on an unverifiable assumption - that observed characteristics are sufficient to proxy for treatment confounding. In this paper we argue that in medicine this assumption is more likely to hold in cases where standardized treatment guidelines do not yet exist. One example of such a situation is the emergence of a novel disease. We study the case of early COVID-19 in New York City hospitals and show that observational analysis of two important therapeutics, anti-coagulation and steroid therapy, gives results that agree with later guidelines issued via combinations of randomized trials and other evidence. We also argue that observational causal inference cannot be applied mechanically and requires domain expertise on the part of the analyst, by showing a cautionary tale of a treatment that appears extremely promising in the data but whose apparent benefit is due to a quirk of hospital policy.
Alexander Peysakhovich, Yin Aphinyanaphongs
2023-03-13T17:59:29Z
http://arxiv.org/abs/2303.07342v1
# Observational Causal Inference in Novel Diseases: A Case Study of COVID-19 ###### Abstract A key issue for all observational causal inference is that it relies on an unverifiable assumption - that observed characteristics are sufficient to proxy for treatment confounding. In this paper we argue that in medicine this assumption is more likely to hold in cases where standardized treatment guidelines do not yet exist. One example of such a situation is the emergence of a novel disease. We study the case of early COVID-19 in New York City hospitals and show that observational analysis of two important therapeutics, anti-coagulation and steroid therapy, gives results that agree with later guidelines issued via combinations of randomized trials and other evidence. We also argue that observational causal inference cannot be applied mechanically and requires domain expertise on the part of the analyst, by showing a cautionary tale of a treatment that appears extremely promising in the data but whose apparent benefit is due to a quirk of hospital policy. ## 1 Introduction Evaluating the effectiveness of therapies is a primary problem in medicine. The gold standard is the use of randomized trials (Concato et al., 2000). However, randomized trials can be expensive and time-consuming. A second form of evidence is the use of non-randomized data (e.g. data obtained during normal hospital operation) to perform observational causal inference. The biggest issue with observational causal inference is that patient treatment assignment is not randomized - in particular, treatment assignment and outcome are correlated, i.e. confounded (Angrist and Pischke, 2008; Imbens and Rubin, 2015). Thus, unlike in a randomized trial, a positive (or negative) difference between treated and control groups does not necessarily mean that the treatment causes this difference. A large set of observational causal inference methods (e.g. propensity score based methods (Rosenbaum and Rubin, 1983), model adjustment based methods (Van der Laan and Rose, 2011), or doubly robust methods (Bang and Robins, 2005)) attempt to deal with the issue by removing the confounding using observed characteristics of the patients. In essence this creates a synthetic treatment and control group where treatment is now 'as good as random.' The key assumption behind all of these methods is that observed characteristics are sufficient to proxy for the confounding (Imbens and Rubin, 2015). However, this assumption is by definition unverifiable and so in general the credibility of an effect derived by observational causal inference comes down to whether the "deconfounding" is likely to be sufficient (or not). In medicine some conditions have well entrenched treatment plans. In this case, we effectively lack the ability to match treated patients to a 'similar' control. This is because when a patient that meets criteria for treatment is not treated, it is for a specific reason that makes that patient 'special' (perhaps for unobservable reasons) and thus not necessarily a good match for a treated patient. By contrast, sometimes we do not have well entrenched treatment plans. For example, in the case of novel diseases, information changes day to day, doctors have little experience with the disease, and different doctors may choose to take different actions for similar patients. In this case we argue that observational causal inference can be extremely useful, as there will be considerable overlap between treated and control distributions.
We study the case of the early COVID-19 outbreak in New York City. During this phase there were two major causes of patient degradation - thrombotic events (clotting) (Cantador et al., 2020) and inflammation (Shang et al., 2020). We use observational causal inference to look at the effectiveness of two therapies - aggressive anti-coagulation and the use of steroids. We show that general use of therapeutic anti-coagulation (relative to prophylactic) appears not to have obvious benefits. By contrast, we show that steroid therapy does have positive effects. This is precisely the guidance that emerged over the next year of COVID-19 treatment due to randomized trials (RECOVERY Group, 2021; Sadeghipour et al., 2021; REMAP-CAP Investigators, 2021). There is no statistical test for whether the de-confounding assumption holds (Angrist and Pischke, 2008). Thus observational causal inference requires analysts to get their hands dirty with understanding the data generating process. We show a cautionary tale - an example of a therapy, factor XA inhibitors, where patients who receive it appear to have better outcomes, even after adjusting for other observed factors. However, this positive association is unlikely to be causal - rather, patients who were deemed well enough to be discharged from the hospital were often given a factor XA inhibitor prescription and started on the treatment in their last few days. Taken together, we argue that observational causal inference can be especially useful in the context of novel diseases where a lack of treatment protocols creates the necessary overlap in treated/control distributions. While observational results are not guaranteed to be causal without additional assumptions, knowledge of the data generating mechanism by the analyst can make these assumptions more plausible. Observational causal inference results should be used as complements to physician knowledge to help come up with treatment plans as well as to generate potential hypotheses to be verified in more expensive randomized trials. ## 2 Dataset We use a de-identified dataset of patients admitted to a New York area hospital system for COVID-19 between March and May of 2020. For all of the patients we construct a set of covariates including demographics (age, sex, race/ethnicity, smoking status), known comorbidities (prior myocardial infarction, heart failure, vascular disease, dementia, pulmonary disease, rheumatoid disease, peptic ulcer disease, diabetes, cancer, liver disease, HIV/AIDS, and the compound Charlson Score), out-patient medications (whether the patient was taking analgesics, antihistamines, antiarthritics, antifungals, antibiotics, antiparasitics, anticoagulants, antihyperglycemics, tumor necrosis inhibiting agents, antineoplastics, antiparkinson drugs, antiplatelet drugs, antivirals, cardiovascular drugs, contraceptives, cough preparations, CNS drugs, autonomic drugs, diuretics, gastrointestinal drugs, hormones, immunosuppressants, muscle relaxants, pre-natal vitamins, psychotherapeutic drugs, sedatives, smoking deterrents, thyroid drugs, or vitamins), as well as the first labs taken within the first 36 hours of hospital stay (we use 36 hours as we have access to lab result times rather than lab order times; see below for more description of the labs used), and summary statistics (mean, min, max) of vital signs (heart rate, oxygen saturation, blood pressure, temperature, respiratory rate) from the first 24 hours of hospital stay. ### Lab Values Not all patients have all lab values.
To deal with this we first drop all labs with more than \(20\%\) missing values. As an exception to this rule we keep D-dimer (a measure of coagulation) as a covariate in our anti-coagulation analyses. It is missing for \(25\%\) of patients, but we have prior knowledge that it is used by clinicians in treatment assignment for anti-coagulation. Many of these labs are extremely right skewed, so we use a log transformation to normalize them. In addition, we see some examples of values outside of biological plausibility due to data quality issues. To deal with these we perform a winsorization at the 99th percentile for each lab. The final list of labs that we use is: Albumin, Alkaline Phosphatase, ALT, AST, Bilirubin, Blood Urea Nitrogen, C-Reactive Protein, Calcium, Chloride, Creatinine, D-Dimer, Eosinophils, Ferritin, Hematocrit, LDH, Lymphocytes, Platelet Volume, Monocytes, Neutrophils, Potassium, Protein, Prothrombin Time, Sodium, WBC. ### Outcome Measure Construction As our primary outcome of interest we use 21 organ support free days (21OSFD). We chose this measure after consulting with physicians working in our hospital, as this measure is employed in existing randomized trials (e.g. Abdelhady et al., 2021). The 21 organ support free days measure is the number of days that a patient does not require pressors, renal replacement therapy, high-flow oxygen, or invasive mechanical ventilation. Patients that die during the course of their stay receive a \(-1\) for the measure. We look at patients that are admitted at least 21 days before the end of our observation period, thus we do not have censoring in our data. ## 3 Treatments Studied We study two treatments: aggressive anti-coagulation therapy and steroids. We describe the medical details of each treatment that guide some of our analysis choices below. ### Anticoagulation A major driver of adverse outcomes in COVID-19 patients is thrombosis (Bilaloglu et al., 2020). The standard protocol for dealing with clotting is the use of anti-coagulation (AC). AC is typically broken down into two doses - a smaller or prophylactic dose which is used to prevent clots from forming and a larger or therapeutic dose which is typically given when a clot is already detected (Lloyd et al., 2008). At the time this data was collected, there was major debate about whether the prevalence of thrombotic events in COVID patients should lead to a more aggressive AC strategy. Here we evaluate the use of such an aggressive strategy. Note that almost all patients at our hospital receive some form of anticoagulation therapy during their stay - \(92\%\) of patients in our data receive some AC with a median time to first AC dose of \(8\) hours, so we are not evaluating a "no AC" arm. We focus on two possible strategies: aggressive early AC versus a more conservative strategy of beginning with prophylactic and moving to a larger dose later if it is needed. We define our treatment groups as individuals who receive either therapeutic levels (treatment) or only prophylactic levels (control) of AC for their first \(72\) hours in the hospital. We remove individuals who receive more than one level of AC during this period.1 Footnote 1: The 72 hour window was chosen after consultation with physicians. Varying the window to be 24, 48, 72, or 96 hours does not change the main results. The tradeoff is that using a larger window decreases our sample size since we remove patients that receive multiple levels of AC during the window of study.
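To make the treatment definition concrete, here is a minimal Python sketch of the assignment rule just described (therapeutic vs. prophylactic AC within the first 72 hours, dropping patients who receive both levels in the window). The record format and column names are hypothetical, not the paper's actual schema; the dose-based mapping of individual drugs to the two levels is spelled out in the next paragraph.

```python
from collections import defaultdict

# Hypothetical medication-administration records:
# (patient_id, hours_since_admission, ac_level), where ac_level has already been
# mapped to "therapeutic" or "prophylactic" using dose rules like those in the text.
records = [
    ("p1", 6.0, "prophylactic"), ("p1", 30.0, "prophylactic"),
    ("p2", 4.0, "therapeutic"),  ("p2", 50.0, "therapeutic"),
    ("p3", 10.0, "prophylactic"), ("p3", 40.0, "therapeutic"),  # mixed levels -> excluded
]

WINDOW_HOURS = 72

def assign_ac_treatment(records, window=WINDOW_HOURS):
    """Return {patient_id: 1 for aggressive (therapeutic) AC, 0 for prophylactic only},
    dropping patients who receive more than one AC level inside the window."""
    levels_seen = defaultdict(set)
    for pid, hours, level in records:
        if hours <= window:
            levels_seen[pid].add(level)
    assignment = {}
    for pid, levels in levels_seen.items():
        if len(levels) == 1:  # exclusion rule: mixed-level patients are removed
            assignment[pid] = 1 if "therapeutic" in levels else 0
    return assignment

print(assign_ac_treatment(records))  # {'p1': 0, 'p2': 1}; p3 is dropped
```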
The definition of therapeutic AC includes all intravenous Heparin, Rivaroxaban, Warfarin, Dabigatran, or high dose (\(\geq\)60mg) Enoxaparin. Preventative/prophylactic AC includes subcutaneous heparin with \(\leq\) 15,000 units per 24 hours, or low dose (less than 60mg) Enoxaparin. The vast majority of AC treatment in our sample is Heparin. From speaking to clinicians, we know that D-dimer was used as an indicator of possible severe clotting during this time. We see in our data that almost all patients with extremely high (\(\geq 3000\) ng/mL) D-dimer levels receive aggressive AC. For this reason, we restrict our analysis to patients with D-dimer levels below \(3000\). Overall \(23\%\) of our sample receives the aggressive treatment. Figure 1 below shows the observed probability of aggressive treatment by baseline D-dimer as well as the distribution of D-dimer by treatment group. We see that there is substantial overlap in the 'elevated' category, which also makes up the majority of our patients. ### Steroid Therapy A second major driver of adverse outcomes for COVID-19 patients is inflammation-mediated lung injury. A treatment for inflammation is the use of glucocorticoids (steroids). In our analysis a patient is deemed to be receiving steroid treatment if they receive steroids (Dexamethasone, Hydrocortisone, or Prednisone) within their first 72 hours of hospital admission. Here our complete cases analysis includes \(2282\) patients with \(190\) (approximately \(8\%\)) of them receiving the steroid treatment. Unlike in the AC analysis we do not have exclusion criteria based on lab results, as we did not learn of any single lab that was used by all clinicians. Rather, combinations of factors led some (but not others) to treat using steroids. The prevailing consensus at the time of the COVID-19 outbreak in New York City was mixed (Shang et al., 2020; Russell et al., 2020), with randomized trials (RECOVERY Group, 2021) only available much later. Though the use of steroids became common after the randomized trials, in our sample we see that only \(\sim 8\%\) of our patients receive steroids in their first 72 hours of hospital stay. ## 4 Results For both treatments we look at 3 analyses: first, the overall (unadjusted) correlation between treatment and outcome. Second, we adjust for our observed covariates in a linear regression and report the coefficient on treatment. Third, we perform propensity score matching. To do the propensity score matching we use a logistic regression on the full set of covariates to construct the propensity scores. Because most units are not treated, we matched 3 control units per 1 treated unit with a caliper of size \(.05\). All analyses were done using the R package Matching (Sekhon, 2008). We estimate the average treatment effect on the treated (ATT) in our propensity score match. Recent work has shown that regression in observational data estimates something close to the ATT (Sloczynski, 2020) when the proportion of treated individuals is small (as in both our cases), so this makes our estimates comparable. ### Overlap Analysis We investigate the quality of our matching analysis by looking at overlap in propensity scores as well as pre- and post-match balance on the covariates. Because we have a large number of covariates, we show only 'important' ones in the balance plot. We select these important covariates by running two cross-validated L1 regularized regressions using the R package glmnet (Hastie et al., 2016) to predict the outcome and the treatment.
We then take the union of the set of covariates selected by these models (note that in theory only variables which affect _both_ the outcome and the treatment are actually confounders, so this is a very conservative selection). Figure 2 shows the propensity score distributions for both treatments. We see that there are some unmatchable samples (examination of our data shows that these examples appear to be very sick patients that receive aggressive treatment). However, there is a large overlap in distributions which allows for matching. Figure 1: Higher D-dimer patients were much more likely to receive aggressive AC as we would expect (panel a). We see that there is substantial overlap in D-dimer across treatments when extremely high levels (\(\geq 3000\)) are removed. We drop \(84\) (approximately \(33\%\)) treated patients in our matching for the AC analysis and \(44\) (\(\sim 23\%\)) treated patients in our steroid analysis. Figure 3 shows balance plots2 for both treatments for the 'important' variables as defined above. We see that the propensity matching produces relatively comparable distributions in terms of means in both treatments. Footnote 2: We used the R package cobalt ([https://cran.r-project.org/web/packages/cobalt/index.html](https://cran.r-project.org/web/packages/cobalt/index.html)) to generate these plots. Figure 2: Propensity score distributions from our dataset. We are able to match \(66\%\) of aggressive AC treated patients and \(77\%\) of steroid treated patients when using a caliper of size \(.05\). ### Causal Effect Estimates Finally, we show our causal effect estimates in Figure 4. We see that without balancing/adjustment there is a baseline negative association between both treatments and outcome. Thus, baseline sicker patients are more likely to receive both treatments. However, after adjustment or matching we see that there is now a positive association between steroid treatment and outcome (regression point estimate \(1.34\) and \(95\%\) CI = \([.11,2.57]\), \(p<.05\); matching point estimate \(1.36\) and \(95\%\) CI = \([-.13,2.85]\), \(p=.07\)). By contrast we see little effect of more aggressive AC therapy, with point estimates near 0 (regression point estimate \(-.55\) with \(95\%\) CI = \([-1.82,.75]\), matching point estimate \(-.27\) with \(95\%\) CI = \([-1.8,.8]\)). ### External Validation Our observational results, derived from data in the early months of the pandemic, are consistent with treatment policies which evolved over the course of the pandemic (www.covid19treatmentguidelines.nih.gov/therapies/antithrombotic-therapy/) as well as with randomized trial data showing no beneficial effect of starting aggressive therapeutic level AC compared to standard care (Sadeghipour et al., 2021). Other observational studies hint at heterogeneous effects (Paranjpe et al., 2020). We lack the statistical power to look for anything other than a main effect in our population (recall the rule of thumb that testing a single heterogeneous cut requires about \(16\times\) the data compared to testing a main effect (Gelman, 2018)). By contrast, steroids (in particular, Dexamethasone) have been shown to be effective at reducing adverse events in COVID patients in randomized trials (RECOVERY Group, 2021), with other observational work looking for heterogeneous effects (Lengerich et al., 2021).
Steroid therapy is a standard tool for practitioners treating COVID patients (see [https://www.covid19treatmentguidelines.nih.gov/management/clinical-management/hospitalized-adults--therapeutic-management/](https://www.covid19treatmentguidelines.nih.gov/management/clinical-management/hospitalized-adults--therapeutic-management/)). ## 5 Conclusion and Caution While the prior sections may seem to suggest that observational causal inference is relatively straightforward in cases where treatment plans are not set in stone, this is quite far from the truth. Here we will argue that causal inference is not simply a statistical exercise, but one which requires knowledge about the underlying data generating process from the analyst. We consider another class of anti-coagulant drugs, factor XA inhibitors (FXI). These drugs are different from the mostly Heparin based AC that we considered earlier in two ways: first, they affect a different part of the clotting pathway than Heparin (Ansell, 2007); second, they are available in oral form while most of the AC we considered before is either subcutaneously or intravenously administered. Figure 4: In both AC and steroid treatments sicker patients are more likely to receive the treatment. However, after adjustment or matching we see that there is no strong association between AC treatment and outcome and a positive association between steroid treatment and outcome. Error bars indicate \(95\%\) confidence intervals. We consider a naive definition of treatment where we define a patient as treated \((N_{treated}=318)\) if they received any factor XA inhibitors during their time in the hospital \((N_{control}=1963)\). We perform the same analyses (linear model with adjustment, propensity score matching) as in the main analysis. Figure 5 shows the balance plots (as above we show only 'important' variables selected via Lasso regressions) as well as the effect estimates. The model results appear to show that FXI has a large (the size of steroids) protective effect on patients. However, this is likely an artifact of the data generating process - we know that at the time these data were collected, patients who were to be discharged from the hospital were prescribed Factor XA inhibitors as out-patient treatment and started on them before the discharge occurred. Part of our erroneous conclusion comes from the way we define the treatment variable - by looking at individuals who _ever_ receive treatment rather than individuals who receive the treatment early in their stay (as in the AC/steroids analyses). We do note that there appear to be some hints in the results that something is off. First, we see that the baseline correlation between treatment and outcome is weakly positive. This means that in general we do not see the relationship that we see in the steroid/AC treatments, where sicker patients are more likely to get the treatment. While there are plausible reasons for this, it does signal us to be somewhat cautious. Second, we see that even after matching we have an imbalance in the important covariates. This further suggests that very different individuals are receiving treatment and control.
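The failure mode described above is easy to reproduce in a toy simulation. The sketch below is purely illustrative (made-up numbers, no relation to the actual hospital data): the drug has no causal effect, but because it is preferentially started on patients who survive to discharge, a naive 'ever treated' comparison shows a large spurious benefit.

```python
import random

random.seed(0)

def simulate(n=20000):
    rows = []
    for _ in range(n):
        severity = random.random()                    # latent severity in [0, 1]
        died = random.random() < 0.1 + 0.5 * severity
        # Outcome in the spirit of 21 organ-support-free days: worse when sicker, -1 if died.
        outcome = -1 if died else round(21 * (1 - severity) * random.uniform(0.7, 1.0))
        # The drug does NOTHING causally, but survivors heading to discharge are
        # started on it, so the "ever treated" group is selected for survival.
        ever_treated = (not died) and random.random() < 0.6
        rows.append((ever_treated, outcome))
    return rows

rows = simulate()
treated = [y for t, y in rows if t]
control = [y for t, y in rows if not t]
print("mean outcome, ever treated: ", sum(treated) / len(treated))
print("mean outcome, never treated:", sum(control) / len(control))
# The treated group looks far better even though the treatment has no effect at all.
```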
The key point remains that observational causal inference can be a useful tool in medicine, especially during times when treatment guidelines are not yet set in stone. In both the AC and steroid treatments we cleanly recover results that are confirmed by randomized controlled trials. However, gaining knowledge from observational studies requires carefully defining treatment variables and understanding the data generating process, as our factor XA example shows. Figure 5: Model results appear to show that Factor XA has a large (the size of steroids) protective effect on patients. However, this is likely an artifact of the data generating process - we know that at the time these data were collected, patients who were to be discharged from the hospital were prescribed Factor XA inhibitors as out-patient treatment and started on them before the discharge occurred.
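For readers who want to reproduce the general shape of the Section 4 pipeline, here is a compact Python sketch on synthetic data. It is an approximation, not the authors' code: the paper used the R packages Matching, glmnet and cobalt, while the version below uses scikit-learn for the propensity model and a hand-rolled 3:1 caliper match followed by an ATT-style difference in means.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort (illustration only): X = covariates, t = treatment, y = outcome.
n, p = 2000, 10
X = rng.normal(size=(n, p))
logit = -1.5 + X[:, 0] + 0.5 * X[:, 1]                 # confounded treatment assignment
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
y = 2.0 * t + X[:, 0] - X[:, 1] + rng.normal(size=n)   # true treatment effect = 2.0

# 1. Propensity scores from a logistic regression on the full set of covariates.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# 2. 3:1 nearest-neighbor matching on the propensity score with a 0.05 caliper.
caliper = 0.05
treated_idx = np.flatnonzero(t == 1)
control_idx = np.flatnonzero(t == 0)
att_terms, dropped = [], 0
for i in treated_idx:
    dist = np.abs(ps[control_idx] - ps[i])
    nearest = control_idx[np.argsort(dist)[:3]]
    within = nearest[np.abs(ps[nearest] - ps[i]) <= caliper]
    if len(within) == 0:
        dropped += 1                                    # treated unit with no acceptable match
        continue
    att_terms.append(y[i] - y[within].mean())

# 3. ATT estimate: average treated-minus-matched-controls difference.
print("ATT estimate:", float(np.mean(att_terms)), "| treated units dropped:", dropped)
```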
2310.09900
Algebra and Combinatorics of the Brion map on Generalized Permutahedra
The Brion morphism maps a generalized permutahedron to a collection of posets associated to its vertices. We compute this map explicitly for the Hopf monoids of permutahedra, associahedra, and orbit polytopes, and we explore the dual Brion map of the primitive Lie monoids associated to these three Hopf monoids. We describe the Lie monoid structure of the primitives in this dual setting and in particular we show that the Lie monoid of primitives of associahedra is isomorphic to the positive part of the Witt Lie algebra
Alvaro Cornejo, Mariel Supina
2023-10-15T18:03:16Z
http://arxiv.org/abs/2310.09900v1
# Algebra and combinatorics of the Brion map on generalized permutahedra ###### Abstract The Brion morphism maps a generalized permutahedron to a collection of posets associated to its vertices. We compute this map explicitly for the Hopf monoids of permutahedra, associahedra, and orbit polytopes, and we explore the dual Brion map of the primitive Lie monoids associated to these three Hopf monoids. We describe the Lie monoid structure of the primitives in this dual setting and in particular we show that the Lie monoid of primitives of associahedra is isomorphic to the positive part of the Witt Lie algebra. ## 1 Introduction Joyal [9], Joni and Rota [8], Schmitt [14], and others used Hopf algebras as an algebraic framework to study combinatorial objects, like posets, graphs, and matroids, as these objects have natural operations of merging and breaking. Aguiar and Mahajan [3] provided a useful framework to study combinatorial objects in the contexts of species and Hopf monoids, which keep track of more combinatorial information than Hopf algebras. Aguiar and Ardila [1] showed that the family of polyhedra called generalized permutahedra have the structure of a Hopf monoid, and in fact showed that this is the largest family of polytopes that support this structure. They also showed that generalized permutahedra contain many other combinatorial Hopf monoids, like posets, graphs, matroids, and more. When studying algebraic structures, it is often useful to study its structure in two ways: externally by seeing how structure-preserving maps behave, and internally by studying elements of the structure themselves. We explore these two directions. In the internal perspective, in a cocommutative Hopf monoid \(\mathbf{H}\), the primitive elements \(\mathcal{P}(\mathbf{H})\) play an important role due to a variant of the Cartier-Milnor-Moore theorem [6, 12, 2]. We provide necessary background on this topic in Section 2. In the context of Hopf monoids this theorem states that if we have a cocommutative Hopf monoid, then \(\mathbf{H}\) is exactly the same as the universal enveloping monoid on the primitives (see Theorem 2.14). While \(\mathbf{GP}\) is not cocommutative, it is commutative. This motivates us to consider its dual, which is cocommutative (see Proposition 2.1), allowing us to recover the dual Hopf monoid \(\mathbf{GP}^{*}\) from \(\mathcal{P}(\mathbf{GP}^{*})\). There is even more structure at play: A Hopf monoid \(\mathbf{H}\) has the structure of a Lie monoid with the commutator \(x\cdot y-y\cdot x\) as the Lie bracket, and in fact the primitives \(\mathcal{P}(\mathbf{H})\) form a Lie submonoid. In Section 3 we describe the primitives for permutahedra, associahedra, and orbit polytopes and each of their Lie brackets. In particular, for associahedra we find that the Lie monoid is isomorphic to the positive part of the Witt Lie algebra. In the external perspective, Ardila and Sanchez [4] introduced several morphisms of Hopf monoids such as the Brianchon-Gram morphism and the Brion morphism. The latter was motivated by Brion's theorem, which expresses the generating function of the lattice points of a polytope in terms of the generating function for the tangent cones of the vertices. In the context of Hopf monoids, the Brion map is a Hopf monoid morphism from the Hopf monoid of (extended) generalized permutahedra \(\mathbf{GP}^{+}\) to the Hopf monoid of posets \(\mathbf{P}\), as the tangent cones of the vertices of generalized permutahedra are encoded by posets. 
In Section 4, we calculate the Brion map on the standard permutahedra, associahedra, and orbit polytopes. Each of these families feature an elegant combinatorial structure. Duality allows us to consider the dual Brion map \(\mathcal{P}(\mathbf{P}^{*})\to\mathcal{P}(\mathbf{GP}^{*})\), where it suffices to restrict our domains to the primitives in light of the Cartier-Milnor-Moore theorem. From our inclusion maps, we can restrict the images to be \(\mathcal{P}(\overline{\mathbf{\Pi}}^{*})\), \(\mathcal{P}(\overline{\mathbf{A}}^{*})\), and \(\mathcal{P}(\overline{\mathbf{OP}}^{*})\). In Section 5, we find this dual Brion map on permutahedra, associahedra, and orbit polytopes. ## 2 Background In this section we introduce necessary context regarding Hopf monoids, generalized permutahedra, and the Cartier-Milnor-Moore Theorem. ### Hopf Monoids Let \(\mathbb{K}\) be a field of characteristic \(0\). A **vector species**\(\mathbf{H}\) assigns a \(\mathbb{K}\)-vector space \(\mathbf{H}[I]\) to each finite set \(I\), and a linear map \(\mathbf{H}[\sigma]:\mathbf{H}[I]\to\mathbf{H}[J]\) to each bijection \(\sigma:I\to J\) between two finite sets. The maps must have the property that \(\mathbf{H}[\mathrm{id}]=\mathrm{id}\) and \(\mathbf{H}[\sigma\circ\tau]=\mathbf{H}[\sigma]\circ\mathbf{H}[\tau]\). A **morphism** of vector species \(f:\mathbf{H}\to\mathbf{K}\) is a collection of maps \(f_{I}:\mathbf{H}[I]\to\mathbf{K}[I]\) which satisfy the naturality axiom: for each bijection \(\sigma:I\to J\), we have \(f_{J}\circ\mathbf{H}[\sigma]=\mathbf{K}[\sigma]\circ f_{I}\). With the structure above we can now construct a Hopf monoid. A **connected Hopf monoid in vector species** is a vector species with \(\mathbf{H}[\emptyset]=\mathbb{K}\) endowed with two families of linear maps \(\mu_{S,T}:\mathbf{H}[S]\otimes\mathbf{H}[T]\longrightarrow\mathbf{H}[I]\) and \(\Delta_{S,T}:\mathbf{H}[I]\longrightarrow\mathbf{H}[S]\otimes\mathbf{H}[T]\) for each decomposition \(I=S\sqcup T\). These maps must satisfy naturality, unitality, associativity, and compatibility (see [3, 1]). We call the collection of the \(\mu\)-maps the **product** and the collection of the \(\Delta\)-maps the **coproduct** of \(\mathbf{H}\). A **symmetric braiding** of a Hopf monoid \(\mathbf{H}[I]\) is a natural isomorphism \(\beta_{S,T}:\mathbf{H}[S]\otimes\mathbf{H}[T]\to\mathbf{H}[T]\otimes\mathbf{H }[S]\)[3, section 1.1.2]. In all our applications, \(\beta\) swaps or "braids" the \(\mathbf{H}[S]\) and \(\mathbf{H}[T]\) components of a tensor. Let \(x\in\mathbf{H}[S]\), \(y\in\mathbf{H}[T]\), and \(z\in\mathbf{H}[I]\) for some decomposition \(I=S\sqcup T\). Intuitively and in the many contexts of this paper, the product will describe how to merge the combinatorial structures \(x\) with labels in \(S\) and \(y\) with labels in \(T\) into one structure \(x\cdot y\) with labels in \(I\). The coproduct will describe how to break up a structure \(z\) with labels in \(I\) into "two" structures, the **restriction**\(z|_{S}\) with labels in \(S\) and the **contraction**\(z/_{S}\) with labels in \(T\). A **morphism of Hopf monoids**\(f:\mathbf{H}\to\mathbf{K}\) is a morphism of vector species which also commutes with the products and coproducts. We say \(\mathbf{H}\) is **commutative** if for all \(I=S\sqcup T\) with \(x\in\mathbf{H}[S]\) and \(y\in\mathbf{H}[T]\), it holds that \(\mu_{S,T}(x\otimes y)=\mu_{T,S}(y\otimes x)\), and **co-commutative** if \(\Delta_{S,T}(z)=\beta_{T,S}(\Delta_{T,S}(z))\). 
In some settings it is useful to consider Hopf algebras, which can be obtained from combinatorial Hopf monoids by "forgetting" the labelings of objects to give more insight into the global structure. We will use the **Fock functor**\(\mathcal{F}\) which takes a Hopf monoid in vector species \(\mathbf{H}\) to a graded Hopf algebra \(H\) (see [3]). ### Duality of Hopf Monoids Given a vector species \(\mathbf{H}\), the **dual vector species**\(\mathbf{H}^{*}\) is the vector species where for a finite set \(I\), \(\mathbf{H}^{*}[I]\) is the vector space dual to \(\mathbf{H}[I]\), and the map \(\mathbf{H}^{*}[\sigma]:\mathbf{H}^{*}[I]\to\mathbf{H}^{*}[J]\) is given by the inverse map to the dualization of \(\mathbf{H}[\sigma^{-1}]\). Given a basis for \(\mathbf{H}[I]\), there is a dual basis for \(\mathbf{H}^{*}[I]\) consisting of \(b^{*}:\mathbf{H}[I]\to\mathbb{K}\) for each basis element \(b\in\mathbf{H}[I]\) where \(b^{*}(c)\) is \(1\) if \(c=b\) and \(0\) otherwise. If \(\mathbf{H}\) is a Hopf monoid in vector species, the **dual Hopf monoid**\(\mathbf{H}^{*}\) is the dual vector species whose product and coproduct are dual morphisms to the coproduct and product of \(\mathbf{H}\) respectively, viewed as linear maps. For a Hopf monoid with the property that the coproduct of a basis element is always a simple tensor of basis elements (as is the case with the Hopf monoids mentioned in this article), the dual product and coproduct are \[\mu^{*}_{S,T}(x^{*}\otimes y^{*})=\sum_{\Delta_{S,T}(z)=x\otimes y}z^{*},\qquad \Delta^{*}_{S,T}(z^{*})=\sum_{\mu_{S,T}(x\otimes y)=z}x^{*}\otimes y^{*}. \tag{2.1}\] As a consequence of (2.1), we obtain the following well-known relationship between a Hopf monoid and its dual. **Proposition 2.1**.: _If \(\mathbf{H}\) is commutative (resp. co-commutative), then \(\mathbf{H}^{*}\) is cocommutative (resp. commutative)._ This also results in a relationship between the primitive and indecomposable elements, which can be viewed as the "basic building blocks" of Hopf monoids. The element \(z\in\mathbf{H}[I]\) is **primitive** if for any nontrivial decomposition \(S\sqcup T=I\), we have \(\Delta_{S,T}(z)=0\); it is **indecomposable** if \(\mu_{S,T}(x\otimes y)=z\) implies that \(S\) or \(T\) is the empty set. We denote the set of primitives of \(\mathbf{H}\) by \(\mathcal{P}(\mathbf{H})\). We use the following result about finite-dimensional Hopf monoids (see e.g. [2]): **Proposition 2.2**.: _If \(z\in\mathbf{H}[I]\) is primitive then \(z^{*}\in\mathbf{H}^{*}[I]\) is indecomposable, and if \(z\in\mathbf{H}[I]\) is indecomposable then \(z^{*}\in\mathbf{H}^{*}[I]\) is primitive._ ### Generalized Permutahedra Let \(I\) be a finite set. Denote the linearization of \(I\) by \(\mathbb{R}I\) and the dual of \(\mathbb{R}I\) by \((\mathbb{R}I)^{*}\) where \((\mathbb{R}I)^{*}=\{\text{functions }y:I\to\mathbb{R}\}\). Such a \(y\in(\mathbb{R}I)^{*}\) is called a **linear functional**. **Definition 2.3**.: A **fan**\(\mathcal{F}\) is a collection of cones where, * If \(C\in\mathcal{F}\) then all the faces of the cone \(C\) are in \(\mathcal{F}\) * For \(C,D\in\mathcal{F}\), if \(C\cap D\neq\emptyset\) then \(C\cap D\) is a face of both \(C\) and \(D\). We say a fan \(\mathcal{G}\) is a **coarsening** of another fan \(\mathcal{F}\) (or \(\mathcal{F}\) is a refinement of \(\mathcal{G}\)) if every cone of \(\mathcal{F}\) is contained in a cone of \(\mathcal{G}\). 
Given a face \(F\) of a polytope \(P\subseteq\mathbb{R}I\), the **normal cone of \(F\)** is \[\mathcal{N}_{P}(F)=\{y\in(\mathbb{R}I)^{*}\mid y\text{ attains its maximal value on $P$ at every point in $F$}\}\] This means \(\mathcal{N}_{P}(F)\) is the cone of linear functionals \(y\) such that the \(y\)-maximal face of \(P\) contains the face \(F\). Define the **normal fan**\(\mathcal{N}_{P}\) as the fan consisting of normal cones \(\mathcal{N}_{P}(F)\) for every face of the polytope \(P\). This is indeed a fan [1]. We say that two polytopes are **normally equivalent** if they have the same normal fan. For any finite set \(I\), the **standard permutahedron**\(\pi_{I}\) in \(\mathbb{R}I\) is the convex hull of all points of the form \(\sum_{i\in I}f(i)e_{i}\in\mathbb{R}I\), where \(f:I\to[n]\) is a bijection and \(n=|I|\). The normal fan of \(\pi_{I}\) is called the **braid fan**, \(\mathcal{N}_{\pi_{I}}\). We can see this fan as the collection of faces of the **braid arrangement**\(\mathcal{B}_{I}\) in \((\mathbb{R}I)^{*}\)[1]. This arrangement consists of \(\binom{n}{2}\) hyperplanes defined by \(x_{i}=x_{j}\) for \(i,j\in I\) and \(i\neq j\). **Definition 2.4** ([1]).: A **generalized permutahedron** in \(\mathbb{R}I\) is a polytope whose normal fan is a coarsening of the braid fan \(\mathcal{N}_{\pi_{I}}\). Equivalently, it is a polytope whose edge directions have the form \(\{e_{i}-e_{j}:i\neq j\}\); i.e. the edges are all parallel to those of the permutahedron. An example of a generalized permutahedron is the **(Loday) associahedron**[1, 10]. For a finite set \(I\) with a linear ordering \(\ell\), the associahedron is defined as the Minkowski sum \(a_{\ell}=\sum_{i\leq_{\ell}j}\Delta_{[i,j]_{\ell}}\) where \(\Delta_{[i,j]_{\ell}}=\operatorname{conv}\{e_{k}:i\leq_{\ell}k\leq_{\ell}j\}\). Orbit polytopes form another subfamily of generalized permutahedra. Let \(p\in\mathbb{R}I\). Then the **orbit polytope**\(\mathcal{O}(p)\) is the convex hull of all permutations of the coordinates of \(p\). That is \(\mathcal{O}(p)=\operatorname{conv}(\sigma(p)\in\mathbb{R}I:\sigma\in S_{I})\) where \(S_{I}\) is the symmetric group on the set \(I\) which acts by permuting the coordinates. We also consider a related family of unbounded polyhedra. An **extended generalized permutahedron**\(P\subseteq\mathbb{R}I\) is a polyhedron whose normal fan is a coarsening of a subfan of the braid fan. An important family of extended generalized permutahedra are **poset cones**. The cone of a poset \(p\) is defined to be \(\operatorname{cone}(p)=\operatorname{cone}(e_{i}-e_{j}:i\geq j\text{ in $p$})\). In fact, poset cones are exactly the tangent cones of generalized permutahedra (Figure 2.1). The **tangent cone**\(\operatorname{cone}_{F}(P)\) at a face \(F\) of a polytope \(P\) is the cone generated by all of the directions of edges with at least one endpoint in \(F\), oriented to point out of \(F\). A **vertex cone** is a tangent cone at a vertex \(v\), denoted by \(\operatorname{cone}_{v}(P)\). ### The Hopf Monoid of Generalized Permutahedra We can create a vector species **GP** of generalized permutahedra by defining \(\textbf{GP}[I]\) to be a real vector space with a basis indexed by generalized permutahedra that live in \(\mathbb{R}I\), as in Definition 2.4. The relabeling maps in this species are induced by relabeling the basis of \(\mathbb{R}I\). Figure 2.1: The ray generators of \(\operatorname{cone}_{(2,1,3)}(\pi_{3})\), and the associated poset.
Given any two generalized permutahedra \(P\subseteq\mathbb{R}S\) and \(Q\subseteq\mathbb{R}T\) for finite sets \(S\) and \(T\), their **product** is given by \[\mu_{S,T}(P\otimes Q)=P\times Q=\{(p,q):p\in P,q\in Q\}\subseteq\mathbb{R}(S \sqcup T),\] which is again a generalized permutahedron. Note that this product is commutative in the sense of Hopf monoids. We further have the following theorem defining the coproduct \(\Delta_{S,T}(P)=P|_{S}\otimes P/_{S}\): **Theorem 2.5** ([7]).: _Let \(P\in\mathbf{GP}[I]\) and suppose \(I=S\sqcup T\). Write \(\mathbb{1}_{S}\) to be the linear functional \(\sum_{i\in S}e_{i}\). Then the \(\mathbb{1}_{S}\)-maximal face of \(P\) can be decomposed uniquely as a product of polytopes \(P|_{S}\times P/_{S}\), where \(P|_{S}\in\mathbf{GP}[S]\) and \(P/_{S}\in\mathbf{GP}[T]\)._ **Theorem 2.6** ([1]).: **GP** _is a commutative Hopf monoid under this product and coproduct._ We can also consider the Hopf monoid of generalized permutahedra up to normal equivalence, which is a quotient of \(\mathbf{GP}\) that we denote by \(\overline{\mathbf{GP}}\). For a given ground set \(I\), the quotient \(\overline{\mathbf{GP}}[I]\) is finite dimensional. We can see this since each polytope corresponds to a coarsening of the braid fan, and there are only finitely many coarsened fans. The Hopf monoid \(\mathbf{GP}^{+}\) is obtained by expanding \(\mathbf{GP}\) to include extended generalized permutahedra. We now introduce the important submonoids of permutahedra, associahedra, orbit polytopes, and posets. #### 2.4.1 The Hopf Monoid of Permutahedra We first consider the submonoid of \(\overline{\mathbf{GP}}\) that results from restricting our basis elements to standard permutahedra. Specifically, \(\overline{\mathbf{\Pi}}[I]\) is the vector space with a basis indexed by normal equivalence classes of products of standard permutahedra \(\pi_{S_{1}}\times\cdots\times\pi_{S_{k}}\subseteq\mathbb{R}I\) where \(S_{1}\sqcup\cdots\sqcup S_{k}=I\). The product and coproduct are induced from \(\overline{\mathbf{GP}}\). In particular, the coproduct on standard permutahedra \(\pi_{I}\) satisfies \[\Delta_{S,T}(\pi_{I})=\pi_{S}\otimes\pi_{T} \tag{2.2}\] for each decomposition \(I=S\sqcup T\)[1]. #### 2.4.2 The Hopf Monoid of Associahedra We can similarly consider the submonoid of \(\overline{\mathbf{GP}}\) which results from restricting our basis elements to be Loday associahedra up to normal equivalence. For a given ground set \(I\), define \[\overline{\mathbf{A}}[I]=\text{span}\{a_{\ell_{1}}\times\cdots\times a_{\ell_ {k}}\mid\ell_{i}\text{ is a linear order on }S_{i}\text{ for }I=S_{1}\sqcup\cdots\sqcup S_{k}\}.\] This again forms a submonoid of \(\overline{\mathbf{GP}}\), where the coproduct can be expressed as \[\Delta_{S,T}(a_{\ell})=a_{\ell|S}\otimes\left(a_{\ell|T_{1}}\times\cdots\times a _{\ell|T_{k}}\right). \tag{2.3}\] for each linear order \(\ell\) on \(I\) and any decomposition \(I=S\sqcup T\) where \(T=T_{1}\sqcup\cdots\sqcup T_{k}\) is a decomposition of \(T\) into maximal intervals of \(\ell\)[1]. Here \(\ell|U\) denotes the restriction of the linear order \(\ell\) to \(U\). **Example 2.7** (Coproduct in \(\overline{\mathbf{A}}\)).: Consider \(a_{1234}\in\overline{\mathbf{A}}[\{1,2,3,4\}]\). Then, \[\Delta_{\{2\},\{1,3,4\}}(a_{1234}) =a_{\ell|S}\otimes\left(a_{\ell|T_{1}}\times\cdots\times a_{\ell| T_{k}}\right)\] \[=a_{2}\otimes\left(a_{1}\times a_{34}\right).\] #### 2.4.3 The Hopf Monoid of Orbit Polytopes Let \(p\in\mathbb{R}I\). 
Recall that the **orbit polytope**\(\mathcal{O}(p)\) in \(\mathbb{R}I\) is the convex hull of all permutations of the coordinates of \(p\). **Example 2.8**.: Let \(p=(0,1,1)\in\mathbb{R}[3]\). Then \(\mathcal{O}(p)\) is \(\mathrm{conv}((0,1,1),(1,0,1),(1,1,0))\): Orbit polytopes are in correspondence with compositions of natural numbers. A **composition** of \(n\) is a way of writing \(n\) as an ordered sum of positive integers. For example, the distinct compositions of \(n=3\) are \(1+1+1\), \(1+2\), \(2+1\), and \(3\), which we denote as \((1,1,1)\), \((1,2)\), \((2,1)\), and \((3)\) respectively. In general, we denote compositions as \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\), meaning \(\lambda_{i}>0\) for each \(i\) and \(\sum_{i=1}^{k}\lambda_{i}=n\). We denote by \(|\lambda|\) the sum of all the parts of \(\lambda\). Let \(p\in\mathbb{R}I\) and write the coordinates in decreasing order. Construct a composition \(\lambda\) where \(\lambda_{i}\) is the number of times the \(i\)th largest coordinate appears in \(p\). We call \(\lambda\) the composition of the point \(p\). There is in fact a bijection between normal equivalence classes of orbit polytopes in \(\mathbb{R}I\) and compositions of \(n=|I|\); two orbit polytopes are normally equivalent if and only if their vertices have the same composition [16]. Given a composition \(\lambda\) with \(k\) parts, the associated normal equivalence class \(\mathcal{O}_{\lambda}\) is the normal equivalence class of \(\mathcal{O}(p)\) where \(p\) is a point defined as follows: \[p=(\underbrace{k,\ldots,k}_{\lambda_{1}\text{ entries}},\ldots,\underbrace{2,\ldots,2}_{\lambda_{k-1}\text{ entries}},\underbrace{1,\ldots,1}_{\lambda_{k}\text{ entries}}). \tag{2.4}\] Orbit polytopes are generalized permutahedra, and their normal equivalence classes \(\mathcal{O}_{\lambda}\) form a submonoid \(\overline{\mathbf{OP}}\) of \(\overline{\mathbf{GP}}\)[16]. We define \[\overline{\mathbf{OP}}[I]=\mathrm{span}\{\mathcal{O}_{\alpha_{1},S_{1}}\times\cdots\times\mathcal{O}_{\alpha_{k},S_{k}}:I=S_{1}\sqcup\cdots\sqcup S_{k}\text{ and }\alpha_{i}\text{ composition of }|S_{i}|\text{ for all }i\}.\] To describe the coproduct of \(\overline{\mathbf{OP}}\), we use two operations on compositions. Given two compositions \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) and \(\beta=(\beta_{1},\ldots,\beta_{\ell})\) define * the **concatenation**\(\alpha\cdot\beta\) as \((\alpha_{1},\ldots,\alpha_{k},\beta_{1},\ldots,\beta_{\ell})\); * the **near-concatenation**\(\alpha\odot\beta\) as \((\alpha_{1},\ldots,\alpha_{k-1},\alpha_{k}+\beta_{1},\beta_{2},\ldots,\beta_{\ell})\). **Theorem 2.9** ([16]).: _Given a decomposition \(I=S\sqcup T\) and a composition \(\alpha\) of \(n=|I|\), we can describe the coproduct of \(\overline{\mathbf{OP}}\) as_ \[\Delta_{S,T}(\mathcal{O}_{\alpha})=\mathcal{O}_{\alpha|_{S}}\otimes\mathcal{O}_{\alpha/_{S}} \tag{2.5}\] _where \(\alpha|_{S}\) and \(\alpha/_{S}\) are the unique pair of compositions satisfying_ 1. \(\alpha|_{S}\) _is a composition of_ \(|S|\) _and_ \(\alpha/_{S}\) _is a composition of_ \(|T|\)_, and_ 2. _either_ \(\alpha|_{S}\cdot\alpha/_{S}=\alpha\) _or_ \(\alpha|_{S}\odot\alpha/_{S}=\alpha\)_._ #### 2.4.4 The Hopf Monoid of Posets We have seen that posets can be represented as cones, and that these cones are exactly the tangent cones of generalized permutahedra. This induces a Hopf monoid structure on poset cones that realizes them as a submonoid of extended generalized permutahedra [1, Proposition 3.4.6].
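Before turning to the product and coproduct of \(\mathbf{P}\), we note that the composition splitting in Theorem 2.9 is straightforward to compute. The short Python sketch below (illustrative only; the function name is ours) returns \(\alpha|_{S}\) and \(\alpha/_{S}\) from \(\alpha\) and \(|S|\), together with a flag recording whether the split is a concatenation or a near-concatenation.

```python
def split_composition(alpha, s):
    """Given a composition alpha of n and 0 < s < n, return (alpha|_S, alpha/_S, kind)
    as in Theorem 2.9: the unique pair whose concatenation or near-concatenation is alpha."""
    assert 0 < s < sum(alpha)
    restriction, remaining = [], s
    for i, part in enumerate(alpha):
        if remaining == 0:
            return restriction, list(alpha[i:]), "concatenation"
        if part <= remaining:
            restriction.append(part)
            remaining -= part
        else:  # the i-th part is split in two, so alpha is the near-concatenation of the pieces
            restriction.append(remaining)
            return restriction, [part - remaining] + list(alpha[i + 1:]), "near-concatenation"
    return restriction, [], "concatenation"  # unreachable when s < n, kept for safety

# Examples with alpha = (2, 3, 1):
print(split_composition((2, 3, 1), 4))  # ([2, 2], [1, 1], 'near-concatenation')
print(split_composition((2, 3, 1), 5))  # ([2, 3], [1], 'concatenation')
```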
If \(p\) is a poset structure on a finite set \(I\) and \(I=S\sqcup T\), then the **restriction of \(p\) to \(S\)**, or \(p|_{S}\), consists of the relations in \(p\) that only relate elements of \(S\). We also say \(S\) is a **lower set** of the poset \(p\) when no element of \(T\) is less than an element of \(S\). In other words, if \(s\in S\) and \(x\leq s\) for some \(x\in I\), then \(x\in S\). Given a finite set \(I\), let \(\mathbf{P}[I]\) be the vector space with basis indexed by poset structures on \(I\). The relabeling maps in this vector species are induced by relabeling the posets. The product \(\mu\) and coproduct \(\Delta\) on \(\mathbf{P}\) are as follows: Let \(I=S\sqcup T\) be a decomposition, and define * the product \[\mu_{S,T}:\mathbf{P}[S]\otimes\mathbf{P}[T]\to\mathbf{P}[I],\] \[\mu_{S,T}(p_{1}\otimes p_{2})=p_{1}\sqcup p_{2}.\] This is the disjoint union of the posets, i.e. \(p_{1}\sqcup p_{2}\) is a poset on \(I\) with no relations between elements of \(S\) and \(T\). * the coproduct \[\Delta_{S,T}:\mathbf{P}[I]\to\mathbf{P}[S]\otimes\mathbf{P}[T],\] \[\Delta_{S,T}(p)=\begin{cases}p|_{S}\otimes p|_{T}&\text{if $S$ is a lower set of $p$,}\\ 0&\text{otherwise.}\end{cases}\] **Example 2.10** (Coproduct in \(\mathbf{P}\)).: Let us compute the coproduct of a poset with respect to two different decompositions: In the first expression, \(S=\{c,d\}\) is a lower set because no element in \(T=\{a,b\}\) is less than any element of \(S\), so we can express the coproduct as a tensor. In the second expression, \(\{b,d\}\) is not a lower set so the coproduct is \(0\). **Theorem 2.11** ([1]).: _Identifying each poset \(p\) with the corresponding poset cone \(\operatorname{cone}(p)\) embeds \(\mathbf{P}\) as a Hopf submonoid of extended generalized permutahedra._ ### The Cartier-Milnor-Moore Theorem A **Lie monoid in the category of vector species** is a pair \((\mathbf{L},\gamma)\) where the Lie bracket \(\gamma_{S,T}:\mathbf{L}[S]\otimes\mathbf{L}[T]\to\mathbf{L}[I]\) for each \(S\sqcup T=I\) is a family of functions that satisfy the following properties (see [3]): * **antisymmetry**, meaning that \(\gamma_{S,T}+\gamma_{T,S}\circ\beta_{S,T}=0\), and * the **Jacobi identity**, which states that \(\gamma\circ(\gamma\otimes\mathrm{id})\circ(\mathrm{id}+\xi+\xi^{2})=0\). Here, \(\beta\) is the symmetric braiding map, and \(\xi\) is the cyclic rotation of the tensor factors. **Theorem 2.12** (Proposition 1.26, [3]).: _We can regard any Hopf monoid \(\mathbf{H}\) in vector species as a Lie monoid where \(\gamma\) is the commutator \(\gamma_{S,T}(x\otimes y)=\mu_{S,T}(x\otimes y)-\mu_{T,S}(y\otimes x)\) for \(x\in\mathbf{H}[S],y\in\mathbf{H}[T]\)._ **Theorem 2.13** (Section 11.9.2, [3]).: _For any Hopf monoid \(\mathbf{H}\) in vector species, the primitives \(\mathcal{P}(\mathbf{H})\) form a Lie submonoid under the commutator._ The **free monoid** on a positive species \(\mathbf{p}\) is \(\mathcal{T}(\mathbf{p})\) where for a finite set \(I\), \[\mathcal{T}(\mathbf{p})[I]=\bigoplus_{I_{1}\sqcup\cdots\sqcup I_{k}=I}\mathbf{p}[I_{1}]\otimes\cdots\otimes\mathbf{p}[I_{k}].\] If \(\mathbf{p}\) is a Lie monoid, its **universal enveloping monoid**\(\mathcal{U}(\mathbf{p})\) is a connected Hopf monoid obtained as the quotient of the free monoid by the ideal generated by elements of the form \[x\cdot y-y\cdot x-\gamma_{S,T}(x\otimes y)\] for \(x\in\mathbf{p}[S]\) and \(y\in\mathbf{p}[T]\) (see [2, Section 8.2]). Theorem 2.14 tells us when this can be used to reconstruct a Hopf monoid from its Lie monoid of primitives.
**Theorem 2.14** (Cartier-Milnor-Moore [6, 12], Hopf monoid version [2]).: _Given a connected, cocommutative Hopf monoid \(\mathbf{H}\) over a field of characteristic \(0\) that is also connilpotent, the primitives form a Lie monoid \(\mathcal{P}(\mathbf{H})\) and the inclusion of \(\mathcal{P}(\mathbf{H})\) into \(\mathbf{H}\) extends into an isomorphism of Hopf monoids \(\varphi:\mathcal{U}(\mathcal{P}(\mathbf{H}))\to\mathbf{H}\)._ Since \(\mathbf{GP}\) (and hence its submonoids) is commutative, the dual \(\mathbf{GP}^{*}\) (and its submonoids) is cocommutative by Proposition 2.1, and Theorem 2.14 applies. ## 3 Lie Algebras of Primitives When we dualize \(\overline{\mathbf{GP}}\) and its Hopf submonoids from Section 2.4, we obtain cocommutative Hopf monoids of particular interest. Theorem 2.14 tells us that these are determined by their primitive elements, so in this section we describe these primitive elements and their Lie monoid structure. **Proposition 3.1**.: _The primitives of the dual Hopf monoid of posets, \(\mathbf{P}^{*}\), are the duals of connected posets._ Proof.: By Proposition 2.2, the primitives of \(\mathbf{P}^{*}\) are the duals of indecomposables in \(\mathbf{P}\). The product of two posets is their disjoint union, so the indecomposable elements of \(\mathbf{P}\) are the connected posets. In fact, the Hopf monoids \(\overline{\mathbf{GP}}\), \(\overline{\mathbf{\Pi}}\), \(\overline{\mathbf{A}}\), and \(\overline{\mathbf{OP}}\) all have a notion of "connected" elements that cannot be expressed as nontrivial products of other elements. We use this to determine the primitives of \(\overline{\mathbf{GP}}\), and the description of the primitives of its submonoids follows as a corollary. **Proposition 3.2**.: _The primitives of \(\overline{\mathbf{GP}}^{*}\) are the duals to the \((|I|-1)\)-dimensional generalized permutahedra in \(\mathbb{R}I\)._ Proof.: Like with the proof of Proposition 3.1, we find the primitives in \(\overline{\mathbf{GP}}^{*}\) by taking the duals of the indecomposable elements of \(\overline{\mathbf{GP}}\). These are generalized permutahedra in \(\mathbb{R}I\) that cannot be expressed as products of two or more generalized permutahedra, which are exactly those of codimension 1. **Corollary 3.3**.: _The primitives of \(\overline{\mathbf{\Pi}}^{*}\), \(\overline{\mathbf{A}}^{*}\), and \(\overline{\mathbf{OP}}^{*}\) are as follows:_ 1. _For_ \(\overline{\mathbf{\Pi}}^{*}[I]\)_, these are_ \(\pi_{I}^{*}\)_._ 2. _For_ \(\overline{\mathbf{A}}^{*}[I]\)_, these are_ \(a_{\ell}^{*}\) _for any linear order_ \(\ell\) _on_ \(I\)_._ 3. _For_ \(\overline{\mathbf{OP}}^{*}[I]\)_, these are_ \(\mathcal{O}_{\alpha,I}^{*}\) _where_ \(\alpha\) _is a composition of_ \(|I|\)_._ ### Primitives of Dual Permutahedra The coproduct in \(\overline{\mathbf{\Pi}}\) is \(\Delta_{S,T}(\pi_{I})=\pi_{S}\otimes\pi_{T}\). This gives us the following theorems. **Theorem 3.4**.: _Let \(I=S\sqcup T\), then the product of \(\pi_{S}^{*}\in\mathcal{P}(\overline{\mathbf{\Pi}}^{*})[S]\) and \(\pi_{T}^{*}\in\mathcal{P}(\overline{\mathbf{\Pi}}^{*})[T]\) in \(\mathcal{P}(\overline{\mathbf{\Pi}}^{*})\) is \(\mu_{S,T}^{*}(\pi_{S}^{*}\otimes\pi_{T}^{*})=\pi_{S\sqcup T}^{*}+(\pi_{S} \times\pi_{T})^{*}\)._ Proof.: As we have a single tensor in the coproduct in \(\overline{\mathbf{\Pi}}\) we can define the dual product as \[\mu_{S,T}^{*}(\pi_{S}^{*}\otimes\pi_{T}^{*})=\sum_{\Delta_{S,T}(z)=\pi_{S} \otimes\pi_{T}}z^{*} \tag{3.1}\] in \(\overline{\mathbf{\Pi}}^{*}\). 
Geometrically speaking, this is a formal sum over the duals of all permutahedra and products of permutahedra whose \(\mathbbm{1}_{S}\)-maximal face is the product \(\pi_{S}\times\pi_{T}\). The only possibilities are \(\pi_{S\sqcup T}\) and \(\pi_{S}\times\pi_{T}\), so \(\mu_{S,T}^{*}(\pi_{S}^{*}\otimes\pi_{T}^{*})=(\pi_{S}\times\pi_{T})^{*}+\pi_{ S\sqcup T}^{*}\). This leads us to the Lie bracket for \(\mathcal{P}(\overline{\mathbf{\Pi}}^{*})\). In this case, the Lie bracket is identically zero because the product in \(\mathcal{P}(\overline{\mathbf{\Pi}}^{*})\) is commutative. **Theorem 3.5**.: _The Lie monoid of primitives of \(\overline{\mathbf{\Pi}}^{*}\) has Lie bracket given by the commutator \(\gamma_{S,T}:\mathcal{P}(\overline{\mathbf{\Pi}}^{*})[S]\otimes\mathcal{P}( \overline{\mathbf{\Pi}}^{*})[T]\rightarrow\mathcal{P}(\overline{\mathbf{\Pi} }^{*})[I]\) where \(\gamma_{S,T}(\pi_{S}^{*}\otimes\pi_{T}^{*})=0\)._ Proof.: From Theorem 3.4, it follows that the commutator is \[\gamma_{S,T}(\pi_{S}^{*}\otimes\pi_{T}^{*}) =\mu_{S,T}^{*}(\pi_{S}^{*}\otimes\pi_{T}^{*})-\mu_{T,S}^{*}(\pi_ {T}^{*}\otimes\pi_{S}^{*})\] \[=(\pi_{S}\times\pi_{T})^{*}+\pi_{S\sqcup T}^{*}-(\pi_{T}\times \pi_{S})^{*}-\pi_{T\sqcup S}^{*}\] \[=(\pi_{S}\times\pi_{T})^{*}-(\pi_{T}\times\pi_{S})^{*}\] \[=0.\] ### Primitives of Dual Associahedra and the Witt Lie Algebra Next we find the commutator for \(\mathcal{P}(\overline{\mathbf{A}}^{*})\). In order to understand the product in \(\mathcal{P}(\overline{\mathbf{A}}^{*})\), we will need a notion of block insertions of linear orders. For disjoint sets \(S\) and \(T\), let \(\ell\) be a linear order \(s_{1}<_{\ell}s_{2}<_{\ell}\cdots<_{\ell}s_{|S|}\) on a finite set \(S\) and \(m\) be a linear order \(t_{1}<_{m}t_{2}<_{m}\cdots<_{m}t_{|T|}\) on a finite set \(T\). We say \(b\) is a **block insertion** of \(m\) into \(\ell\) if there exists some index \(i\) with \(0\leq i\leq|S|\) such that \(b\) is the linear order \(s_{1}<_{b}\cdots<_{b}s_{i}<_{b}t_{1}<_{b}\cdots<_{b}t_{|T|}<_{b}s_{i+1}<_{b} \cdots<_{b}s_{|S|}\). In other words, \(m\) is inserted into \(\ell\) at index \(i\) (where \(i=0\) means \(t_{1}\) is the least element in the order and \(t_{|T|}<s_{1}\)). Note that there are \(|S|+1\) ways to block insert \(m\) into \(\ell\). **Theorem 3.6**.: _Let \(I=S\sqcup T\); then the product of two dual associahedra \(a_{\ell}^{*}\in\mathcal{P}(\overline{\mathbf{A}}^{*})[S]\) and \(a_{m}^{*}\in\mathcal{P}(\overline{\mathbf{A}}^{*})[T]\) in \(\mathcal{P}(\overline{\mathbf{A}}^{*})\) is_ \[\mu_{S,T}^{*}(a_{\ell}^{*}\otimes a_{m}^{*})=\sum_{\begin{subarray}{c}b\text { a block insertion}\\ \text{ of m into }\ell\end{subarray}}a_{b}^{*}+(a_{\ell}\times a_{m})^{*}.\] Proof.: Let \(I=S\sqcup T\) be some decomposition of a finite set \(I\). The product in the dual Hopf monoid \(\overline{\mathbf{A}}^{*}\) can be understood as \[\mu_{S,T}^{*}(a_{\ell}^{*}\otimes a_{m}^{*})=\sum_{\Delta_{S,T}(z)=a_{\ell} \otimes a_{m}}z^{*} \tag{3.2}\] where \(\ell\) is a linear order on \(S\) and \(m\) is a linear order on \(T\). We will first consider when \(z\) is not a product of associahedra, meaning \(z=a_{b}\) for some linear order \(b\) on \(I\). We have that \(\Delta_{S,T}(a_{b})=a_{b|S}\otimes\left(a_{b|T_{1}}\times\cdots\times a_{b|T_ {k}}\right)\) where \(T=T_{1}\sqcup\cdots\sqcup T_{k}\) is the decomposition of \(T\) into maximal intervals of \(b\) from (2.3). 
Suppose that \[a_{b|S}\otimes\left(a_{b|T_{1}}\times\cdots\times a_{b|T_{k}}\right)=a_{\ell} \otimes a_{m}.\] This means that \(a_{b|S}=a_{\ell}\) and \(a_{b|T_{1}}\times\cdots\times a_{b|T_{k}}=a_{m}\), implying that the product in the latter equation consists of only one term, since an associahedron can never be equal to the product of two or more associahedra. So \(b|T\) should have a single maximal interval, which means that \(b\) must be a block insertion of \(m\) into \(\ell\). We will now show the only product of associahedra whose coproduct in \(\overline{\mathbf{A}}\) results in \(a_{\ell}\otimes a_{m}\) must be \(a_{\ell}\times a_{m}\). Suppose we have a product of two arbitrary associahedra \(a_{o}\times a_{p}\) with \(o\) a linear order on \(S\) and \(p\) a linear order on \(T\). Then the compatibility property of Figure 3.1: An example of block insertions the product and the coproduct of \(\overline{\mathbf{A}}\) results in the following: \[\Delta_{S,T}(a_{o}\times a_{p}) =\Delta_{S,T}(\mu_{S,T}(a_{o},a_{p}))\] \[=(\mu_{S,\emptyset}\otimes\mu_{\emptyset,T})(\mathrm{id}_{S} \otimes\beta_{\emptyset,\emptyset}\otimes\mathrm{id}_{T})(\Delta_{S,\emptyset }(a_{o})\otimes\Delta_{\emptyset,T}(a_{p}))\] \[=(\mu_{S,\emptyset}\otimes\mu_{\emptyset,T})(a_{o}\otimes 1 \otimes 1\otimes a_{p})\] \[=a_{o}\otimes a_{p}.\] So \(\Delta_{S,T}(a_{o}\times a_{p})=a_{\ell}\otimes a_{m}\) can only happen when \(a_{o}=a_{\ell}\) and \(a_{p}=a_{m}\). We do not get any terms where \(z\) is a product of more than two associahedra because \(a_{\ell}\) and \(a_{m}\) are not products of associahedra. **Theorem 3.7**.: _Let \(\ell\) and \(m\) be linear orders on \(S\) and \(T\) respectively. The Lie bracket on \(\mathcal{P}(\overline{\mathbf{A}}^{*})\) is given by the commutator \(\gamma_{S,T}:\mathcal{P}(\overline{\mathbf{A}}^{*})[S]\otimes\mathcal{P}( \overline{\mathbf{A}}^{*})[T]\rightarrow\mathcal{P}(\overline{\mathbf{A}}^{*}) [I]\) which has the form_ \[\gamma_{S,T}(a_{\ell}^{*}\otimes a_{m}^{*})=\sum_{\begin{subarray}{c}b\text{ a block insertion}\\ \text{of $m$ into $\ell$}\end{subarray}}a_{b}^{*}-\sum_{\begin{subarray}{c}c \text{ a block insertion}\\ \text{of $\ell$ into $m$}\end{subarray}}a_{c}^{*}.\] Moreover, we can consider the Hopf algebra \(A^{*}\) obtained by applying the Fock functor to \(\overline{\mathbf{A}}^{*}\) as mentioned in Section 2.1. The basis elements of \(A^{*}\) have the form \(a_{n}^{*}\) where \(a_{n}\) is the isomorphism class of all associahedra \(a_{I}\) where \(I=n\). **Theorem 3.8**.: _The Lie algebra of primitives \(\mathcal{P}(A^{*})\) is generated by elements \(a_{1}^{*},a_{2}^{*},\ldots\) with the Lie bracket \([a_{s}^{*},a_{t}^{*}]=(s-t)a_{s+t}^{*}\)._ Proof.: Let \(s,t\in\mathbb{N}\) and consider the dual associahedra \(a_{s}^{*},a_{t}^{*}\in A^{*}\). By taking isomorphism classes we can see that the product from Theorem 3.6 extends to the multiplication \(a_{s}^{*}\cdot a_{t}^{*}=(s+1)a_{s+t}^{*}+(a_{s}\times a_{t})^{*}\) in \(A^{*}\). Likewise \(a_{t}^{*}\cdot a_{s}^{*}=(t+1)a_{t+s}^{*}+(a_{t}\times a_{s})^{*}\). 
So the Lie bracket is \[[a_{s}^{*},a_{t}^{*}] =a_{s}^{*}\cdot a_{t}^{*}-a_{t}^{*}\cdot a_{s}^{*}\] \[=(s+1)a_{s+t}^{*}+(a_{s}\times a_{t})^{*}-\big{(}(t+1)a_{t+s}^{*} +(a_{t}\times a_{s})^{*}\big{)}\] \[=sa_{s+t}^{*}+a_{s+t}^{*}+(a_{s}\times a_{t})^{*}-ta_{t+s}^{*}-a_ {t+s}^{*}-(a_{t}\times a_{s})^{*}\] \[=sa_{s+t}^{*}-ta_{t+s}^{*}+a_{s+t}^{*}-a_{s+t}^{*}+(a_{s}\times a _{t})^{*}-(a_{t}\times a_{s})^{*}\] \[=(s-t)a_{s+t}^{*}.\] This follows from the fact that \(a_{s}\times a_{t}\) and \(a_{t}\times a_{s}\) are isomorphic. The Lie algebra above looks similar to the Witt algebra, which is defined as follows (see [15]). Take \(\mathbb{C}[z,z^{-1}]\) to be the ring of Laurent polynomials with variable \(z\). A **derivation** is a linear map \(D:\mathbb{C}[z,z^{-1}]\rightarrow\mathbb{C}[z,z^{-1}]\) that satisfies the rule \(D(fg)=D(f)g+fD(g)\). The **Witt algebra**\(W\) is the space of derivations of \(\mathbb{C}[z,z^{-1}]\). It has a basis given by \(L_{n}=-z^{n+1}\frac{d}{dz}\) for \(n\in\mathbb{Z}\) and is a Lie algebra with \([L_{m},L_{n}]=(m-n)L_{m+n}\) for \(m,n\in\mathbb{Z}\). **Theorem 3.9**.: _The Lie algebra of primitives \(\mathcal{P}(\overline{\mathbf{A}}^{*})\) is isomorphic to the positive part of the Witt Lie algebra._ ### Primitives of Dual Orbit Polytopes We next consider \(\mathcal{P}(\overline{\mathbf{OP}}^{*})\) and the commutator \(\gamma\) in this context. **Theorem 3.10**.: _The product of the dual Hopf monoid of orbit polytopes is_ \[\mu^{*}_{S,T}(\mathcal{O}^{*}_{\alpha}\otimes\mathcal{O}^{*}_{\beta})=\begin{cases} \mathcal{O}^{*}_{\alpha\cdot\beta}+\mathcal{O}^{*}_{\alpha\odot\beta}+( \mathcal{O}_{\alpha}\times\mathcal{O}_{\beta})^{*}&\text{if $\alpha$ or $\beta$ have more than one part,}\\ \mathcal{O}^{*}_{\alpha\cdot\beta}+\mathcal{O}^{*}_{\alpha\odot\beta}&\text{ otherwise,}\end{cases}\] _for \(\alpha\) a composition of \(|S|\) and \(\beta\) a composition of \(|T|\), so that \(\mathcal{O}_{\alpha}\in\mathcal{P}(\overline{\mathbf{OP}})[S],\mathcal{O}_{ \beta}\in\mathcal{P}(\overline{\mathbf{OP}})[T]\)._ Proof.: Let \(I=S\sqcup T\) be some decomposition of a finite set \(I\). The product in the dual Hopf monoid \(\overline{\mathbf{OP}}^{*}\) is \[\mu^{*}_{S,T}(\mathcal{O}^{*}_{\alpha}\otimes\mathcal{O}^{*}_{\beta})=\sum_{ \Delta_{S,T}(z)=\mathcal{O}_{\alpha}\otimes\mathcal{O}_{\beta}}z^{*}.\] We can first consider the case when \(z\in\overline{\mathbf{OP}}[I]\) is a single orbit polytope. So say \(z=\mathcal{O}_{\lambda}\) for some composition \(\lambda\) of \(|I|\). This means \(\Delta_{S,T}(\mathcal{O}_{\lambda})=\mathcal{O}_{\alpha}\otimes\mathcal{O}_{\beta}\) which holds if and only if \(\mathcal{O}_{\lambda|_{S}}\otimes\mathcal{O}_{\lambda/_{S}}=\mathcal{O}_{ \alpha}\otimes\mathcal{O}_{\beta}\) where \(\lambda|_{S}\) and \(\lambda/_{S}\) are the unique pair of compositions satisfying 1. \(\lambda|_{S}\) is a composition of \(|S|\) and \(\lambda/_{S}\) is a composition of \(|T|\), and 2. either \(\lambda|_{S}\cdot\lambda/_{S}=\lambda\) or \(\lambda|_{S}\odot\lambda/_{S}=\lambda\). The first condition means that our compositions must be the same, \(\lambda|_{S}=\alpha\) and \(\lambda/_{S}=\beta\). The second condition tells us that the only compositions which \(\lambda\) can be are \(\alpha\cdot\beta\) or \(\alpha\odot\beta\). So we have the terms \(\mathcal{O}^{*}_{\alpha\cdot\beta}\) and \(\mathcal{O}^{*}_{\alpha\odot\beta}\) appearing in the expression for \(\mu^{*}_{S,T}(\mathcal{O}^{*}_{\alpha}\otimes\mathcal{O}^{*}_{\beta})\). 
Now suppose \(z\in\overline{\mathbf{OP}}[I]\) is a product of two or more orbit polytopes, i.e. \(I=S_{1}\sqcup\cdots\sqcup S_{k}\) and \(z=\mathcal{O}_{\lambda_{1}}\times\cdots\times\mathcal{O}_{\lambda_{n}}\) for \(\lambda_{i}\) a composition of \(|S_{i}|\). If \(\alpha\) and \(\beta\) have more than one part, then \(\dim\mathcal{O}_{\alpha}=|S|-1\) and \(\dim\mathcal{O}_{\beta}=|T|-1\). Moreover \(\dim z\leq|I|-k\) since the dimension of \(\mathcal{O}_{\lambda_{i}}\) is less than or equal to \(|S_{i}|-1\). In order for the inequality \[\Delta_{S,T}(\mathcal{O}_{\lambda_{1},S_{1}}\times\cdots\times\mathcal{O}_{ \lambda_{n},S_{k}})=\mathcal{O}_{\alpha}\otimes\mathcal{O}_{\beta}\] to hold, we need \(\dim z\geq\dim\mathcal{O}_{\alpha}+\dim\mathcal{O}_{\beta}\), meaning \(k\leq 2\). We have already explored the case \(k=1\). For \(k=2\), this means \(z=\mathcal{O}_{\lambda_{1}}\times\mathcal{O}_{\lambda_{2}}\). The compatibility axiom gives \[\Delta_{S,T}(\mathcal{O}_{\lambda_{1}}\times\mathcal{O}_{\lambda_{ 2}}) =\Delta_{S,T}(\mu_{S,T}(\mathcal{O}_{\lambda_{1}}\otimes\mathcal{O}_ {\lambda_{2}}))\] \[=(\mu_{S,\emptyset}\otimes\mu_{\emptyset,T})(\operatorname{id}_{ S}\otimes\beta_{\emptyset,\emptyset}\otimes\operatorname{id}_{T})(\Delta_{S, \emptyset}(\mathcal{O}_{\lambda_{1}})\otimes\Delta_{\emptyset,T}(\mathcal{O}_{ \lambda_{2}}))\] \[=(\mu_{S,\emptyset}\otimes\mu_{\emptyset,T})(\mathcal{O}_{ \lambda_{1}}\otimes 1\otimes 1\otimes\mathcal{O}_{\lambda_{2}})\] \[=\mathcal{O}_{\lambda_{1}}\otimes\mathcal{O}_{\lambda_{2}}.\] Thus \(\alpha=\lambda_{1}\) and \(\beta=\lambda_{2}\). This gives rise to the term \((\mathcal{O}_{\alpha}\times\mathcal{O}_{\beta})^{*}\) in the expression for \(\mu^{*}_{S,T}(\mathcal{O}_{\alpha}\otimes\mathcal{O}^{*}_{\beta})\) Since \(\alpha\) and \(\beta\) have more than one part, the terms \(\mathcal{O}_{\alpha\cdot\beta}\) and \(\mathcal{O}_{\alpha\odot\beta}\) cannot be expressed as products of orbit polytopes, so we have found a third distinct term. If either of \(\alpha\) or \(\beta\) have one part, then the corresponding orbit polytope is just a point. In this case, \((\mathcal{O}_{\alpha}\times\mathcal{O}_{\beta})^{*}=\mathcal{O}^{*}_{\alpha \odot\beta}\), so \(\mu_{S,T}\) consists of only two terms. With this product we can now find the lie bracket on the primitives \(\mathcal{P}(\overline{\mathbf{OP}}^{*})\). **Theorem 3.11**.: _The commutator \(\gamma:\mathcal{P}(\overline{\mathbf{OP}}^{*})[S]\otimes\mathcal{P}(\overline{ \mathbf{OP}}^{*})[T]\to\mathcal{P}(\overline{\mathbf{OP}}^{*})[I]\) is_ \[\gamma(\mathcal{O}^{*}_{\alpha}\otimes\mathcal{O}^{*}_{\beta})=\mathcal{O}^{*}_{ \alpha\cdot\beta}-\mathcal{O}^{*}_{\beta\cdot\alpha}+\mathcal{O}^{*}_{\alpha \odot\beta}-\mathcal{O}^{*}_{\beta\odot\alpha}.\] The Brion Map on Generalized Permutahedra So far we described the internal structure of our Hopf monoids in Section 2 and Section 3. We now turn to the external perspective of studying a Hopf monoid by exploring Hopf monoid morphisms, in particular the Brion map. We compute this map over the Hopf monoids of permutahedra, associahedra, and orbit polytopes. The Brion map was introduced in [1] and described explicitly in [4]. Motivated by Brion's theorem (see [5]), the Brion map connects a generalized permutahedron with posets coming from the vertices, extending the correspondence described in Section 2.3. 
**Definition 4.1**.: The **Brion map**\(B:\overline{\mathbf{GP}}[I]\to\mathbf{P}[I]\) is defined as \[B(P)=\sum_{v\text{ vertex of }P}\operatorname{poset}_{v}(P)\] where \(\operatorname{poset}_{v}(P)\) is the poset corresponding to the tangent cone of the generalized permutahedron \(P\) at the vertex \(v\). It was shown in [4] that this is a morphism of Hopf monoids. In this section, we give explicit expressions for the Brion map when the domain is restricted to the Hopf monoids of permutahedra, associahedra, and orbit polytopes. ### The Brion Map on Permutahedra We will first consider the Hopf monoid of permutahedra \(\overline{\mathbf{\Pi}}\). A **chain**\(c\) over the set \(I\) is a poset where any two elements are comparable; for any \(x\in I\) and any \(y\in I\) we have \(x\leq_{c}y\) or \(y\leq_{c}x\). **Theorem 4.2**.: _The Brion map on the Hopf monoid of permutahedra is_ \[B(\pi_{I})=\sum_{c\text{ chain over }I}c.\] Proof.: Suppose \(|I|=n\) and consider a vertex \(v\) of the permutahedron, which has the general form \[v=(n,n-1,\ldots,1)=ne_{i_{1}}+\cdots+(n-(j-1))e_{i_{j}}+(n-j)e_{i_{j}+1}+ \cdots+1e_{i_{n}}\] where \(I=\{i_{1},\ldots,i_{n}\}\). The vertices of \(\pi_{I}\) neighboring \(v\) are those obtained by swapping the coefficient of \(e_{i_{j}}\) and \(e_{i_{j+1}}\) in this expression for some \(1\leq j\leq n-1\). Thus the vector obtained by subtracting \(v\) from one of its neighboring vertices \(w\) is \(w-v=e_{i_{j+1}}-e_{i_{j}}\). This corresponds to the relation \(i_{j+1}\geq i_{j}\) in \(\operatorname{poset}_{v}(\pi_{I})\). Thus \(\operatorname{poset}_{v}(\pi_{I})\) is simply the chain \(i_{n}\geq i_{n-1}\geq\ldots\geq i_{2}\geq i_{1}\) over the set \(I\). Since the vertices of \(\pi_{I}\) are exactly the orbit of \(v\) under the action of the symmetric group \(S_{I}\) on \(\mathbb{R}I\) by permuting coordinates, each chain over \(I\) appears as a poset corresponding to a vertex of \(\pi_{I}\) exactly once. All chains on \(n\) elements belong to the equivalence class \(c_{n}\) of unlabelled chains. So after applying the Fock functor to \(\overline{\mathbf{\Pi}}\) to pass to the Hopf algebra \(\Pi\) of standard permutahedra, the Brion map becomes \[B(\pi_{n})=n!\,c_{n}.\] ### The Brion Map on Associahedra We define a **rooted binary tree** recursively. We say that the empty set is a binary tree. Otherwise, a binary tree has a root vertex \(v\), a left sub-tree \(T_{1}\), and a right sub-tree \(T_{2}\) which are both rooted binary trees. We will follow the convention of Loday in [10] and [11] which depicts a rooted binary tree with the root at the bottom. Figure 4.1 shows two visual representations of a rooted binary tree; we will make use of both. In the upper representation, each vertex is connected by an edge to both its left and right sub-trees, regardless of whether or not a sub-tree is empty. Edges connecting to empty sub-trees are called "leaves" and this depiction is the "leaves representation." The lower representation is obtained by deleting the leaves, leaving behind only the internal vertices and edges of the rooted binary tree. We call this the "pruned representation." If a rooted binary tree has \(n\) internal vertices, then it has \(n+1\) leaves. Let \(Y_{n}\) denote the set of all rooted binary trees with \(n\) internal vertices. Rooted binary trees are in correspondence with the vertices of associahedra. The following construction is by Loday [10]. 
For a given linear order \(\ell=\ell_{1}<\ell_{2}<\cdots<\ell_{n}\), introduce a new symbolic minimal element \(\ell_{0}\). Label the leaves of \(T\) with \(\ell_{0},\ell_{1}\ldots,\ell_{n}\) going from left to right. This induces a canonical ordering of the \(n\) internal vertices of \(T\); internal vertex \(\ell_{i}\) is the internal vertex between leaves \(\ell_{i-1}\) and \(\ell_{i}\). This is known as the **binary search labelling** with respect to the linear order \(\ell\)[13, Section 8.1]. To each tree \(T\in Y_{n}\), we can associate an integral point \(v_{\ell}(T)\in\mathbb{R}^{n}\) as follows. Define \(l_{i}\) and \(r_{i}\) to be the number of leaves that are descendants on the left side and right side of internal vertex \(\ell_{i}\) in \(T\), respectively. We can now construct the point \(v_{\ell}(T)=(l_{1}r_{1},\ldots,l_{i}r_{i},\ldots,l_{n}r_{n})\). As an example, using the tree from Figure 4.2 we have \(v(T)=(2,1,6,1)\) corresponding to the standard ordering \(1<2<3<4\). The **associahedron**\(a_{\ell}\) is the convex hull of \(\{v_{\ell}(T):T\in Y_{n}\}\)[10, 13]. Moreover, \(Y_{n}\) has a poset structure called the Tamari order [10]. Given \(A,B\in Y_{n}\), we have a covering relation \(B\leq A\) if \(A\) can be obtained from \(B\) by replacing locally a sub-tree of \(B\) of the form with. This operation and its inverse are called **tree rotations**. Notice that the local sub-tree pictured here has exactly one internal edge, both before and after performing a tree rotation. Hence each internal edge of a rooted binary tree \(T\) corresponds to a unique tree rotation that can be performed on \(T\). Figure 4.2: Labeling the internal vertices. Figure 4.1: The rooted binary trees with 3 internal vertices represented in two ways. Each covering relation in the Tamari order corresponds to an edge of the associahedron [11]. Thus given the rooted binary tree associated to a vertex of the associahedron, the neighboring vertices are the ones associated to rooted binary trees obtained by performing a tree rotation. From this we can determine the edge directions incident to each vertex of the associahedron. In order to do this, we need to analyze how tree rotations affect the ordering of the internal vertices of the rooted binary tree. Consider some internal edge of a tree \(T\) with vertices \(\ell_{i}\) and \(\ell_{j}\), where \(\ell_{i}\) is a child of \(\ell_{j}\). If \(\ell_{i}\) is a left child of \(\ell_{j}\), then the tree has the following form locally around \(\ell_{i}\) and \(\ell_{j}\), before and after performing a tree rotation: Similarly, if \(\ell_{i}\) is a right child of \(\ell_{j}\), then the tree before and after the rotation has the form: Figure 4.3 shows how the labelling of the internal vertices changes from all possible rotations for a particular rooted binary tree. Denote by \(p_{\ell}(T)\) the poset whose diagram is obtained by taking the binary search labeling on the internal vertices of the rooted binary tree \(T\), deleting the leaves to get the Figure 4.4: From rooted binary trees, to labeling, to posets Figure 4.3: Bijection between internal edges of a tree and rotations pruned representation, then forgetting the distinction between left and right sub-trees. This associates a poset on \(\{1,2,\ldots,n\}\) to each \(T\in Y_{n}\), that is to say, to each vertex of the associahedron. For instance, the vertices of \(a_{4}\) correspond to the posets in Figure 4.4. 
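Loday's construction is entirely combinatorial, so it is easy to check on small cases by machine. The following Python sketch is only an illustration (it is not part of the original text; the tree encoding and the helper names `trees`, `leaves`, and `loday_point` are our own choices): it enumerates \(Y_{n}\), computes the point \(v_{\ell}(T)\) of each tree with respect to the standard order \(1<2<\cdots<n\), and, for \(n=3\), recovers the \(C_{3}=5\) vertices of the associahedron, all lying on the hyperplane \(x_{1}+\cdots+x_{n}=n(n+1)/2\).

```python
from math import comb

def trees(n):
    """All rooted binary trees with n internal vertices; None encodes the empty tree."""
    if n == 0:
        return [None]
    return [(left, right)
            for k in range(n)
            for left in trees(k)
            for right in trees(n - 1 - k)]

def leaves(t):
    """Number of leaves of a rooted binary tree (the empty tree counts as one leaf)."""
    return 1 if t is None else leaves(t[0]) + leaves(t[1])

def loday_point(t):
    """Loday coordinates (l_1 r_1, ..., l_n r_n); internal vertices are visited in
    in-order, which is exactly the binary search labelling for 1 < 2 < ... < n."""
    coords = []
    def visit(node):
        if node is None:
            return
        visit(node[0])
        coords.append(leaves(node[0]) * leaves(node[1]))
        visit(node[1])
    visit(t)
    return tuple(coords)

n = 3
points = sorted(loday_point(t) for t in trees(n))
print(points)                                    # the 5 vertices of the associahedron a_3
print(len(points) == comb(2 * n, n) // (n + 1))  # their number is the Catalan number C_n
print({sum(p) for p in points})                  # every point satisfies x_1+...+x_n = n(n+1)/2
```

Recording, in the same traversal, which internal vertex is the parent of which recovers the covering relations of the posets \(p_{\ell}(T)\) just described.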
Notice that these posets have a minimum element and are connected, with each element covered by at most two other elements. They also depend on the linear order \(\ell\) that is used. We call posets that can be obtained in this way **rooted binary tree (RBT) posets** and denote the set of all such posets \(\mathrm{RBT}_{\ell}=\{p_{\ell}(T):T\in Y_{n}\}\) for the linear order \(\ell\). The following result is known ([13, Proposition 7.10]), but we include a proof for reference.

**Theorem 4.3**.: _The Brion map on the Hopf monoid of associahedra is given by_

\[B(a_{\ell})=\sum_{p\in\mathrm{RBT}_{\ell}}p. \tag{4.1}\]

Proof.: We will show that the poset \(\operatorname{poset}_{v_{\ell}(T)}(a_{\ell})\) corresponding to the tangent cone of \(a_{\ell}\) at the vertex \(v_{\ell}(T)\) is the poset \(p_{\ell}(T)\). Recall that the interior edges of \(T\) are in bijection with tree rotations in \(T\). So consider an internal edge of \(T\), say between the vertices labelled \(\ell_{i}\) and \(\ell_{j}\) by the binary search labelling on \(T\). Without loss of generality suppose \(\ell_{i}\) is a child of \(\ell_{j}\). Then there are two cases: \(\ell_{i}\) is either a left child or a right child.

1. Suppose \(\ell_{i}\) is a left child of \(\ell_{j}\), so that, locally around \(\ell_{i}\) and \(\ell_{j}\), the tree \(T\) and the rotated tree \(T^{\prime}\) are as pictured in Figure 4.5. From this relation between the leaves of \(T\) and \(T^{\prime}\) we can see that
\[v_{\ell}(T)=(l_{i}r_{i})e_{i}+(l_{j}r_{j})e_{j}+\sum_{k\in[n]\setminus\{i,j\}}l_{k}r_{k}e_{k}=(l_{i}r_{i})e_{i}+((l_{i}+r_{i})r_{j})e_{j}+\sum_{k\in[n]\setminus\{i,j\}}l_{k}r_{k}e_{k}\]
and
\[v_{\ell}(T^{\prime})=(l^{\prime}_{i}r^{\prime}_{i})e_{i}+(l^{\prime}_{j}r^{\prime}_{j})e_{j}+\sum_{k\in[n]\setminus\{i,j\}}l^{\prime}_{k}r^{\prime}_{k}e_{k}=(l_{i}(r_{i}+r_{j}))e_{i}+(r_{i}r_{j})e_{j}+\sum_{k\in[n]\setminus\{i,j\}}l_{k}r_{k}e_{k}.\]
Subtracting these gives the edge direction oriented from \(v_{\ell}(T)\) to \(v_{\ell}(T^{\prime})\):
\[v(T^{\prime})-v(T)=(l_{i}(r_{i}+r_{j}))e_{i}+(r_{i}r_{j})e_{j}-\big((l_{i}r_{i})e_{i}+((l_{i}+r_{i})r_{j})e_{j}\big)\]
\[=(l_{i}r_{i})e_{i}+(l_{i}r_{j})e_{i}+(r_{i}r_{j})e_{j}-(l_{i}r_{i})e_{i}-(l_{i}r_{j})e_{j}-(r_{i}r_{j})e_{j}=(l_{i}r_{j})e_{i}-(l_{i}r_{j})e_{j}=(l_{i}r_{j})(e_{i}-e_{j}).\]
Thus \(i\geq j\) in \(\operatorname{poset}_{v_{\ell}(T)}(a_{\ell})\).
2. Now suppose that \(\ell_{i}\) is a right child of \(\ell_{j}\). An analogous computation of \(v(T^{\prime})-v(T)\) for the corresponding tree rotation again gives a positive multiple of \(e_{i}-e_{j}\), so \(i\geq j\) in \(\operatorname{poset}_{v_{\ell}(T)}(a_{\ell})\) in this case as well.

In both cases the edge directions at \(v_{\ell}(T)\) encode exactly the covering relations of \(p_{\ell}(T)\), so \(\operatorname{poset}_{v_{\ell}(T)}(a_{\ell})=p_{\ell}(T)\), and summing over the vertices of \(a_{\ell}\) yields (4.1).

After applying the Fock functor to pass to the Hopf algebra \(A\) of associahedra, let \(\mathrm{RBT}_{n}\) denote the set of unlabelled RBT posets on \(n\) nodes.

**Theorem 4.4**.: _The Brion map on the Hopf algebra of associahedra is given by_

\[B(a_{n})=\sum_{p\in\mathrm{RBT}_{n}}2^{n-\max(p)-\mathrm{sym}(p)}\,p\qquad\text{for }n\in\mathbb{N}\]

_where \(\max(p)\) and \(\mathrm{sym}(p)\) are the numbers of maximal and symmetric elements, respectively, of the poset \(p\)._

Proof.: Let \(p\in\mathrm{RBT}_{n}\). We want to count how many rooted binary trees \(T\in Y_{n}\) turn into the poset diagram for \(p\) when we forget the distinction between left and right children. Working backwards from an RBT poset \(p\), we need to count the number of distinct ways to assign left and right branches to each node of \(p\). If a node is maximal in \(p\), then no assignment of left and right branches can be made. Likewise no assignment needs to be made for symmetric nodes, since both possibilities would result in the same RBT. All other nodes in \(p\) have two ways to assign left and right branches, and these choices are all distinct.
Thus in total there are \(2^{n-\max(p)-\mathrm{sym}(p)}\) different ways to assign left and right branches to the poset \(p\) and obtain an RBT in \(Y_{n}\). This gives us the coefficient for any RBT poset which appears in the Brion map, and since all rooted binary tree posets appear, we can conclude that

\[B(a_{n})=\sum_{p\in\mathrm{RBT}_{n}}2^{n-\max(p)-\mathrm{sym}(p)}p.\]

**Example 4.5**.: On \(a_{3}\) there are two unlabelled RBT posets: the chain on three nodes and the poset consisting of a minimal element covered by two maximal elements. Accordingly,

\[B(a_{3})=2^{3-1-0}\cdot(\text{the chain on three nodes})+2^{3-2-1}\cdot(\text{the poset with one minimal and two maximal elements}),\]

so the coefficients sum to \(4+1=5=C_{3}\), the number of vertices of \(a_{3}\).

**Corollary 4.6**.: _The \(n\)th Catalan number satisfies \(C_{n}=\sum_{p\in\mathrm{RBT}_{n}}2^{n-\max(p)-\mathrm{sym}(p)}\)._

Proof.: The associahedron \(a_{n}\) has \(C_{n}\) vertices, one for each tree in \(Y_{n}\), and by Theorem 4.4 the coefficient \(2^{n-\max(p)-\mathrm{sym}(p)}\) counts the trees in \(Y_{n}\) whose RBT poset is \(p\); summing over \(p\in\mathrm{RBT}_{n}\) therefore gives \(|Y_{n}|=C_{n}\).

**Corollary 4.7**.: _The \(n\)th Catalan number \(C_{n}\) is odd if and only if \(n=2^{k}-1\) for some \(k\in\mathbb{N}\)._

Proof.: We have expressed \(C_{n}\) as a sum of powers of two which depends on RBT posets. For a given poset \(p\), we have \(n-\max(p)-\operatorname{sym}(p)=0\) if and only if all nodes are either symmetric or maximal. This means every node has either \(0\) or \(2\) children. Call an RBT poset \(p\) with this property **totally symmetric**. By Corollary 4.6 it is clear that \(C_{n}\) is odd if and only if there are an odd number of totally symmetric posets in \(\operatorname{RBT}_{n}\). By definition, all totally symmetric RBT posets must be constructed as follows. We start with a root (level \(1\)), which has two children (level \(2\)), which each have two children (level \(3\)), and so on. If we repeat this process to level \(k\), then the resulting poset has \(2^{k}-1\) nodes, each of which is either maximal or symmetric. Thus the number of totally symmetric posets in \(\operatorname{RBT}_{n}\) is \(1\) if \(n=2^{k}-1\) for some \(k\in\mathbb{N}\) and \(0\) otherwise.

### The Brion Map on Orbit Polytopes

We now consider the Brion map on orbit polytopes, making use of the correspondence between orbit polytopes and integer compositions (see Section 2.4.3). Given a composition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{k})\) of the integer \(n\), note that the number of vertices of an orbit polytope in the class \(\mathcal{O}_{\lambda}\) is \(\frac{n!}{\lambda_{1}!\cdots\lambda_{k}!}\). Construct the unlabelled \(\lambda\)**-layered poset** \(p(\lambda)\) with \(\lambda_{1}\) nodes on the first level, \(\lambda_{2}\) on the level above that, and so on until the top level which has \(\lambda_{k}\) nodes; include all possible covering relations between each consecutive pair of levels (see Figure 4.7).

Figure 4.7: The \(\lambda\)-layered poset for \(\lambda=(3,1,5,2,5)\).

**Theorem 4.8**.: _For \(\mathcal{O}_{\lambda}\in\overline{\mathbf{OP}}[I]\), the Brion map on the Hopf monoid of orbit polytopes is given by_

\[B(\mathcal{O}_{\lambda})=\sum_{\begin{subarray}{c}p\text{ a labelling}\\ \text{of }p(\lambda)\text{ by }I\end{subarray}}p.\]

Proof.: We focus on the case where \(I=\{1,2,\ldots,n\}\); the proof for general sets \(I\) is analogous. Recall that \(\mathcal{O}_{\lambda}\) is a normal equivalence class of orbit polytopes, as defined in Section 2.3. Normally equivalent polytopes have the same vertex cones, so consider the orbit polytope \(O\in\mathbb{R}^{n}\) of the class \(\mathcal{O}_{\lambda}\) whose vertices are all permutations of the vertex

\[v=(\underbrace{k,\ldots,k}_{\lambda_{1}},\ldots,\underbrace{2,\ldots,2}_{\lambda_{k-1}},\underbrace{1,\ldots,1}_{\lambda_{k}}).\]

We claim that \(\operatorname{poset}_{v}(O)=p(\lambda)\). A neighbor \(w\) of \(v\) is obtained by swapping the positions of two consecutive values in \(v\); i.e. by swapping one of the \(1\)s with one of the \(2\)s, one of the \(j\)'s with one of the \(j+1\)'s, etc.
Suppose \(w\) is obtained by swapping a \(k-(i-1)\) with a \(k-i\) in \(v\) for some \(1\leq i<k\):

\[w=(\ldots,\underbrace{k-(i-1),\ldots,k-i,\ldots,k-(i-1)}_{\lambda_{i}},\underbrace{k-i,\ldots,k-(i-1),\ldots,k-i}_{\lambda_{i+1}},\ldots)\]

This means that the positions where \(v\) and \(w\) differ are \(\lambda_{1}+\cdots+\lambda_{i-1}+a\) and \(\lambda_{1}+\cdots+\lambda_{i}+b\) for some \(1\leq a\leq\lambda_{i}\) and \(1\leq b\leq\lambda_{i+1}\), and the edge direction vector \(w-v\) is \(e_{\lambda_{1}+\cdots+\lambda_{i}+b}-e_{\lambda_{1}+\cdots+\lambda_{i-1}+a}\). Hence we have the relation \(\lambda_{1}+\cdots+\lambda_{i}+b\geq\lambda_{1}+\cdots+\lambda_{i-1}+a\) in \(\operatorname{poset}_{v}(O)\) for any choices of \(1\leq a\leq\lambda_{i}\) and \(1\leq b\leq\lambda_{i+1}\). From this we can deduce that \(\operatorname{poset}_{v}(O)\) is the \(\lambda\)-layered poset labelled with \(1,2,\ldots,\lambda_{1}\) on the first level, \(\lambda_{1}+1,\lambda_{1}+2,\ldots,\lambda_{1}+\lambda_{2}\) on the second level, and so on. To find the vertex cone for another vertex of \(O\), we can permute the coordinates of points in \(O\) by any element of \(S_{n}\). This would permute the labels of the corresponding poset, but would maintain the underlying unlabelled structure. Thus the vertex posets are all labellings of \(p(\lambda)\).

**Theorem 4.9**.: _The Brion map on the Hopf algebra of orbit polytopes is given by_

\[B(\mathcal{O}_{\lambda})=\frac{n!}{\lambda_{1}!\cdots\lambda_{k}!}p(\lambda).\]

Proof.: Choose \(\lambda_{1}\) labels for the bottom level of \(p(\lambda)\), \(\lambda_{2}\) labels for level \(2\), and so on. Thus the number of such labellings is the multinomial coefficient \(\frac{n!}{\lambda_{1}!\cdots\lambda_{k}!}\). Alternatively, this is the number of vertices of a representative of \(\mathcal{O}_{\lambda}\).

## 5 The Dual Brion Map for \(\overline{\mathbf{\Pi}}\), \(\overline{\mathbf{A}}\), and \(\overline{\mathbf{OP}}\)

The **dual Brion map** \(B^{*}:\mathbf{P}^{*}\to\mathbf{GP}^{*}\) is the map dual to the Brion map \(B:\mathbf{GP}\to\mathbf{P}\) which is defined linearly on the basis elements by \(p^{*}\mapsto p^{*}\circ B\). This is again a morphism of Hopf monoids. Since \(\mathbf{P}^{*}\) and \(\mathbf{GP}^{*}\) are cocommutative, by Theorem 2.14 we may view these Hopf monoids as the universal enveloping monoids of their respective primitives. Hence we do not lose information by considering the restriction of the dual Brion map to the primitive elements: \(B^{*}|_{\mathcal{P}(\mathbf{P}^{*})}:\mathcal{P}(\mathbf{P}^{*})\to\mathcal{P}(\mathbf{GP}^{*})\). We can further restrict the image of this dual map to specific submonoids \(\overline{\mathbf{\Pi}}^{*},\overline{\mathbf{A}}^{*},\overline{\mathbf{OP}}^{*}\), which is a dual notion to what we did in Section 4 when restricting our domain. Because we have inclusion maps \(\iota_{\overline{\mathbf{\Pi}}}:\overline{\mathbf{\Pi}}\to\overline{\mathbf{GP}}\), \(\iota_{\overline{\mathbf{A}}}:\overline{\mathbf{A}}\to\overline{\mathbf{GP}}\), \(\iota_{\overline{\mathbf{OP}}}:\overline{\mathbf{OP}}\to\overline{\mathbf{GP}}\) which are injective, the dual maps \(\iota_{\overline{\mathbf{\Pi}}}^{*}\), \(\iota_{\overline{\mathbf{A}}}^{*}\), \(\iota_{\overline{\mathbf{OP}}}^{*}\) are surjective. We can again restrict \(\iota^{*}\) to the primitives.
Denote the following compositions

\[B^{*}_{\overline{\mathbf{\Pi}}}=\iota_{\overline{\mathbf{\Pi}}}^{*}|_{\mathcal{P}(\overline{\mathbf{GP}}^{*})}\circ B^{*}|_{\mathcal{P}(\mathbf{P}^{*})},\quad B^{*}_{\overline{\mathbf{A}}}=\iota_{\overline{\mathbf{A}}}^{*}|_{\mathcal{P}(\overline{\mathbf{GP}}^{*})}\circ B^{*}|_{\mathcal{P}(\mathbf{P}^{*})},\quad B^{*}_{\overline{\mathbf{OP}}}=\iota_{\overline{\mathbf{OP}}}^{*}|_{\mathcal{P}(\overline{\mathbf{GP}}^{*})}\circ B^{*}|_{\mathcal{P}(\mathbf{P}^{*})}.\]

This dual inclusion map selects the terms in the image of \(B^{*}\) which are the linear functionals associated to the primitives in each respective Hopf monoid. Finally, composing with the Fock functor allows us to pass to Hopf algebras and define the maps we will now explore: \(B^{*}_{\Pi}=\mathcal{F}(B^{*}_{\overline{\mathbf{\Pi}}}),B^{*}_{A}=\mathcal{F}(B^{*}_{\overline{\mathbf{A}}}),B^{*}_{OP}=\mathcal{F}(B^{*}_{\overline{\mathbf{OP}}})\). Since the primitives given by elements \(p^{*}\in\mathbf{P}^{*}\) are dual to connected posets \(p\in\mathbf{P}\), we have that \(B^{*}(p^{*})\) is a linear functional sending \(Q\in\overline{\mathbf{GP}}\) to the coefficient of the poset \(p\) in the evaluation of the Brion map \(B(Q)\). If \(Q\in\mathcal{P}(\overline{\mathbf{GP}})\) is a single generalized permutahedron, then \(B^{*}(p^{*})(Q)\) will be the coefficient of the Brion map if the vertex poset of \(Q\) at some vertex is the same as \(p\), and \(0\) otherwise.

**Theorem 5.1**.: _The dual Brion map restricted to permutahedra, associahedra, and orbit polytopes is given as follows:_

1. _The dual Brion map over the dual Hopf algebra of permutahedra is defined as_ \[B^{*}_{\Pi}(p^{*})=\begin{cases}n!\,\pi^{*}_{n}&\text{if $p$ is an unlabeled chain of $n$ nodes},\\ 0&\text{otherwise}.\end{cases}\]
2. _The dual Brion map over the dual Hopf algebra of associahedra is defined as_ \[B^{*}_{A}(p^{*})=\begin{cases}2^{n-\max(p)-\mathrm{sym}(p)}\,a^{*}_{n}&\text{if $p$ is an unlabeled RBT poset on $[n]$},\\ 0&\text{otherwise}.\end{cases}\]
3. _The dual Brion map over the dual Hopf algebra of orbit polytopes is defined as_ \[B^{*}_{OP}(p^{*})=\begin{cases}\frac{n!}{\lambda_{1}!\cdots\lambda_{k}!}\,\mathcal{O}^{*}_{\lambda}&\text{if $p$ is $p(\lambda)$ for some $\lambda$},\\ 0&\text{otherwise}.\end{cases}\]

Proof.: When considering the Brion map over Hopf algebras, for a given permutahedron \(\pi_{n}\), associahedron \(a_{n}\), and orbit polytope \(\mathcal{O}_{\lambda}\) we have

\[B(\pi_{n})=n!\,c_{n},\quad B(a_{n})=\sum_{\text{RBT poset $p$}}2^{n-\max(p)-\mathrm{sym}(p)}\,p,\quad B(\mathcal{O}_{\lambda})=\frac{n!}{\lambda_{1}!\cdots\lambda_{k}!}p(\lambda)\]

where \(c_{n}\) is the unlabeled chain on \(n\) nodes and \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) (see Sections 4.1 to 4.3). We now consider \(B^{*}(p^{*})\) which in each case is a linear functional from the Hopf algebra (\(\Pi\), \(A\), or \(OP\)) to \(\mathbb{R}\). Since the primitives in \(\mathbf{GP}^{*}\) corresponded to those generalized permutahedra in \(\mathbf{GP}\) which are not products (Proposition 3.2) we only consider the linear functional evaluated on these elements of \(\Pi\), \(A\), and \(OP\). If \(p\) is a poset appearing in the Brion map... 1....of \(\pi_{n}\in\Pi\), then \(p\) is the unlabeled chain \(c_{n}\) on \(n\) nodes, and hence \[\left(B^{*}_{\Pi}(p^{*})\right)\left(\pi_{n}\right)=p^{*}(B(\pi_{n}))=p^{*}(n!
\,p)=n!\] Moreover, \(\left(B^{*}(p^{*})\right)\left(\pi_{m}\right)=0\) for \(m\neq n\) since \(p\) has \(n\) nodes but the posets which appear in \(B(\pi_{m})\) have \(m\) nodes. 2....of \(a_{n}\in A\), then \(p\) is an unlabeled RBT poset on \(n\) nodes, and thus \[\left(B_{A}^{*}(p^{*})\right)\left(a_{n}\right)=p^{*}\left(B(a_{n})\right)=\sum_{ \text{RBT poset }q}2^{n-\max(q)-\operatorname{sym}(q)}\,p^{*}(q)=2^{n-\max(p)- \operatorname{sym}(p)}\] Moreover, \(\left(B^{*}(p^{*})\right)\left(a_{m}\right)=0\) for \(m\neq n\) by the same reasoning as above. 3....of \(\mathcal{O}_{\lambda}\in OP\), then \(p\) is the unlabeled \(\lambda\)-layered poset \(p(\lambda)\) and \[\left(B_{OP}^{*}(p^{*})\right)\left(\mathcal{O}_{\lambda}\right)=p^{*}\left(B (\mathcal{O}_{\lambda})\right)=p^{*}\left(\frac{n!}{\lambda_{1}!\cdots\lambda _{k}!}\,p\right)=\frac{n!}{\lambda_{1}!\cdots\lambda_{k}!}\] Moreover if \(\lambda^{\prime}\) is a different composition than \(\lambda\), we have \(\left(B^{*}(p^{*})\right)\left(\mathcal{O}_{\lambda^{\prime}}\right)=0\). If \(p\) is any other poset not appearing as a term in the evaluation of the Brion map, then the linear functional \(B^{*}(p^{*})\) is identically \(0\). ## 6 Acknowledgements We thank Federico Ardila for valuable guidance and mentorship, and Tracy Camacho for fruitful conversations. This work first appeared in the first author's Masters thesis, which was co-advised by Federico Ardila and the second author at San Francisco State University. The second author was partially supported by grant 2018-03968 of the Swedish Research Council as well as the Goran Gustafsson Foundation.
2306.12378
Boundedness in $L_p$ spaces for the Hartley-Fourier convolutions operator and their applications
The paper deals with $L_p$-boundedness of the Hartley-Fourier convolutions operator and their applied aspects. We establish various new Young-type inequalities and obtain the structure of a normed ring in Banach space when equipping it with such convolutional multiplication. Weighted $L_p$-norm inequalities of these convolutions are also considered. As applications, we investigate the solvability and the bounded $L_1$-solution of a class of Fredholm-type integral equations and linear Barbashin's equations with the help of factorization identities of such convolutions. Several examples are provided to illustrate the obtained results to ensure their validity and applicability.
Trinh Tuan
2023-06-21T16:50:00Z
http://arxiv.org/abs/2306.12378v3
Boundedness in \(L_{p}\) spaces for the Hartley-Fourier convolutions operator and their applications ###### Abstract. The paper deals with \(L_{p}\)-boundedness of the Hartley-Fourier convolutions operator and their applied aspects. We establish various new Young-type inequalities and obtain the structure of a normed ring in Banach space when equipping it with such convolutional multiplication. Weighted \(L_{p}\)-norm inequalities of these convolutions are also considered. As applications, we investigate the solvability and the bounded \(L_{1}\)-solution of a class of Fredholm-type integral equations and linear Barbashin's equations with the help of factorization identities of such convolutions. Several examples are provided to illustrate the obtained results to ensure their validity and applicability. **Keywords.**\(L_{p}\)-Boundedness. Convolutions. Hartley-Fourier transforms. Young's inequalities. Wiener-Tauberian's theorem. Barbashin's equations. Fredholm's integral equations. Accepted 02 May 2023 by _J. Math. Sci._ Published 11 July 2023 E-mail: [email protected] ## 1. Introduction Integral inequalities are fundamental tools in the study of qualitative as well as quantitative properties of integral transformations and solutions of differential equations. In particular, the convolution inequality is indispensable because there are so many integral transformations and solutions of differential equations that are represented as convolutions (see [8]). Certainly among all the convolutional transformations, the best known is the Fourier convolution. In 1912, Young. W. H introduced Young's inequality for the bounded estimation of the Fourier convolution (refer [29]), which is \(\|(f\ast_{F}g)\|_{L_{r}(\mathbb{R})}\leq\|f\|_{L_{p}(\mathbb{R})}\ \|g\|_{L_{q}(\mathbb{R})}\), where \(p,q,r>1\) such that \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}\). R.A. Adams and J.F. Fournier generalized Young's inequality for the Fourier convolution (refer to Theorem 2.24, page 33, in [1]) to include a weight \[\bigg{|}\int\limits_{\mathbb{R}^{n}}(f\ast_{F}g)(x).\ \omega(x)dx\bigg{|}\leq\|f\|_{L_{p}( \mathbb{R}^{n})}\|g\|_{L_{q}(\mathbb{R}^{n})}\|\omega\|_{L_{r}(\mathbb{R}^{n})},\] where \(p,q,r\) are real numbers in \((1,\infty)\) such that \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2\) and \(f\in L_{p}(\mathbb{R}^{n}),g\in L_{q}(\mathbb{R}^{n}),\omega\in L_{r}(\mathbb{ R}^{n})\) with the Fourier convolution defined by \((f\ast_{F}g)(x):=\frac{1}{\sqrt{2\pi}}\int f(y)\ g(x-y)dy,\ x\in\mathbb{R}\), then Young's inequality is seen as a consequence of Young's theorem [16]. Nevertheless, Young's inequality does not hold for the important case \(f,g\in L_{2}(\mathbb{R}).\) Based on the technique in [1, 10, 23] and derived from the previous results for Hartley-Fourier convolutions [9, 21, 26], we study the \(L_{p}\) norm estimates of these convolutions and applicabilities, which are also the main contributions of this paper. The obtained results are new Young type inequalities with their important applications in solving classes of integro-differential equations. The applications aspect will be detailed here, for a large class of integral operators, with the results that will exemplify three different topics: **(i)** Non-commutative normed ring structure on Banach space \(L_{1}(\mathbb{R})\); **(ii)** Solvability and boundedness of \(L_{1}\)-solutions of Fredholm's integral equation of the second kind; **(iii)** Solvability and boundedness of \(L_{1}\)-solutions for Barbashin's equations & Cauchy-type problem. 
In addition, it should be noted that the factorization property of Hartley-Fourier convolutions is crucial in solving corresponding convolution-type equations [5, 21, 26]. It is also clear that convolution-type equations are very often used in the modeling of a broad range of different problems [6, 7, 25], so additional knowledge about their solvability is very welcome.

This paper is divided into five sections: the introduction and the four parts described below. Section 2 is devoted to the notions of the Fourier and Hartley transforms and the relationship between them. We also recall the definitions of the Hartley and Hartley-Fourier convolutions and some results on the factorization properties of these convolutions, and we prove that the assertion of Wiener-Levy's theorem holds for the Hartley transform (see Lemma 2.1). Section 3 consists of three subsections. In Subsection 3.1, we prove Young's theorem for the Hartley-Fourier generalized convolution (2.4) and corollaries for estimation in the \(L_{1}(\mathbb{R})\) space. The general formulation of the Young-type inequality for the Hartley convolution (2.5) is stated in Subsection 3.2. In Subsection 3.3, using Holder's inequality, Fubini's theorem, and integral transforms, we establish norm inequalities in the weighted \(L_{p}(\mathbb{R},\rho_{j})\) spaces for each of the above convolutions. Section 4 deals with applications of the constructed convolutions. In Subsection 4.1, the linear space \(L_{1}(\mathbb{R})\), equipped with the above-mentioned convolutions, becomes a normed ring, having no unit, which is non-commutative, and could be used in the theory of Banach algebras. Subsections 4.2 and 4.3 contain the most important results of this section, where Fredholm's integral equations of the second kind and Barbashin's equation are considered simultaneously. By using the constructed convolutions together with the help of the Wiener-Levy theorem and Schwartz's function classes, we provide necessary and sufficient conditions for the solvability of the integral equations of convolution type and obtain explicit \(L_{1}\)-solutions. The most obvious difference between solving the problems of Subsections 4.2 and 4.3 in \(L_{1}(\mathbb{R})\) and the previous results in [24, 26] is that the Fourier and Hartley transforms are isometric (unitary) isomorphisms \(L_{2}(\mathbb{R})\longleftrightarrow L_{2}(\mathbb{R})\) [17], while this is no longer true for \(L_{1}(\mathbb{R})\). Furthermore, in the space \(L_{1}(\mathbb{R})\), Theorem (68, page 92, in [22]) is no longer true. To overcome that, we use simultaneously the density in \(L_{1}(\mathbb{R})\) of Schwartz's function classes [14], the Wiener-Levy Lemma 2.1, and factorization properties. Subsection 4.4 is concerned with the Cauchy-type problem. The techniques used here to check whether or not the solution of the problem satisfies the initial value theorem come from [11] and [28]. Namely, we follow closely the strategy of Wiener-Tauberian's theorem. Some computational examples can be found in the final section of the paper to illustrate the obtained results and to ensure their validity and applicability.

## 2. Preliminaries

We recall some notions and results coming from [4, 5, 9, 11, 12, 21, 22, 25, 26]. We denote by (F) the Fourier integral transform ([4, 22]) of a function \(f\) defined by \((Ff)(y)=\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}}e^{-ixy}f(x)dx\) with \(y\in\mathbb{R}\), where \(e^{-ixy}=\frac{1-i}{2}\operatorname{Cas}(xy)+\frac{1+i}{2}\operatorname{Cas}(-xy)\).
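Indeed, writing \(\operatorname{Cas}(\pm xy)=\cos(xy)\pm\sin(xy)\), one checks directly that

\[\frac{1-i}{2}\operatorname{Cas}(xy)+\frac{1+i}{2}\operatorname{Cas}(-xy)=\frac{(1-i)+(1+i)}{2}\cos(xy)+\frac{(1-i)-(1+i)}{2}\sin(xy)=\cos(xy)-i\sin(xy)=e^{-ixy}.\]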
The Hartley transform was proposed as an alternative to the Fourier transform by R. V. L. Hartley in 1942 (see [5]). The Hartley transform is an alternate means of analyzing a given function in terms of its sinusoids. This transform is its own inverse and an efficient computational tool in case of real-value functions. The Hartley transform of a function is a spectral transform and can be obtained from the Fourier transform by replacing the exponential kernel \(exp(-ixy)\) by \(\operatorname{Cas}(-xy)\). The Hartley transform of a function \(f\) can be expressed as either \[\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}f\big{)}(y)=\frac{1}{\sqrt{2\pi}}\int_{ \mathbb{R}}f(x)\operatorname{Cas}(\pm xy)dx,\ y\in\mathbb{R}, \tag{2.1}\] where \(\operatorname{Cas}(\pm xy)=\cos(xy)\pm\sin(xy)\). Thus, it is obvious that we have the following correlation between the Hartley transform and the Fourier transform as follows \[\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}f\big{)}(y)=F\left(\frac{1\pm i}{2}f(x) +\frac{1\mp i}{2}f(-x)\right)(y), \tag{2.2}\] and \[(Ff)(y)=H_{\big{\{}\frac{1}{2}\big{\}}}\left(\frac{1-i}{2}f(\pm x)+\frac{1+i}{ 2}f(\mp x)\right)(y). \tag{2.3}\] Actually, the Hartley transform can be applied to approximate the continuous transformation of a non-periodic signal of finite duration. Being a real-valued function, it has some computational advantages such as memory efficiency and faster harmonic solutions over the Fourier transform which is a complex tool, readers refer to details in [5, 12]. Consequently, the method based on the Hartley transform is helpful to estimate harmonic effects for inrush current by giving a unified iterative solution that exhibits good convergence. **Definition 1**.: _The following generalized convolution related to Hartley-Fourier \((H_{1},F)\) transforms was introduced recently (see [21, 26]) defined by_ \[\big{(}f\underset{H_{1},F}{*}g\big{)}(x):=\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R }}g(y)[f(x+y)+f(x-y)+if(-x-y)-if(-x+y)]dy,\quad\forall x\in\mathbb{R}. \tag{2.4}\] _For brevity, we call (2.4) it the Hartley-Fourier generalized convolution._ **Definition 2**.: _The convolution of two functions \(f\) and \(g\) for the Hartley \(H_{\big{\{}\frac{1}{2}\big{\}}}\) transform (see [9, 26]) is defined by_ \[\big{(}f\underset{H_{\big{\{}\frac{1}{2}\big{\}}}}{*}g\big{)}(x):=\frac{1}{2 \sqrt{2\pi}}\int_{\mathbb{R}}f(y)[g(x+y)+g(x-y)+g(-x+y)-g(-x-y)]dy,\quad\forall x \in\mathbb{R}. \tag{2.5}\] _For brevity, we call (2.5) it the Hartley convolution._ **Proposition 2.1**.: i) _[_21_]__. Let \(f,g\) be the functions belonging to \(L_{1}(\mathbb{R})\), then Hartley-Fourier generalized convolution is \(\big{(}f\underset{H_{1},F}{*}g\big{)}(x)\in L_{1}(\mathbb{R})\). Furthermore, the factorization equality (2.6) is valid._ \[H_{1}\big{(}f\underset{H_{1},F}{*}g\big{)}(y)=(H_{1}f)(y)(Fg)(y),\quad y\in \mathbb{R}. \tag{2.6}\] ii) _[9, 26]_. Suppose that \(f,g\in L_{1}(\mathbb{R})\), then Hartley convolution (2.5) is well-defined, which means that \(\big{(}f\underset{H_{\big{\{}\frac{1}{2}\big{\}}}}{*}g\big{)}\) belongs to \(L_{1}(\mathbb{R})\). And the following factorization equality_ \[H_{\big{\{}\frac{1}{2}\big{\}}}\big{(}f\underset{H_{\big{\{}\frac{1}{2}\big{\} }}}{*}g\big{)}(y)=\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}f\big{)}(y)\big{(}H_{ \big{\{}\frac{1}{2}\big{\}}}g\big{)}(y),\quad y\in\mathbb{R}, \tag{2.7}\] _holds true for any \(f,g\in L_{1}(\mathbb{R})\)._ We need the following auxiliary lemmas. 
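Before turning to these lemmas, note that the factorization identities above are easy to test numerically. The sketch below is only an illustration and is not taken from the paper: the quadrature grid, the Gaussian-type test functions, and the helper names `hartley` and `hartley_conv` are all assumptions made here. It discretizes the Hartley transform (2.1) and the Hartley convolution (2.5) by Riemann sums and checks the factorization identity (2.7) up to discretization error.

```python
import numpy as np

# Assumed quadrature grid: a crude Riemann-sum discretisation of the real line.
x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]

def hartley(f_vals, y):
    """(H_1 f)(y) = (2*pi)^(-1/2) * integral of f(x) Cas(xy) dx, with Cas(t) = cos t + sin t."""
    cas = np.cos(np.outer(y, x)) + np.sin(np.outer(y, x))
    return (cas @ f_vals) * dx / np.sqrt(2.0 * np.pi)

def hartley_conv(f, g):
    """Hartley convolution (2.5) of f and g, evaluated at every point of the grid x."""
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        kernel = g(xi + x) + g(xi - x) + g(-xi + x) - g(-xi - x)
        out[i] = np.sum(f(x) * kernel) * dx / (2.0 * np.sqrt(2.0 * np.pi))
    return out

f = lambda t: np.exp(-t ** 2)                    # assumed test functions in L_1(R)
g = lambda t: (1.0 + t) * np.exp(-2.0 * t ** 2)

y = np.linspace(-3.0, 3.0, 25)
lhs = hartley(hartley_conv(f, g), y)             # H_1(f * g)
rhs = hartley(f(x), y) * hartley(g(x), y)        # (H_1 f)(y) (H_1 g)(y), identity (2.7)
print(np.max(np.abs(lhs - rhs)))                 # small: only quadrature error remains
```

The same scheme, with the kernel of (2.4) and with the second factor transformed by (F) instead of \(H_{1}\), checks the factorization identity (2.6).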
The Wiener-Levy's Theorem (refer [11]) for the Fourier transform stated that if \(\xi\in L_{1}(\mathbb{R})\), then \(1+(F\xi)(y)\neq 0\) for any \(y\in\mathbb{R}\) is a necessary and sufficient condition for the existence of a function \(k\) belonging to \(L_{1}(\mathbb{R})\) such that \((Fk)(y)=\frac{(F\xi)(y)}{1+(F\xi)(y)}\), where \(\xi\in L_{1}(\mathbb{R})\). This theorem still holds true for the Hartley transform and we will detail it below. **Lemma 2.1**.: **(Wiener-Levy's Theorem for Hartley transform)** _For the Hartley transform is defined by (2.1) and suppose that \(\varphi\in L_{1}(\mathbb{R})\), then \(1+\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\varphi\big{)}(y)\neq 0\), \(\forall y\in\mathbb{R}\) is a necessary and sufficient condition for the existence of a function \(\ell\in L_{1}(\mathbb{R})\) such that \(\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\ell\big{)}(y)=\frac{\big{(}H_{\big{\{} \frac{1}{2}\big{\}}}\varphi\big{)}(y)}{1+\big{(}H_{\big{\{}\frac{1}{2}\big{\}} }\varphi\big{)}(y)},\) for any \(y\in\mathbb{R}\)._ Proof.: We put \(\xi(x):=\frac{1+i}{2}\varphi(x)+\frac{1+i}{2}\varphi(-x)\). This implies that \(\xi\in L_{1}(\mathbb{R})\) if and only if \(\varphi\) belongs to \(L_{1}(\mathbb{R})\). Hence, from the correlation between Hartley and Fourier transforms (2.2), we obtain \((F\xi)(y)=\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\varphi\big{)}(y),\forall y\in \mathbb{R}\). Applying Wiener-Levy's theorem to the Fourier (F) transform (see[11]), we have \(1+(F\xi)(y)\neq 0\), which implies that \(1+\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\varphi\big{)}(y)\neq 0\), \(\forall y\in\mathbb{R}\) is a necessary and sufficient condition for the existence of a function \(k\in L_{1}(\mathbb{R})\) such that \[(Fk)(y)=\frac{(F\xi)(y)}{1+(F\xi)(y)}=\frac{\big{(}H_{\big{\{}\frac{1}{2}\big{\} }}\varphi\big{)}(y)}{1+\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\varphi\big{)}(y)}, \forall y\in\mathbb{R}. \tag{2.8}\] Conversely, if set \(\ell(x):=\frac{1-i}{2}k(\pm x)+\frac{1+i}{2}k(\mp x)\), by the same above argument, we deduce \(\ell\) belongs to the \(L_{1}(\mathbb{R})\) if and only if \(k\in L_{1}(\mathbb{R})\) and by formula (2.3), we obtain \[\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\ell\big{)}(y)=(Fk)(y),\forall y\in \mathbb{R}. \tag{2.9}\] Coupling (2.8) with (2.9), we have \(\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\ell\big{)}(y)=(Fk)(y)=\frac{(F\xi)(y)}{ 1+(F\xi)(y)}=\frac{\big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\varphi\big{)}(y)}{1+ \big{(}H_{\big{\{}\frac{1}{2}\big{\}}}\varphi\big{)}(y)},\forall y\in\mathbb{R}\). **Lemma 2.2**.: _Suppose that \(g\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R})\) such that \(g^{\prime},g^{\prime\prime}\in L_{1}(\mathbb{R})\) and \(\lim_{|x|\to\infty}g(x)=\lim_{|x|\to\infty}g^{\prime}(x)=0\). 
For any function \(f\) belonging to \(L_{1}(\mathbb{R})\), we obtain_

\[H_{\big{\{}\frac{1}{2}\big{\}}}\bigg{\{}\left(1-\frac{d^{2}}{dx^{2}}\right)\big{(}f\underset{H_{\big{\{}\frac{1}{2}\big{\}}}}{*}g\big{)}(x)\bigg{\}}(y)=(1+y^{2})H_{\big{\{}\frac{1}{2}\big{\}}}\big{(}f\underset{H_{\big{\{}\frac{1}{2}\big{\}}}}{*}g\big{)}(y). \tag{2.10}\]

Proof.: By (2.1) and the assumptions on \(g(x)\), we have

\[\Big{(}H_{\{\frac{1}{2}\}}g^{\prime\prime}(x)\Big{)}\,(y)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}g^{\prime\prime}(x)\text{Cas}(\pm xy)dx,\quad y\in\mathbb{R} \tag{2.11}\]
\[=\frac{1}{\sqrt{2\pi}}\bigg{\{}g^{\prime}(x)\text{Cas}(\pm xy)\Big{|}_{-\infty}^{\infty}\pm y\text{Cas}(\pm xy)g(x)\Big{|}_{-\infty}^{\infty}-y^{2}\int_{\mathbb{R}}g(x)\text{Cas}(\pm xy)dx\bigg{\}}=-y^{2}\big{(}H_{\{\frac{1}{2}\}}g(x)\big{)}(y).\]

Since \(f,g,g^{\prime}\), and \(g^{\prime\prime}\) are functions belonging to \(L_{1}(\mathbb{R})\), we infer that the convolutions \(\Big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g\Big{)}\), \(\Big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g^{\prime}\Big{)}\), and \(\Big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g^{\prime\prime}\Big{)}\) all belong to \(L_{1}(\mathbb{R})\) (see [21, 26]). Denote by \(\mathcal{S}\) the space of functions \(g(x)\in C^{\infty}(\mathbb{R})\) all of whose derivatives decrease rapidly as \(|x|\) tends to \(\infty\); it is also known as the Schwartz space. The closure of \(\mathcal{S}\) in the \(L_{1}\)-norm is \(L_{1}(\mathbb{R})\), which means that \(\mathcal{S}\) is dense in the \(L_{1}(\mathbb{R})\) space. Thus, for any \(g,g^{\prime}\), and \(g^{\prime\prime}\) belonging to \(L_{1}(\mathbb{R})\) there exists a sequence of functions \(\{g_{n}\}\subset\mathcal{S}\) such that \(\{g_{n}\}\to g,\ \{g^{\prime}_{n}\}\to g^{\prime}\) and \(\{g^{\prime\prime}_{n}\}\to g^{\prime\prime}\) when \(n\) tends to \(\infty\). With the above function classes, together with the assumption that \(g\) is a function belonging to the \(L_{1}(\mathbb{R})\) space, the integral in the formula (2.5) is convergent. We continue to change the order of the integration and the differentiation as follows

\[\frac{d^{2}}{dx^{2}}\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g_{n}\big{)}(x)=\frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{\infty}f(y)\frac{d^{2}}{dx^{2}}\left\{g_{n}(x+y)+g_{n}(-x+y)+g_{n}(x-y)-g_{n}(-x-y)\right\}dy\]
\[=\frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{\infty}f(y)\left\{g^{\prime\prime}_{n}(x+y)+g^{\prime\prime}_{n}(-x+y)+g^{\prime\prime}_{n}(x-y)-g^{\prime\prime}_{n}(-x-y)\right\}dy=\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g^{\prime\prime}_{n}\big{)}(x)\in L_{1}(\mathbb{R}).\]

Therefore \(\lim\limits_{n\to\infty}\frac{d^{2}}{dx^{2}}\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g_{n}\big{)}(x)=\lim\limits_{n\to\infty}\Big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g^{\prime\prime}_{n}\Big{)}(x)\). This implies that \(\frac{d^{2}}{dx^{2}}\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g\big{)}(x)=\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g^{\prime\prime}\big{)}(x)\) belongs to \(L_{1}(\mathbb{R})\).
Coupling factorization equality (2.7) with (2.11), we have \[H_{\{\frac{1}{2}\}}\bigg{\{}\frac{d^{2}}{dx^{2}}\big{(}f\underset{H_{\{ \frac{1}{2}\}}}{*}g\big{)}(x)\bigg{\}}(y) =H_{\{\frac{1}{2}\}}\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g^{ \prime\prime}\big{)}(y)=\big{(}H_{\{\frac{1}{2}\}}f\big{)}(y)\big{(}H_{\{\frac{1} {2}\}}g^{\prime\prime}\big{)}(y)\] \[=-y^{2}\big{(}H_{\{\frac{1}{2}\}}f\big{)}(y)\big{(}H_{\{\frac{1}{2} \}}g\big{)}(y)\] \[=-y^{2}H_{\{\frac{1}{2}\}}\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g \big{)}(y).\] From the above equality, we infer the desired conclusion of the lemma. Throughout the article, we shall make frequent use of the weighted Lebesgue spaces \(L_{p}(\mathbb{R},w(x)dx)\), \(1\leq p<\infty\) with respect to a positive measure \(w(x)dx\) equipped with the norm for which \(\|f\|_{L_{p}(\mathbb{R},w)}\)= \(\bigg{(}\int\limits_{\mathbb{R}}|f(x)|^{p}w(x)dx\bigg{)}^{1/p}<+\infty\). If the weighted function \(w=1\), then \(L_{p}(\mathbb{R},w)\equiv L_{p}(\mathbb{R})\). In case \(p=\infty\), then the norm of functions is defined by \(\|f\|_{L_{\infty}(\mathbb{R})}\)= \(\sup\limits_{x\in\mathbb{R}}|f(x)|<+\infty\). ## 3. \(L_{p}\)-boundedness for the Hartley-Fourier convolutions ### Young's theorem for Hartley-Fourier generalized convolution **Theorem 3.1** (Young-type Theorem for Hartley-Fourier generalized convolution).: _Let \(p,q\), and \(r\) be real numbers in open interval \((1,\infty)\) such that \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2\). For any functions \(g\in L_{p}(\mathbb{R})\), \(f\in L_{q}(\mathbb{R})\), and \(h\in L_{r}(\mathbb{R})\), we obtain the following inequality_ \[\bigg{|}\int_{\mathbb{R}}\big{(}f\underset{H_{1},F}{*}g\big{)}(x)\cdot h(x)\,dx \bigg{|}\leq\sqrt{\frac{2}{\pi}}\|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{q}(\mathbb{R})} \|h\|_{L_{r}(\mathbb{R})}, \tag{3.1}\] _where \(\big{(}f\underset{H_{1},F}{*}g\big{)}\) is defined by (2.4)._ Proof.: Let \(p_{1},q_{1},r_{1}\) be the conjugate exponentials of \(p,q,r\), respectively. This means that \(\frac{1}{p}+\frac{1}{p_{1}}=\frac{1}{q}+\frac{1}{q_{1}}=\frac{1}{r}+\frac{1}{r_{1 }}=1\), together with the assumption of theorem, we get the correlation between exponential numbers as follows \[\left\{\begin{array}{l}\frac{1}{p_{1}}+\frac{1}{q_{1}}+\frac{1}{r_{1}}=1,\\ p\left(\frac{1}{q_{1}}+\frac{1}{r_{1}}\right)=q\left(\frac{1}{p_{1}}+\frac{1}{ r_{1}}\right)=r\left(\frac{1}{p_{1}}+\frac{1}{q_{1}}\right)=1.\end{array}\right. \tag{3.2}\] For simplicity, we set \[A_{1}(x,y) = |f(x+y)+f(x-y)+if(-x-y)-if(-x+y)|^{\frac{q}{p_{1}}}\cdot|h(x)|^{ \frac{r}{p_{1}}}\in L_{p_{1}}(\mathbb{R}^{2}),\] \[A_{2}(x,y) = |f(x+y)+f(x-y)+if(-x-y)-if(-x+y)|^{\frac{q}{r_{1}}}|g(x)|^{\frac{ r}{r_{1}}}\in L_{r_{1}}(\mathbb{R}^{2}),\] \[A_{3}(x,y) = |g(y)|^{\frac{p}{q_{1}}}|h(x)|^{\frac{r}{q_{1}}}\in L_{q_{1}}( \mathbb{R}^{2}).\] From (3.2), we deduce that \[\prod_{i=1}^{3}A_{i}(x,y)=|f(x+y)+f(x-y)+if(-x-y)-if(-x+y)|\ |g(x)||h(x)|,\quad \forall(x,y)\in\mathbb{R}^{2}. 
\tag{3.3}\] On the other hand, since \(q>1\), then \(t^{q}\) is a convex function, using the change of variables theorem, we obtain \[\int_{\mathbb{R}}|f(x+y)+f(x-y)+if(-x-y)-if(-x+y)|^{q}dy\] \[\leq 4^{q-1}\int_{\mathbb{R}}\left(|f(x+y)|^{q}+|f(x-y)|^{q}+|if(- x-y)|^{q}+|if(-x+y)|^{q}\right)dy=4^{q}\int_{\mathbb{R}}|f(t)|^{q}dt.\] Based on the assumption of \(f\in L_{q}(\mathbb{R}),h\in L_{r}(\mathbb{R})\), using Fubini's theorem, we obtain \(L_{p_{1}}(\mathbb{R}^{2})\)-norm estimation for the operator \(A_{1}(x,y)\) as follows \[\|A_{1}\|_{L_{p_{1}}(\mathbb{R}^{2})}^{p_{1}} =\int_{\mathbb{R}^{2}}|f(x+y)+f(x-y)+if(-x-y)-if(-x+y)|^{q}|h(x)|^ {r}dxdy\] \[\leq 4^{q}\int_{\mathbb{R}}\biggl{\{}\int_{\mathbb{R}}|f(t)|^{q}dt \biggr{\}}|h(x)|^{r}dx=4^{q}\biggl{(}\int_{\mathbb{R}}|f(t)|^{q}dt\biggr{)} \biggl{(}\int_{\mathbb{R}}|h(x)|^{r}dx\biggr{)}=4^{q}\|f\|_{L_{q}(\mathbb{R})} ^{q}\|h\|_{L_{r}(\mathbb{R})}^{r}.\] This means that \[\|A_{1}\|_{L_{p_{1}}(\mathbb{R}^{2})}=4^{\frac{q}{p_{1}}}\|f\|_{L_{q}(\mathbb{ R})}^{\frac{q}{p_{1}}}\|h\|_{L_{r}(\mathbb{R})}^{\frac{r}{p_{1}}}. \tag{3.4}\] Similar to what we did with the evaluation (3.4) of \(A_{1}(x,y)\), we also get the norm estimation of \(A_{2}\) on \(L_{r_{1}}(\mathbb{R}^{2})\) as follows \[\|A_{2}\|_{L_{r_{1}}(\mathbb{R}^{2})}=4^{\frac{q}{r_{1}}}\|f\|_{L_{q}(\mathbb{ R})}^{\frac{q}{p_{1}}}\|g\|_{L_{p}(\mathbb{R})}^{\frac{p}{p_{1}}}. \tag{3.5}\] And \(L_{q_{1}}(\mathbb{R}^{2})\)-norm estimation for the operator \(A_{3}\) has the following form \[\|A_{3}\|_{L_{q_{1}}(\mathbb{R}^{2})}^{q_{1}}=\int_{\mathbb{R}^{2}}|g(y)|^{p}|h (x)|^{r}dxdy=\bigg{(}\int_{\mathbb{R}}|g(y)|^{p}dy\bigg{)}\bigg{(}\int_{ \mathbb{R}}|h(x)|^{r}dx\bigg{)}=\|g\|_{L_{p}(\mathbb{R})}^{p}\|h\|_{L_{r}( \mathbb{R})}^{r}.\] Therefore, we have \[\|A_{3}\|_{L_{q_{1}}(\mathbb{R}^{2})}=\|g\|_{L_{p}(\mathbb{R})}^{\frac{p}{q_{1}} }\|h\|_{L_{r}(\mathbb{R})}^{\frac{r}{q_{1}}}. \tag{3.6}\] Combining (3.4)(3.5) and (3.6), under condition (3.2), we obtain \[\|A_{1}\|_{L_{p_{1}}(\mathbb{R}^{2})}\|A_{2}\|_{L_{r_{1}}(\mathbb{R}^{2})}\|A_{ 3}\|_{L_{q_{1}}(\mathbb{R}^{2})}\leq 4\|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{q}( \mathbb{R})}\|h\|_{L_{r}(\mathbb{R})}. \tag{3.7}\] Furthermore, from (2.4) and (3.3) we have \[\bigg{|}\int_{\mathbb{R}}\big{(}f\underset{n_{1},F}{*}g\big{)}(x) \cdot h(x)dx\bigg{|} \leq\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R}^{2}}|g(y)||f(x+y)+f(x-y) +if(-x-y)-if(-x+y)||h(x)|dydx\] \[=\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R}^{2}}\ \prod_{i=1}^{3}A_{i}(x,y)dxdy\] Thus \(\frac{1}{p_{1}}+\frac{1}{q_{1}}+\frac{1}{r_{1}}=1.\) Using Holder inequality and (3.7), then \(\forall g\in L_{p}(\mathbb{R}),\ f\in L_{q}(\mathbb{R}),\ h\in L_{r}(\mathbb{R})\) we have \[\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R}^{2}}\ \prod_{i=1}^{3}A_{i}(x,y)dxdy\] \[\leq\frac{1}{2\sqrt{2\pi}}\left(\int_{\mathbb{R}^{2}}|A_{1}(x,y)| ^{p_{1}}dxdy\right)^{1/p_{1}}\left(\int_{\mathbb{R}^{2}}|A_{2}(x,y)|^{r_{1}} dxdy\right)^{1/r_{1}}\left(\int_{\mathbb{R}^{2}}|A_{3}(x,y)|^{q_{1}}dxdy \right)^{1/q_{1}}\] \[=\frac{1}{2\sqrt{2\pi}}\|A_{1}\|_{L_{p_{1}}(\mathbb{R}^{2})}\|A_{ 2}\|_{L_{r_{1}}(\mathbb{R}^{2})}\|A_{3}\|_{L_{q_{1}}(\mathbb{R}^{2})}\leq\sqrt {\frac{2}{\pi}}\|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{q}(\mathbb{R})}\|h\|_{L_{r}( \mathbb{R})}\] **Remark 1**.: _Readers can find another result similar to this theorem through Theorem 2.4 in [25]._ In case the given function \(h(x)\) becomes Hartley-Fourier generalized convolution operator (2.4), the following Young-type inequality is a direct consequence of Theorem 3.1. 
**Corollary 3.1** (**Young's inequality for Hartley-Fourier generalized convolution**).: _Let \(p,q,r>1\) be real numbers, satisfying \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}\). If \(g\in L_{p}(\mathbb{R})\), \(f\in L_{q}(\mathbb{R})\), then the convolution (2.4) is well-defined and belongs to \(L_{r}(\mathbb{R})\). Moreover, the following inequality holds_ \[\left\|f\underset{H_{1},F}{*}g\right\|_{L_{r}(\mathbb{R})}\leq\sqrt{\frac{2}{ \pi}}\|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{q}(\mathbb{R})}. \tag{3.8}\] Proof.: Let \(r_{1}\) be the conjugate exponent of \(r\), i.e \(\frac{1}{r}+\frac{1}{r_{1}}=1.\) From the assumptions of Corollary 3.1, we have \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r_{1}}=2,\) which shows the numbers \(p\), \(q\), and \(r_{1}\) satisfy the conditions of Theorem 3.1 (with \(r\) being replaced by \(r_{1}\)). Therefore, if \(g\in L_{p}(\mathbb{R})\), \(f\in L_{q}(\mathbb{R})\), then the linear operator \[Lh:=\int_{\mathbb{R}}\big{(}f\underset{H_{1},F}{*}g\big{)}(x)\cdot h(x)dx\] is bounded in \(L_{r_{1}}(\mathbb{R}).\) Consequently, by the Riesz's representation theorem [19], then generalized convolution \(\big{(}f\underset{H_{1},F}{*}g\big{)}\) belongs to \(L_{r}(\mathbb{R}).\) To prove the inequality (3.8), we choose the function \[h(x):=\text{sign}\bigg{\{}\big{(}f\underset{H_{1},F}{*}g\big{)}(x)\bigg{\}}^{ r}\times\bigg{\{}\big{(}f\underset{H_{1},F}{*}g\big{)}(x)\bigg{\}}^{\frac{r}{r_{1}}}.\] Then \(h\in L_{r_{1}}(\mathbb{R}),\) with the norm \(\|h\|_{L_{r_{1}}(\mathbb{R})}=\big{\|}f\underset{H_{1},F}{*}g\|_{L_{r}( \mathbb{R})}^{\frac{r}{r_{1}}}.\) Applying inequality (3.1) to such function \(h(x),\) we get \[\big{\|}f\underset{H_{1},F}{*}g\big{\|}_{L_{r}(\mathbb{R})}^{r} =\int_{\mathbb{R}}\big{|}\big{(}f\underset{H_{1},F}{*}g\big{)}(x) \big{|}^{r}dx=\bigg{|}\int_{\mathbb{R}}\big{(}f\underset{H_{1},F}{*}g\big{)}( x)\cdot h(x)\bigg{|}\] \[\leq\sqrt{\frac{2}{\pi}}\|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{q}( \mathbb{R})}\|h\|_{L_{r_{1}}(\mathbb{R})}=\sqrt{\frac{2}{\pi}}\|g\|_{L_{p}( \mathbb{R})}\|f\|_{L_{q}(\mathbb{R})}\big{\|}f\underset{H_{1},F}{*}g\big{\|}_ {L_{r}(\mathbb{R})}^{\frac{r}{r_{1}}}.\] Since \(r-\frac{r}{r_{1}}=1\), from the above equality, we arrive at the conclusion of the corollary. Notice that, inequalities (3.1) and (3.8) do not hold for the important case \(f,g\in L_{2}(\mathbb{R}).\) We know that the problem is determining the duals of the space \(L_{p}\) with \(p\in[1,\infty).\) There are basically two cases of this problem that are \(p=1,\) and the case \(1<p<\infty.\) The major difference between these two cases is the fact that for \(1<p<\infty,\) there is a "nice"characterization of the dual of \(L_{p}\) which identifies it with \(L_{q}\) (with \(1/p+1/q=1\)), and this identification holds without any restriction on the underlying space. Otherwise, the duality of \(L_{1}\) will be defined by \(L_{\infty}\) locally, but this identification will work only if the underlying measure space is decomposable. Now let us consider the case \(p=q=r=1,\) this means \(f\) and \(g\) are functions belonging to the \(L_{1}(\mathbb{R}).\) We deduce that \(\big{(}f\underset{H_{1},F}{*}g\big{)}\) belongs to \(L_{1}(\mathbb{R})\) (see [21, 26]) and get the estimate \(L_{1}\)-norm as follows. **Theorem 3.2**.: _For any \(f,g\in L_{1}(\mathbb{R})\), then_ \[\big{\|}f\underset{H_{1},F}{*}g\big{\|}_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{2}{ \pi}}\|f\|_{L_{1}(\mathbb{R})}\|g\|_{L_{1}(\mathbb{R})}. 
\tag{3.9}\] Proof.: Indeed, by the Definition 2, we infer that \[\big{\|}f\underset{H_{1},F}{*}g\big{\|}_{L_{1}(\mathbb{R})} \leq\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R}^{2}}|g(y)|\ |f(x+y)+f(x-y)+if(-x-y)-if(-x+y)|dydx\] \[\leq\frac{1}{2\sqrt{2\pi}}\bigg{\{}\int_{\mathbb{R}^{2}}|g(y)||f(x +y)|dydx+\int_{\mathbb{R}^{2}}|g(y)|\ |f(x-y)|dydx\] \[\quad+\int_{\mathbb{R}^{2}}|g(y)|\ |if(-x-y)|dydx+\int_{\mathbb{R}^{2}}| g(y)|\ |if(-x+y)|dydx\bigg{\}}.\] Since \(f,g\in L_{1}(\mathbb{R})\), through the change of variables combined with Fubini's theorem, we obtain \[\big{\|}f\underset{H_{1},F}{*}g\big{\|}_{L_{1}(\mathbb{R})}\leq\frac{4}{2 \sqrt{2\pi}}\bigg{(}\int_{\mathbb{R}}|g(y)|dy\bigg{)}\bigg{(}\int_{\mathbb{R}} |f(t)|dt\bigg{)}=\sqrt{\frac{2}{\pi}}\|f\|_{L_{1}(\mathbb{R})}\|g\|_{L_{1}( \mathbb{R})}.\] What about the case \(r=\infty\)? **Theorem 3.3**.: _Suppose that \(p,q>1\) and satisfy \(\frac{1}{p}+\frac{1}{q}=1.\) If \(g\in L_{p}(\mathbb{R}),f\in L_{q}(\mathbb{R})\), then convolution operator (2.4) is a bounded function for any \(x\in\mathbb{R}\). Moreover, the following inequality holds_ \[\|f\underset{H_{1},F}{*}g\|_{L_{\infty}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}} \|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{q}(\mathbb{R})}. \tag{3.10}\] Proof.: From (2.4), we have \[|(f\underset{H_{1},F}{*}g)|\leq\frac{1}{2\sqrt{2\pi}}\int\limits_{\mathbb{R}} |g(y)|\ (|f(x+y)|+|f(x-y)|+|if(-x-y)|+|if(-x+y)|)dy.\] Using Holder's inequalities for the pair of conjugate exponents \(p\) and \(q\), we deduce that \[|(f\underset{H_{1},F}{*}g)|\leq\frac{1}{2\sqrt{2\pi}}\left(\int_{\mathbb{R}} |g(\xi)|^{p}d\xi\right)^{\frac{1}{p}}\left(\int_{\mathbb{R}}4^{q}|f(t)|^{q}dt \right)^{\frac{1}{q}}=\frac{4}{2\sqrt{2\pi}}|g\|_{L_{p}(\mathbb{R})}\|f\|_{L_{ q}(\mathbb{R})}<\infty.\] This implies that the convolution operator \(f\underset{H_{1},F}{*}g\) is bounded function \(\forall x\in\mathbb{R}\) and the desired conclusion inequality (3.10). Even in the case of the classical Fourier convolution, one has only the Young's convolution (3.9) for the case \(p=q=r=1\). Therefore, the Young-type inequality (3.9) is a specific characteristic of the \(\underset{H_{1},F}{*}\) convolution introduced here. Indeed, suppose that \(f,g\) are functions belonging to \(L_{1}(\mathbb{R})\) space, then the Fourier convolution \((f\underset{F}{*}g)\in L_{1}(\mathbb{R})\) and \(L_{1}\)-norm estimation inequality are as follows \(\|(f\underset{F}{*}g)\|_{L_{1}(\mathbb{R})}\leq\|f\|_{L_{1}(\mathbb{R})}\|g\|_ {L_{1}(\mathbb{R})}\), (see [16]). It shows, on the \(L_{1}(\mathbb{R})\) space, when being equipped with the multiplication defined as Fourier convolution multiplication, we get \(\Big{(}L_{1}(\mathbb{R}),\underset{F}{*}\Big{)}\) as a commutative normed ring [16] for the aforementioned convolution multiplication. However, the ring structure is no longer commutative if the convolution multiplication is replaced by the Hartley-Fouier generalized convolution (2.4), this will be demonstrated in detail in Subsection 4.1 by applying inequality (3.9). ### Young's inequalities for Hartley convolution In a similar way, we obtain Young type theorem for the Hartley convolution (2.5). The results of this part are proved to be similar to those in Subsection 3.1, respectively. 
**Theorem 3.4**.: _Let \(p,q,r>1\) be real numbers satisfying \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2.\) Then the following inequalities hold true for all \(f\in L_{p}(\mathbb{R}),\)\(g\in L_{q}(\mathbb{R})\) and \(h\in L_{r}(\mathbb{R})\)._ \[\bigg{|}\int_{\mathbb{R}}\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g\big{)}(x )\cdot h(x)dx\bigg{|}\leq\sqrt{\frac{2}{\pi}}\|f\|_{L_{p}(\mathbb{R})}\|g\|_{L_ {q}(\mathbb{R})}\|h\|_{L_{r}(\mathbb{R})}, \tag{3.11}\] **Corollary 3.2**.: _If \(p,q,r>1\) and satisfy \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}\), for any \(f\in L_{p}(\mathbb{R})\), \(g\in L_{q}(\mathbb{R})\), then convolution \(\big{(}f\underset{H_{\{\frac{1}{2}\}}}{*}g\big{)}\) belongs to \(L_{r}(\mathbb{R}).\) Then we obtain the following norm estimation_ \[\big{\|}f\underset{H_{\{\frac{1}{2}\}}}{*}g\big{\|}_{L_{r}(\mathbb{R})}\leq \sqrt{\frac{2}{\pi}}\|f\|_{L_{p}(\mathbb{R})}\|g\|_{L_{q}(\mathbb{R})}. \tag{3.12}\] The proof is similar to Corollary 3.1. And \(L_{1}\)-norm estimate of Hartley convolution (2.5) is as follows. \[\left\|f\underset{H\left\{\frac{1}{2}\right\}}{*}g\right\|_{L_{1}(\mathbb{R})} \leq\sqrt{\frac{2}{\pi}}\|f\|_{L_{1}(\mathbb{R})}\|g\|_{L_{1}(\mathbb{R})} \quad\forall f,g\in L_{1}(\mathbb{R}). \tag{3.13}\] The same thing for the case \(r=\infty\) is also obtained with convolution (2.5). ### Weighted \(L_{p}\)-boundedness for the Hartley convolutions By considering the \(L_{p}\) norms in more naturally determined weighted spaces, S. Saitoh [15] gave Fourier convolution norm inequalities in the form \[\left\|\big{(}(F_{1}\rho_{1})\underset{F}{*}(F_{2}\rho_{2})\big{)}\cdot(\rho_ {1}\underset{F}{*}\rho_{2})^{\frac{1}{p}-1}\right\|_{L_{p}(\mathbb{R})}\leq \left\|F_{1}\right\|_{L_{p}(\mathbb{R},|\rho_{1}|)}\left\|F_{2}\right\|_{L_{p} (\mathbb{R},|\rho_{2}|)},\quad p>1,\] where \(\rho_{j}\) are non-vanishing functions, \(F_{j}\in L_{p}(\mathbb{R},|\rho_{j}|),\,j=1,2\). Here, the norm of \(F_{j}\) in the weighted space \(L_{p}(\mathbb{R},\rho_{j})\) is understood as \(\left\|F_{j}\right\|_{L_{p}(\mathbb{R},\rho_{j})}=\bigg{\{}\int \limits_{\mathbb{R}}|F_{j}(x)|^{p}\rho_{j}(x)\mathrm{d}x\bigg{\}}^{\frac{1}{p}}\). This type of inequality is very convenient as many applications require the "same"\(L_{p}\) norms. Saitoh's basic idea in weighted \(L_{p}\) norm inequalities is fairly simple, however, it has been successfully implemented in many application [15]. Saitoh's inequality also can be applied to estimating the solution to a parabolic integro-differential equation [10] modeling a scattered acoustic field. Based on the above aspects, together with using Holder's inequality, and Fubini's theorem, we established another result in weighted space \(L_{p}(\mathbb{R},\rho_{j})\) for Hartley convolution (2.5). Some techniques used in the proof of our theorem come from [23, 27], and we follow closely the strategy of these results. **Theorem 3.5**.: _Suppose that \(\rho_{1},\rho_{2}\) are non-vanishing positive functions such that convolution \((\rho_{1}\underset{H\left\{\frac{1}{2}\right\}}{*}\rho_{2})\), given by (2.5), is well-defined. 
For any functions \(F_{1}\in L_{p}(\mathbb{R},\rho_{1})\) and \(F_{2}\in L_{p}(\mathbb{R},\rho_{2})\) with \(p>1\), the following \(L_{p}(\mathbb{R})\)-weighted inequality holds true_ \[\left\|\big{(}F_{1}\rho_{1}\underset{H\left\{\frac{1}{2}\right\}}{*}F_{2}\rho _{2}\big{)}(\rho_{1}\underset{H\left\{\frac{1}{2}\right\}}{*}\rho_{2})^{\frac {1}{p}-1}\right\|_{L_{p}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|F_{1}\|_{L_{p} (\mathbb{R},\rho_{1})}\big{\|}F_{2}\|_{L_{p}(\mathbb{R},\rho_{2})}. \tag{3.14}\] Proof.: From (2.5), we have \[\begin{split}&\left\|\big{(}F_{1}\rho_{1}\underset{H\left\{ \frac{1}{2}\right\}}{*}F_{2}\rho_{2}\big{)}(\rho_{1}\underset{H\left\{\frac{1} {2}\right\}}{*}\rho_{2})^{\frac{1}{p}-1}\right\|_{L_{p}(\mathbb{R})}^{p}=\int \limits_{\mathbb{R}}\bigg{|}\big{(}F_{1}\rho_{1}\underset{H\left\{\frac{1}{2} \right\}}{*}F_{2}\rho_{2}\big{)}(x)(\rho_{1}\underset{H\left\{\frac{1}{2} \right\}}{*}\rho_{2})^{\frac{1}{p}-1}(x)\bigg{|}^{p}dx\\ &=\frac{1}{2\sqrt{2\pi}}\times\int\limits_{\mathbb{R}}\bigg{\{} \bigg{|}\int\limits_{\mathbb{R}}(F_{1}\rho_{1})(y)\bigg{(}(F_{2}\rho_{2})(x+y) +(F_{2}\rho_{2})(x-y)+(F_{2}\rho_{2})(-x+y)-(F_{2}\rho_{2})(-x-y)\bigg{)}dy \bigg{|}^{p}\\ &\times\bigg{|}\int\limits_{\mathbb{R}}\rho_{1}(y)\bigg{(}\rho_{2} (x+y)+\rho_{2}(x-y)+\rho_{2}(-x+y)-\rho_{2}(-x-y)\bigg{)}dy\bigg{|}^{1-p} \bigg{\}}dx\\ &\leq\frac{1}{2\sqrt{2\pi}}\int\limits_{\mathbb{R}}\big{(} \mathcal{A}_{1}(x,y)+\mathcal{A}_{2}(x,y)+\mathcal{A}_{3}(x,y)+\mathcal{A}_{4 }(x,y)\big{)}\mathscr{T}_{(x,y)}^{1-p}dx,\,\,p>1,\end{split} \tag{3.15}\] where we put \[A_{1}(x,y) =\int\limits_{\mathbb{R}}|(F_{1}\rho_{1})(y)|\,\,|(F_{2}\rho_{2})(x +y)|dy,\] \[A_{2}(x,y) =\int\limits_{\mathbb{R}}|(F_{1}\rho_{1})(y)|\,\,|(F_{2}\rho_{2})( x-y)|dy,\] \[A_{3}(x,y) =\int\limits_{\mathbb{R}}|(F_{1}\rho_{1})(y)|\,\,|(F_{2}\rho_{2})( -x+y)|dy,\] \[A_{4}(x,y) =\int\limits_{\mathbb{R}}|(F_{1}\rho_{1})(y)|\,\,|(F_{2}\rho_{2})( -x-y)|dy.\] And \[\mathscr{T}_{(x,y)}^{1-p}=\bigg{\{}\int\limits_{\mathbb{R}}\rho_{1}(y)\big{(} \rho_{2}(x+y)+\rho_{2}(x-y)+\rho_{2}(-x+y)+\rho_{2}(-x-y)\big{)}dy\bigg{\}}^{1-p}.\] Using Holder's inequalities for the pair of conjugate exponents \(p\) and \(q\) towards operators \(A_{i},i=\overline{1,4}\), we have \[A_{1}(x,y) \leq\biggl{\{}\int_{\mathbb{R}}|F_{1}(y)|^{p}\rho_{1}(y)|F_{2}(x+y )|^{p}\rho_{2}(x+y)dy\biggr{\}}^{\frac{1}{p}}\times\biggl{\{}\int_{\mathbb{R}} \rho_{1}(y)\rho_{2}(x+y)dy\biggr{\}}^{\frac{1}{q}},\] \[A_{2}(x,y) \leq\biggl{\{}\int_{\mathbb{R}}|F_{1}(y)|^{p}\rho_{1}(y)|F_{2}(x- y)|^{p}\rho_{2}(x-y)dy\biggr{\}}^{\frac{1}{p}}\times\biggl{\{}\int_{\mathbb{R}} \rho_{1}(y)\rho_{2}(x-y)dy\biggr{\}}^{\frac{1}{q}},\] \[A_{3}(x,y) \leq\biggl{\{}\int_{\mathbb{R}}|F_{1}(y)|^{p}\rho_{1}(y)|F_{2}(- x+y)|^{p}\rho_{2}(-x+y)dy\biggr{\}}^{\frac{1}{p}}\times\biggl{\{}\int_{\mathbb{R}} \rho_{1}(y)\rho_{2}(-x+y)dy\biggr{\}}^{\frac{1}{q}},\] \[A_{4}(x,y) \leq\biggl{\{}\int_{\mathbb{R}}|F_{1}(y)|^{p}\rho_{1}(y)|F_{2}(- x-y)|^{p}\rho_{2}(-x-y)dy\biggr{\}}^{\frac{1}{p}}\times\biggl{\{}\int_{\mathbb{R}} \rho_{1}(y)\rho_{2}(-x-y)dy\biggr{\}}^{\frac{1}{q}}.\] Therefore, recalling that \(t^{\frac{1}{p}},t^{\frac{1}{q}}\)\((p,q>1)\) are concave functions, together with the above-estimates, we obtain \[\sum_{i=1}^{4}A_{i}(x,y)\leq\biggl{\{}\int_{\mathbb{R}}|(F_{1} \rho_{1})(y)|^{p}\rho_{1}(y)\biggl{(}|F_{2}(x+y)|^{p}\rho_{2}(x+y)+|F_{2}(x-y) |^{p}\rho_{2}(x-y) \tag{3.16}\] \[+|F_{2}(-x+y)|^{p}\rho_{2}(-x+y)+|F_{2}(-x-y)|^{p}\rho_{2}(-x-y) 
\biggr{)}dy\biggr{\}}^{\frac{1}{p}}\times\mathscr{T}_{(x,y)}^{\frac{1}{q}}.\] From (3.15) and (3.16), we deduce that \[\bigl{\|}\bigl{(}F_{1}\rho_{1}\underset{\{\frac{1}{2}\}}{*}F_{2} \rho_{2}\bigr{)}\bigl{(}\rho_{1}\underset{\{\frac{1}{2}\}}{*}\rho_{2}\bigr{)}^ {\frac{1}{p}-1}\bigr{\|}_{L_{p}(\mathbb{R})}^{p}\] \[\leq\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R}}\biggl{\{}\int_{ \mathbb{R}}|F_{1}(y)|^{p}\rho_{1}(y)\biggl{(}|F_{2}(x+y)|^{p}\rho_{2}(x+y)+|F_ {2}(x-y)|^{p}\rho_{2}(x-y)+|F_{2}(-x+y)|^{p}\rho_{2}(-x+y)\] \[+|F_{2}(-x-y)|^{p}\rho_{2}(-x-y)\biggr{)}dy\times\mathscr{T}_{(x, y)}^{\frac{p}{q}+1-p}\biggr{\}}dx.\] Since \(p\) and \(q\) are a conjugate pair \((\frac{1}{p}+\frac{1}{q}=1)\), it implies that \(\frac{p}{q}+1-p=0\), by using the Fubini theorem, we obtain \[\bigl{\|}\bigl{(}F_{1}\rho_{1}\underset{\{\frac{1}{2}\}}{*}F_{2} \rho_{2}\bigr{)}\bigl{(}\rho_{1}\underset{\{\frac{1}{2}\}}{*}\rho_{2}\bigr{)}^ {\frac{1}{p}-1}\bigr{\|}_{L_{p}(\mathbb{R})}^{p} \leq\frac{4}{2\sqrt{2\pi}}\int_{\mathbb{R}^{2}}|F_{1}(y)|^{p} \rho_{1}(y)\ |F_{2}(t)|^{p}\rho_{2}(t)\ dydt\] \[=\sqrt{\frac{2}{\pi}}\biggl{(}\int_{\mathbb{R}}|F_{1}(y)|^{p} \rho_{1}(y)dy\biggr{)}\biggl{(}\int_{\mathbb{R}}|F_{2}(t)|^{p}\rho_{2}(t)dt \biggr{)}.\] From the above equality, we infer the desired conclusion of the theorem. Clearly, if \(\rho_{2}(x)\) is an identical function \(1\) for all \(x\in\mathbb{R}\) and \(0<\rho_{1}(x)\in L_{1}(\mathbb{R})\), then we have \[\bigl{|}\bigl{(}\rho_{1}\underset{\{\frac{1}{2}\}}{*}1)(x)\bigr{|}=\frac{2}{2 \sqrt{2\pi}}\int_{\mathbb{R}}\rho_{1}(x)dx=\frac{1}{\sqrt{2\pi}}\|\rho_{1}\|_{L _{1}(\mathbb{R})}<\infty.\] This means that \((\rho_{1}\underset{\{\frac{1}{2}\}}{*}1)\) is well-defined and consequently \[\bigl{|}\bigl{(}\rho_{1}\underset{\{\frac{1}{2}\}}{*}1)(x)\bigr{|}^{1-\frac{1}{ p}}\leq\biggl{(}\frac{1}{\sqrt{2\pi}}\biggr{)}^{1-\frac{1}{p}}\|\rho_{1}\|_{L_{1}( \mathbb{R})}^{1-\frac{1}{p}}.\] Combining with Theorem 3.5, we arrive at the following corollary **Corollary 3.3**.: _Let \(\rho_{1}\) be a positive function belonging to \(L_{1}(\mathbb{R})\). If \(F_{1},F_{2}\) are functions belonging to \(L_{p}(\mathbb{R},\rho_{1})\) and \(L_{p}(\mathbb{R})\), respectively, with \(p>1\), then we have the following estimate_ \[\|F_{1}\rho_{1}\underset{\{\frac{1}{2}\}}{*}F_{2}\|_{L_{p}(\mathbb{R})}\leq \frac{1}{\pi}(\sqrt{2\pi})^{\frac{1}{p}}\|\rho_{1}\|_{L_{1}(\mathbb{R})}^{1- \frac{1}{p}}\|F_{1}\rho_{1}\|_{L_{p}(\mathbb{R},\rho_{1})}\|F_{2}\|_{L_{p}( \mathbb{R})}. \tag{3.17}\] Note that, the inequalities obtained in Theorem 3.5 and Corollary 3.3 are still true for case \(p=2\). In a similar way as above, we also obtained \(L_{p}\)-boundedness in weighted space \(L_{p}(\mathbb{R},\rho_{j})\) for generalized convolution operator (2.4). ## 4. Some applications ### Structure Nonerd ring To avoid confusion with the notation of exponential function \((e)\), we denote \(\mathcal{U}\) as the unit element for the multiplicative on vector space \(V\). **Definition 3**.: (See [11]). A vector space \(V\) is with a ring structure and a vector norm is called a _normed ring_ if \(\|v\ w\|\leq\|v\|\|w\|\), for all \(v,w\in V\). If \(V\) has a multiplicative unit element, then it also requires that \(\|\mathcal{U}\|=1\). 
**Theorem 4.1**.: _The Banach space \(L_{1}(\mathbb{R})\), when being equipped with the Hartley-Fourier convolution (2.4) multiplication, \(\big{(}L_{1}(\mathbb{R}),\underset{H_{1},F}{*}\big{)}\) becomes a non-commutative normed ring and has no unit element._ Proof.: For any \(f,g\) which belong to \(L_{1}(\mathbb{R})\) space, we get \((f\underset{H_{1},F}{*}g)\in L_{1}(\mathbb{R})\) (refer [21]). Without loss of generality, we may assume that \(\|f\|_{L_{1}(\mathbb{R})}=\sqrt[4]{\frac{2}{\pi}}\int_{\mathbb{R}}|f(x)|dx\), by inequality (3.9), we get \(\big{\|}f\underset{H_{1},F}{*}g\big{\|}_{L_{1}(\mathbb{R})}\leq\|f\|_{L_{1}( \mathbb{R})}\|g\|_{L_{1}(\mathbb{R})},\forall f,g\in L_{1}(\mathbb{R})\). This implies that structure \((L_{1}(\mathbb{R}),\underset{H_{1},F}{*})\) is a normed ring. On the other hand, by the factorization identity (2.6), we conclude that the normed ring here is non-commutative for the aforementioned convolutional multiplication. Next, we need to prove that \((L_{1}(\mathbb{R}),\underset{H_{1},F}{*})\) has no unit element. Suppose that, there exists an element \(\mathcal{U}\in L_{1}(\mathbb{R})\) such that \((\mathcal{U}\underset{H_{1},F}{*}f)(x)=(f\underset{H_{1},F}{*}\mathcal{U})(x )=f(x)\), for any \(f\in L_{1}(\mathbb{R})\). The factorization identity (2.6) implies \(H_{1}(f\underset{H_{1},F}{*}\mathcal{U})(y)=(H_{1}f)(y),\forall f\in L_{1}( \mathbb{R})\). This is equivalent to \((H_{1}f)(y)(F\mathcal{U})(y)=(H_{1}f)(y)\), so we deduce that \[(H_{1}f)\big{[}(F\mathcal{U})(y)-1\big{]}=0,\forall f\in L_{1}(\mathbb{R}). \tag{4.1}\] Now, we choose \(f(x)=\sqrt{\frac{\pi}{2}}e^{-|x|}\in L_{1}(\mathbb{R})\). Obviously \(f(x)\) belongs to the \(L_{1}(\mathbb{R})\) and we obtain \(H_{1}\left(\sqrt{\frac{\pi}{2}}e^{-|x|}\right)(y)=\frac{1}{1+y^{2}}>0\). Then the equality (4.1) holds if and only if \((F\mathcal{U})(y)=1\), this means that \[\mathcal{U}(y)=(F1)(y)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-ixy}dx=\frac {1}{\sqrt{2\pi}}\int_{\mathbb{R}}(\cos xy-i\sin xy)dx\notin L_{1}(\mathbb{R}).\] Therefore, the equality (4.1) is not satisfied \(\forall f\in L_{1}(\mathbb{R})\), implying that the convolution multiplications have no unit element. In a similar way, we also obtain a structure non-commutative normed ring, that has no unit element, when \(L_{1}(\mathbb{R})\) space is equipped with the Hartley convolution (2.5) multiplication. We next turn our attention to the zero-divisor properties of the normed ring, which is Titchmarsh's theorem [22, 11]. The Titchmarsh convolution theorem asserts that if \(f,g\in L_{1}(\mathbb{R})\) vanish on \((-\infty,0)\), and \((f*g)(x)=0\) for \(x\leq T\), then there exist real numbers \(\alpha\) and \(\beta\), such that \(\alpha+\beta=T\), and the functions \(f\) and \(g\) vanish on \([0,\alpha]\) and \([0,\beta]\), respectively. Based on this, we give a new version of the Titchmarsh-type theorem for the Hartley-Fourier generalized convolution operator (2.4). To get started, we need the following definition. **Definition 4**.: _Denote \(L_{1}(\mathbb{R},e^{|x|})\) be the normed space of all measurable functions \(f(x)\) on \(\mathbb{R}\) such that \(\int\limits_{\mathbb{R}}|f(x)|\ e^{|x|}dx<+\infty,\) with the norm defined by \(||f||_{L_{1}(\mathbb{R},e^{|x|})}:=\int\limits_{\mathbb{R}}|f(x)|\ e^{|x|}dx.\)_ **Theorem 4.2** (**The Titchmarsh-type Theorem**).: _Let \(f,g\) be continuous functions on \(\mathbb{R}\) and such that \(f,g\in L_{1}(\mathbb{R},e^{|x|})\). 
It follows that \((f\underset{H_{1},F}{*}g)(x)\equiv 0\) almost everywhere on \(\mathbb{R}\) if and only if \(f(x)\) or \(g(x)\) is vanish almost everywhere on \(\mathbb{R}\)._ Proof.: If \((f\underset{H_{1},F}{*}g)\equiv 0\) almost everywhere on \(\mathbb{R}\), then \(H_{1}(f\underset{H_{1},F}{*}g)(y)\) is vanish almost everywhere on \(\mathbb{R}\). From (2.6), we obtain \[H_{1}(f\underset{H_{1},F}{*}g)(y)=(H_{1}f)(y)(Fg)(y)\equiv 0\text{ almost everywhere on }\ \mathbb{R}. \tag{4.2}\] We prove that \((H_{1}f)(y)\) and \((Fg)(y)\) are analytic functions. Indeed, using (2.1) together with the Lebesgue's dominated convergence theorem, we can exchange orders of the differentiation with the integration, we infer that \[\left|\frac{d^{k}}{dy^{k}}(H_{1}f)(y)\right| \leq\frac{1}{2\sqrt{2\pi}}\int_{\mathbb{R}}\left|f(x)\left(\frac{d ^{k}}{dy^{k}}\text{Cas}\,xy\right)\right|dx=\frac{1}{2\ \sqrt{2\pi}}\int_{\mathbb{R}}\left|f(x)x^{k}\right|\left|\left|\cos\left(xy+ \frac{k\pi}{2}\right)+\sin\left(xy+\frac{k\pi}{2}\right)\right|dx\] \[\leq\frac{1}{2\ \sqrt{\pi}}\int_{\mathbb{R}}\left|f(x)x^{k}\right|dx \leq\frac{1}{2\ \sqrt{\pi}}\int_{\mathbb{R}}e^{-|x|}\frac{|x|^{k}}{k!}k!|f(x)|e^{|x|}dx\] \[\leq\frac{1}{2\ \sqrt{\pi}}\int_{\mathbb{R}}k!|f(x)|e^{|x|}dx\quad \left(\text{since}\ e^{-|x|}|x^{k}|=e^{-|x|}\frac{|x^{k}|}{k!}k!\leq k!\right)\] \[=\frac{1}{2\ \sqrt{\pi}}k!\|f\|_{L_{1}(\mathbb{R},e^{|x|})}=k!C,\ \text{where}\ \ C=\frac{1}{2\ \sqrt{\pi}}\|f\|_{L_{1}(\mathbb{R},e^{|x|})}.\] On the other hand, we evaluate the remainder of Taylor's series expansion in neighborhood of a point \(y_{0}\) as follows \[\left|\frac{1}{k!}\frac{d^{k}(H_{1}f)(\theta+y_{0})}{dy^{k}}(y-y_{0})^{k} \right|\leq\frac{1}{k!}k!C|y-y_{0}|^{k}=C|y-y_{0}|^{k},\ \text{with}\ 0<\theta<1.\] This means that at \(y_{0}\) and in the neighborhood of \(y_{0}\)\((H_{1}f)(y)\) can be expanded to Taylor series and converge on \(|y-y_{0}|<1\), which implies that \((H_{1}f)(y)\) is an analytic function \(\forall y\in\mathbb{R}\). In a similar way, we can also prove \((Fg)(y)\) is an analytic function. Therefore, from (4.2), we infer that \((H_{1}f)(y)\) or \((Fg)(y)\) is vanish almost everywhere on \(\mathbb{R}\). Furthermore, the uniqueness in \(L_{1}(\mathbb{R})\) of Hartley, Fourier transforms (refer [5, 14, 17]), implies that \(f(x)\) or \(g(x)\) are functions of vanishing almost everywhere on \(\mathbb{R}\). Theorem 4.2 can also be stated for convolution (2.5) with a similar proof. ### \(L_{1}\)-solutions of the Fredholm integral equation second kind Now, we consider the Fredholm integral equation of the second kind as follows (see [18]) \[f(x)-\lambda\int_{\mathbb{S}}K(x,y)f(y)d(y)=g(x), \tag{4.3}\] Here the right-hand side \(g(x)\) and the kernel \(K(x,y)\) are some known functions, \(\lambda\) is a known parameter, \(S\) is a piecewise-smooth surface (or line), and \(f\) is an unknown function. Up to now, during the last two decades, the theory of abstract Volterra and Fredholm integral equation has undergone rapid development. To a large extent, this was due to the applications of this theory to problems in mathematical physics, such as viscoelasticity, heat conduction in materials with memory, electrodynamics with memory, and to the need for tools to tackle the problems arising in these fields. Many interesting phenomena are not found with differential equations but observed in specific examples of integral equations (refer [13, 20]). For general kernels \(K(x,y)\), an explicit solution to Eq. 
(4.3) is not known, and approximate solutions have been sought instead. Nevertheless, some authors tried to get explicit analytic solutions to particular cases, for example, in [18] have found analytic solutions to Eq. (4.3) for the kernels form \(K(x,\tau)=K(x-\tau)=(x-\tau)^{\alpha};e^{-a|x-\tau|};\sinh(a(x-\tau))\), and \(aJ_{1}(a(x-\tau)\) with \(J_{1}\) being the Bessel functions. We will consider solvability Eq. (4.3) in \(L_{1}(\mathbb{R})\) for case \(\lambda=-1\), \(S=\mathbb{R}\) with kernel \(K(x,y)=\frac{1}{2\sqrt{2\pi}}[\varphi(x+y)+\varphi(x-y)+\varphi(-x+y)-\varphi( -x-y)]\), and \(g(x)=(\varphi\underset{H_{1},F}{*}\xi)(x)\). We can be rewrite (4.3) in the convolution form as follows \[f(x)+\big{(}f\underset{H_{1}\underset{\{2\}}{*}}{*}\varphi\big{)}(x)=(\varphi \underset{H_{1},F}{*}\xi)(x),\quad x\in\mathbb{R}. \tag{4.4}\] **Theorem 4.3**.: _Suppose that \(\varphi\) and \(\xi\) are given functions belonging to \(L_{1}(\mathbb{R})\) and satisfy the condition \(1+(H_{1}\varphi)(y)\neq 0\) for any \(y\in\mathbb{R}\). Then (4.4) has the unique solution in \(L_{1}(\mathbb{R})\) which can be represented in the form \(f(x)=(\ell\underset{H_{1},F}{*}\xi)(x)\). Here \(\ell\in L_{1}(\mathbb{R})\) is defined by \((H_{1}\ell)(y)=\frac{(H_{1}\varphi)(y)}{1+(H_{1}\varphi)(y)}\). Furthermore, the following \(L_{1}\)-norm estimate holds \(\|f\|_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|\ell\|_{L_{1}(\mathbb{R})} \|\xi\|_{L_{1}(\mathbb{R})}\)._ Proof.: Applying the \((H_{1})\) transform to both sides of (4.4), we deduce that \[(H_{1}f)(y)+H_{1}(f\underset{H_{1}\underset{\{2\}}{*}}{*}\varphi)(y)=H_{1} \big{(}\varphi\underset{H_{1},F}{*}\xi\big{)}(y).\] Using the factorization properties (2.7) and (2.6), together with the condition \(1+(H_{1}\varphi)(y)\neq 0\) for any \(y\in\mathbb{R}\), we obtain \((H_{1}f)(y)[1+(H_{1}\varphi)(y)]=(H_{1}\varphi)(y)(F\xi)(y),\quad y\in\mathbb{R}\) This is equivalent to \(\frac{(H_{1}\varphi)(y)}{1+(H_{1}\varphi)(y)}(F\xi)(y).\) By Lemma (2.1), the existence of a function \(\ell\in L_{1}(\mathbb{R})\) such that \((H_{1}\ell)(y)=\frac{(H_{1}\varphi)(y)}{1+(H_{1}\varphi)(y)}\). This means that \[(H_{1}f)(y)=(H_{1}\ell)(y)(F\xi)(y)=H_{1}(\ell\underset{H_{1},F}{*}\xi)(y).\] Therefore, \(f(x)=\big{(}\ell\underset{H_{1},F}{*}\xi\big{)}(x)\) almost everywhere for any \(x\in\mathbb{R}\). Moreover, since \(\ell,\xi\) are functions belonging \(L_{1}(\mathbb{R})\), we deduce that \(f\in L_{1}(\mathbb{R})\) (see [21, 26]). Applying the inequality (3.9) we obtain norm estimation of the solution on \(L_{1}\) space as follows \(\|f\|_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|\ell\|_{L_{1}(\mathbb{R})}\| \xi\|_{L_{1}(\mathbb{R})}\). **Remark 2**.: _If \(p,q,r>1\) satisfy the condition \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}\), applying inequality (3.8), for any \(f\in L_{r}(\mathbb{R})\), \(\ell\in L_{p}(\mathbb{R})\), and \(\xi\in L_{q}(\mathbb{R})\), we get the following solution estimate \(\|f\|_{L_{r}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|\ell\|_{L_{p}(\mathbb{R})}\| \xi\|_{L_{q}(\mathbb{R})}\)._ _Furthermore, in case \(r=\infty\), by applying (3.10), we obtain \(L_{\infty}\)-boundedness of solution for Eq.(4.4) as follows \(\|f\|_{L_{\infty}(\mathbb{R})}\)\(\leq\sqrt{\frac{2}{\pi}}\|\ell\|_{L_{p}(\mathbb{R})}\|\xi\|_{L_{q}(\mathbb{R})}\)._ ### \(L_{1}\)-solutions for Barbashin's equations We are concerned with integro-differential equation of the form \[\frac{\partial f(x,y)}{\partial y}=c(x,y)f(x,y)+\int_{a}^{b}K(x,y,t)f(x,t)dt+ \varphi(x,y). 
\tag{4.5}\] Here \(c:J\times[a,b]\to\mathbb{R},K:J\times[a,b]\times[a,b]\to\mathbb{R}\), and mapping \(\varphi:J\times[a,b]\to\mathbb{R}\) are given functions, where \(J\) is a bounded or unbounded interval, the function \(f\) is unknown. The equation (4.5) was first studied by E.A. Barbashin [3] and his pupils. For this reason, this is nowadays called integro-differential equation of Barbashin type or simply the Barbashin equation. The (4.5) has been applied to many fields such as mathematical physics, radiation propagation, mathematical biology and transport problems, e.g., for more details refer [2]. One of the characteristics of Barbashin equation is that studying solvability of the equation is heavily dependent on the kernel \(K(x,s,\rho)\) of the equation. In many cases, we can reduce equation (4.5) to the form of an ordinary differential equation and use the Cauchy integral operator or evolution operator to study it when the kernel does not depend on \(x\). In some other cases, we need to use the partial integral operator to study this equation (see [2]). However, in the general case of \(K(x,s,\rho)\) as an arbitrary kernel, the problem of finding a solution for Barbashin equation remains open. In some other cases, we need to use the partial integral operator to study this equation (see [2]). On the other hand, if we view \(A\) as the operator defined by \(A:=\partial/\partial y-c(x,y)\mathcal{I}\), where \(\mathcal{I}\) is the identity operator, then Eq. (4.5) is written in the following form \(\mathcal{A}f(x,y)=\int\limits_{a}^{b}K(x,y,t)f(x,t)dt+\varphi(x,y)\). We will consider solvability Eq. (4.5) in \(L_{1}(\mathbb{R})\) for case operator \(\mathcal{A}f(x,y)=\Big{(}1-\frac{d^{2}}{dx^{2}}\Big{)}\left(f\underset{H_{1}^{ \{1\}}}{*}g\right)\)(\(x\)) with kernel \(K(x,y,t)=-\frac{1}{2\sqrt{2\pi}}[h(x+y)+h(x-y)+h(-x+y)-h(-x-y)]\), domain \((a,b)=\mathbb{R}\), and \(\varphi(x,y)=\Big{(}h\underset{H_{1}^{\{1\}}}{*}\xi\Big{)}(x)\). Then original equation (4.5) becomes the linear equation and can be rewritten in the convolution form \[\left(1-\frac{d^{2}}{dx^{2}}\right)\big{(}f\underset{H_{1}^{\{1\}}}{*}g\big{)} (x)+\big{(}f\underset{H_{1}^{\{1\}}}{*}h\big{)}(x)=\big{(}h\underset{H_{1}^{ \{1\}}}{*}\xi\big{)}(x), \tag{4.6}\] where \(g\), \(h\), \(\xi\) are given functions, convolution \(\underset{\{H_{1}^{\{1\}}}}{*}\) is defined by (2.5), and \(f\) is the function to find. **Theorem 4.4**.: _Let \(\xi\in L_{1}(\mathbb{R})\), \(g(x)=\sqrt{\frac{\pi}{2}}e^{-|x|}\), and \(h\) be functions belonging to \(L_{1}(\mathbb{R})\) such that \(1+\big{(}H_{\left\{\frac{1}{2}\right\}}h\big{)}(y)\neq 0\), \(\forall y\in\mathbb{R}\). Then Eq. (4.6) has the unique solution \(f(x)=\Big{(}\eta\underset{H_{1}^{\{1\}}}{*}\xi\Big{)}(x)\in L_{1}(\mathbb{R})\), where \(\eta\in L_{1}(\mathbb{R})\) defined by \(\big{(}H_{\left\{\frac{1}{2}\right\}}\eta\big{)}(y)=\frac{\big{(}H_{\left\{ \frac{1}{2}\right\}}h\big{)}(y)}{1+\big{(}H_{\left\{\frac{1}{2}\right\}}h\big{)} (y)}\) for all \(y\in\mathbb{R}\). Moreover, the estimation of \(L_{1}\)-norm is as follows \(\|f\|_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|\eta\|_{L_{1}(\mathbb{R})}\| \xi\|_{L_{1}(\mathbb{R})}\)._ Proof.: It is easy to check \(g(x)=\sqrt{\frac{2}{\pi}}e^{-|x|}\) actually satisfies the assumptions of Lemma 2.2. 
By using (2.10), we obtain \(H_{\left\{\frac{1}{2}\right\}}\bigg{\{}(1-\frac{d^{2}}{dx^{2}})\bigg{(}f \underset{H_{1}^{\{1\}}}{*}\sqrt{\frac{2}{\pi}}e^{-|t|}\bigg{)}(x)\bigg{\}}(y )=(1+y^{2})H_{\left\{\frac{1}{2}\right\}}\big{(}f\underset{H_{1}^{\{2\}}}{*} \sqrt{\frac{\pi}{2}}e^{-|t|}\big{)}(y)\). Applying the \(H_{\left\{\frac{1}{2}\right\}}\) transform to both sides of (4.6) combining the factorization equality (2.7), we get \[(1+y^{2})H_{\left\{\frac{1}{2}\right\}}\left(f\underset{H_{\left\{\frac{1}{2} \right\}}}{*}\sqrt{\frac{\pi}{2}}e^{-|t|}\right)(y)+H_{\left\{\frac{1}{2} \right\}}\big{(}f\underset{H_{\left\{\frac{1}{2}\right\}}}{*}h\big{)}(y)=H_{ \left\{\frac{1}{2}\right\}}\big{(}h\underset{H_{\left\{\frac{1}{2}\right\}}}{* }\xi\big{)}(y),\forall y>0,\] equivalent to \((1+y^{2})\big{(}H_{\left\{\frac{1}{2}\right\}}f\big{)}(y)H_{\left\{\frac{1}{2} \right\}}\left(\sqrt{\frac{\pi}{2}}e^{-|t|}\right)(y)+\big{(}H_{\left\{\frac{1} {2}\right\}}f\big{)}(y)\big{(}H_{\left\{\frac{1}{2}\right\}}h\big{)}(y)=\big{(} H_{\left\{\frac{1}{2}\right\}}h\big{)}(y)\big{(}H_{\left\{\frac{1}{2}\right\}}\xi \big{)}(y).\) Since \(H_{\left\{\frac{1}{2}\right\}}\Big{(}\sqrt{\frac{\pi}{2}}e^{-|t|}\Big{)}(y)= \frac{1}{1+y^{2}}\) is finite (see [4], 1.4.1, page 23), under the condition \(1+\big{(}H_{\left\{\frac{1}{2}\right\}}h\big{)}(y)\neq 0,\)\(\forall y\in\mathbb{R},\) we deduce that \(\big{(}H_{\left\{\frac{1}{2}\right\}}f\big{)}(y)=\frac{\big{(}H_{\left\{\frac{1} {2}\right\}}h\big{)}(y)}{1+\big{(}H_{\left\{\frac{1}{2}\right\}}h\big{)}(y)} \big{(}H_{\left\{\frac{1}{2}\right\}}\xi\big{)}(y).\) Following Lemma 2.1, there exists a function \(\eta\) which belongs to \(L_{1}(\mathbb{R}_{+})\) such that \[\big{(}H_{\left\{\frac{1}{2}\right\}}\eta\big{)}(y)=\frac{\big{(}H_{\left\{ \frac{1}{2}\right\}}h\big{)}(y)}{1+\big{(}H_{\left\{\frac{1}{2}\right\}}h \big{)}(y)},\quad\forall y\in\mathbb{R}.\] Thus, we infer that \(\big{(}H_{\left\{\frac{1}{2}\right\}}f\big{)}(y)=\big{(}H_{\left\{\frac{1}{2} \right\}}\eta\big{)}(y)\big{(}H_{\left\{\frac{1}{2}\right\}}\xi\big{)}(y)=H_{ \left\{\frac{1}{2}\right\}}\big{(}\eta\underset{H_{\left\{\frac{1}{2}\right\}} }{*}\xi\big{)}(y),\) implies that \(f(x)=\big{(}\eta\underset{H_{\left\{\frac{1}{2}\right\}}}{*}\xi\big{)}(x)\) almost everywhere on \(\mathbb{R}.\) Since \(\eta,\xi\in L_{1}(\mathbb{R}),\) then \(f\) belongs to \(L_{1}(\mathbb{R})\) (see [21, 26]). Applying the inequality (3.13), we obtain the boundedness in \(L_{1}\) of solution\(\|f\|_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|\eta\|_{L_{1}(\mathbb{R})}\|\xi \|_{L_{1}(\mathbb{R})}\). 
**Remark 3**.: _Let \(p,q,r>1\) such that \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}\) and if \(f\in L_{r}(\mathbb{R})\), \(\eta\in L_{p}(\mathbb{R})\), and \(\xi\in L_{q}(\mathbb{R}).\) Applying the inequality (3.12), we get the following solution estimate \(\|f\|_{L_{r}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\|\eta\|_{L_{p}(\mathbb{R})}\| \xi\|_{L_{q}(\mathbb{R})}.\)_ **Remark 4**.: _If there is a case where \(\eta\) is the product of two functions (\(\eta=\varphi.\rho\)), \(\rho\) is a positive function belonging to \(L_{1}(\mathbb{R})\), and \(\varphi\in L_{1}(\mathbb{R},\rho)\cap L_{p}(\mathbb{R},\rho)\), with \(p>1.\) For any function \(\xi\in L_{1}(\mathbb{R})\cap L_{p}(\mathbb{R})\), apply (3.17), then we have the following estimate \(\|f\|_{L_{p}(\mathbb{R})}\leq\frac{1}{\pi}(\sqrt{2\pi})^{\frac{1}{p}}\|\rho\|_{ L_{1}(\mathbb{R})}^{1-\frac{1}{p}}\|\varphi\cdot\xi\|_{L_{p}(\mathbb{R},\rho)}\|\xi\|_{L_{p}( \mathbb{R})}.\)_ ### The Cauchy-type initial value problem We consider the following Cauchy-type problem \[\left\{\begin{array}{l}f(x)-f^{\prime\prime}(x)+\big{(}1-\frac{d^{2}}{dz^{2} }\big{)}\big{(}f\underset{H_{1},F}{*}g\big{)}(x)=\big{(}h\underset{H_{1},F}{*} g\big{)}(x),\\ \lim_{|x|\to\infty}f(x)=\lim_{|x|\to\infty}f^{\prime}(x)=0,\end{array}\right. \tag{4.7}\] where \(g\) is a given function belonging to \(g\in L_{1}(\mathbb{R}),\)\(h\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R})\) such as \(h^{\prime},h^{\prime\prime}\in L_{1}(\mathbb{R}),\) and \(f\) is the function to find. **Theorem 4.5**.: _With the assumption of the function \(h,\)\(g\) us given as above, so that the condition \(1+(Fg)(y)\neq 0\) for any \(y\in\mathbb{R}.\) Then problem 4.7 has a unique solution in \(C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R})\) and is represented by_ \[f(x)=\left(\left(h\underset{H_{1},F}{*}\xi\right)\underset{H_{1},F}{*}\sqrt{ \frac{\pi}{2}}e^{-|t|}\right)(x).\] _Moreover, we have \(\|f\|_{L_{1}(\mathbb{R})}\leq 2\sqrt{\frac{2}{\pi}}\|h\|_{L_{1}(\mathbb{R})}\|\xi\|_{L_{1}( \mathbb{R})},\) where \(\xi\) is a function belonging to \(L_{1}(\mathbb{R})\) determined by \((F\xi)(y)=\frac{(Fg)(y)}{1+(Fg)(y)},\) and \(\underset{H_{1},F}{*}\) defined by (2.4)._ Proof.: Without loss of generality, we may assume that \(f\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R})\) is a satisfied function condition of the Problem (4.7). Applying Lemma 2.2 to the function \(f\) instead of the role of \(g,\) by (2.10), we obtain \[H_{1}\left(\left(1-\frac{d^{2}}{dx^{2}}\right)(f\underset{H_{1},F}{*}g)(x) \right)(y)=(1+y^{2})H_{1}(f\underset{H_{1},F}{*}g)(y),\quad\forall g\in L_{1}( \mathbb{R}),\text{ }y\in\mathbb{R}. \tag{4.8}\] From (2.11), we deduce that \[\left(H_{1}f^{\prime\prime}(x)\right)(y)=-y^{2}(H_{1}f)(y). \tag{4.9}\] Applying the \((H_{1})\) transform to both sides of Eq. (4.7), combining (4.8), (4.9), and using the factorization property (2.6), we infer \((H_{1}f)(y)+y^{2}(H_{1}f)(y)+(1+y^{2})H_{1}(f\underset{H_{1},F}{*}g)(y)=H_{1}(f \underset{H_{1},F}{*}g)(y),\forall y\in\mathbb{R}.\) This equivalent to \((1+y^{2})(H_{1}f)(y)[1+(Fg)(y)]=(H_{1}h)(y)(Fg)(y),\) implies \[(H_{1}f)(y)=\frac{1}{1+y^{2}}\frac{(Fg)(y)}{1+(Fg)(y)}(H_{1}h)(y). 
\tag{4.10}\] By Wiener-Levy's Theorem [11] for the Fourier transform, we deduce the existence of a function \(\xi\) belonging to \(L_{1}(\mathbb{R})\) such that \((F\xi)(y)=\frac{(Fg)(y)}{1+(Fg)(y)}.\) On the other hand, \(F\left(\sqrt{\frac{\pi}{2}}e^{-|t|}\right)(y)=\frac{1}{1+y^{2}}\)[4] is finite, and coupling (4.10) with (2.6), we have \[(H_{1}f)(y)=H_{1}\left((h\underset{H_{1},F}{*}\xi)\underset{H_{1},F}{*}\sqrt{ \frac{\pi}{2}}e^{-|t|}\right)(y).\] Therefore \(f(x)=\left((h\underset{H_{1},F}{*}\xi)\underset{H_{1},F}{*}\sqrt{\frac{\pi}{2 }}e^{-|t|}\right)(x)\) almost everywhere for any \(x\in\mathbb{R}.\) Moreover, since \(h,\xi,\) and \(\sqrt{\frac{\pi}{2}}e^{-|t|}\) are functions belonging to \(L_{1}(\mathbb{R}),\) by [26, 21] lead to \(f\in L_{1}(\mathbb{R}).\) We putting \(G(x)=(h\underset{H_{1},F}{*}\xi)(x)\in L_{1}(\mathbb{R}).\) Thus \(h\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R}),\) and \(h^{\prime},h^{\prime\prime},\xi\in L_{1}(\mathbb{R}),\) by (2.4), we infer \(G\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R}).\) This means that \(f\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R}).\) Now, we prove solution \(f(x)\) of Problem (4.7) actually satisfies the initial condition. Indeed, by setting \(G(x),\) we obtain \[f(x) =\left(G\underset{H_{1},F}{*}\sqrt{\frac{\pi}{2}}e^{-|t|}\right) (x)=\frac{\sqrt{\pi}}{2\sqrt{2}}\bigg{\{}\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R }}e^{-|y|}G(x+y)dy+\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-|y|}G(x-y)dy\] \[+\frac{i}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-|y|}G(-x-y)dy-\frac{i} {\sqrt{2\pi}}\int_{\mathbb{R}}e^{-|y|}G(-x+y)dy\bigg{\}} \tag{4.11}\] \[=2\sqrt{\frac{\pi}{2}}\big{\{}J_{1}(x)+J_{2}(x)+iJ_{3}(x)-iJ_{4}( x)\big{\}},\quad\forall x\in\mathbb{R}.\] Observe the integral \(J_{1}\) then \(J_{1}(x)=\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}}e^{-|y|}G(x+y)dy=-\frac{ 1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}}e^{-|t|}G(x-t)dt=-\big{(}e^{-|t|} \underset{F}{*}G\big{)}(x).\) We apply the Wiener-Tauberian's theorem [28] to the function \(\omega(t)=e^{-|t|}\) with bounded \(\forall t\in\mathbb{R},\) and \(\Phi=e^{-|x-y|}\in L_{1}(\mathbb{R}).\) It is easy to see \(F\left(e^{-|x-y|}\right)=\sqrt{\frac{2}{\pi}}\frac{1}{1+(x-y)^{2}}\neq 0, \forall x,y\in\mathbb{R}.\) On the other hand, \[\big{(}\omega\underset{F}{*}\Phi\big{)}(x)=\frac{1}{\sqrt{2\pi}}\int_{ \mathbb{R}}e^{(-|y|-|x-y|)}dy=\left\{\begin{array}{ll}\frac{1}{\sqrt{2\pi}}( x+1)e^{-x}&\quad\text{if $x>0,$}\\ \frac{1}{\sqrt{2\pi}}(1-x)e^{x}&\quad\text{if $x<0,$}\end{array}\right.\quad \text{tends to $0$ when $|x|\to\infty$}.\] By Wiener-Tauberian's theorem, we deduce that \(\lim\limits_{|x|\to\infty}J_{1}(x)=-\lim\limits_{|x|\to\infty}\big{(}e^{-|t|} \underset{F}{*}G\big{)}(x)=0,\forall G\in L_{1}(\mathbb{R}).\) By the same argument as above, we obtain \(\lim\limits_{|x|\to\infty}J_{i}(x)=0,\) with \(i=2,3,4.\) From (4.11) we get \(\lim_{|x|\to\infty}f(x)=0.\) Furthermore, we also deduce \(f^{\prime}(x)=2\sqrt{\frac{2}{2}}\left\{J_{1}^{\prime}(x)-J_{2}^{\prime}(x)-iJ_ {3}^{\prime}(x)+iJ_{4}^{\prime}(x)\right\},\) where for each integral \(J_{i}^{\prime}(x),\) with \(i=\overline{1,4}\) can be represented by the Fourier convolution of the form \(\big{(}e^{-|t|}\underset{F}{*}G^{\prime}\big{)},\)\(\forall G^{\prime}\in L_{1}(\mathbb{R}).\) Hence, we apply the Wiener-Tauberian's theorem again [28](also refer [14]) to the function \(G^{\prime},\) with \(\omega,\)\(\Phi\) as above implies that \(\lim\limits_{|x|\to\infty}J_{i}^{\prime}(x)=0,\) with \(i=\overline{1,4}.\) This yields 
\(\lim\limits_{|x|\to\infty}f^{\prime}(x)=0,\) and we can conclude that \(f(x)\) satisfies the initial conditions of the Problem (4.7). Applying the equality (3.9), we obtain the boundedness in \(L_{1}\) of solution as follows \[\|f\|_{L_{1}(\mathbb{R})} =\big{\|}(h\underset{H_{1},F}{*}\xi)\underset{H_{1},F}{*}\sqrt{ \frac{\pi}{2}}e^{-|t|}\big{\|}_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}} \big{\|}h\underset{H_{1},F}{*}\xi\big{\|}_{L_{1}(\mathbb{R})}\|\sqrt{\frac{\pi} {2}}e^{-|t|}\big{\|}_{L_{1}(\mathbb{R})}\] \[\leq\sqrt{\frac{2}{\pi}}\|h\|_{L_{1}(\mathbb{R})}\|\xi\|_{L_{1}( \mathbb{R})}\|e^{-|t|}\|_{L_{1}(\mathbb{R})}=2\sqrt{\frac{2}{\pi}}\|h\|_{L_{1}( \mathbb{R})}\|\xi\|_{L_{1}(\mathbb{R})}.\] **Remark 5**.: _Assume that \(p,q,r,s>1\) and satisfy the condition \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2+\frac{1}{s}.\) If \(f\in L_{s}(\mathbb{R})\), \(h\in L_{p}(\mathbb{R})\), and \(\xi\in L_{q}(\mathbb{R})\), based on (3.8), we get the following solution of Problem (4.7) estimate_ \[\|f\|_{L_{s}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\left(\frac{2}{r}\right)^{\frac{ 1}{p}}\|h\|_{L_{p}(\mathbb{R})}\|\xi\|_{L_{q}(\mathbb{R})}.\] ## 5. Examples In the last section of article, we give numerical examples to illustrate the calculation to ensure the validity and applicability of the results in Section 4. First, we get started with an illustrative example of Theorem 4.3 and Remark 2. **Example 5.1**.: _Let \(\varphi(x)=\xi(x)=\sqrt{\frac{\pi}{2}}e^{-|x|}.\) It is easy to check \(\varphi(x)\) and \(\xi\) are functions belonging to \(L_{1}(\mathbb{R}).\) According to formula 1.4.1, p. 23 in [4], we have \(H_{1}\left(\sqrt{\frac{\pi}{2}}e^{-|x|}\right)(y)=\frac{1}{1+y^{2}}.\) Therefore \(1+(H_{1}\varphi)(y)\neq 0,\)\(\forall y\in\mathbb{R}\) and \(\frac{(H_{1}\varphi)(y)}{1+(H_{1}\varphi)(y)}=\frac{1}{2+y^{2}}=(H_{1}\ell)(y),\) by Lemma 2.1, we infer there existence of a function \(\ell\in L_{1}(\mathbb{R})\) is satisfied. In this case \(\ell(x)=\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|x|}\in L_{1}(\mathbb{R}).\) We obtain the solution in \(L_{1}(\mathbb{R})\) of Eq. (4.4) as follows \(f(x)=\left(\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|t|}\right.\underset{H_{1},F}{*} \sqrt{\frac{\pi}{2}}e^{-|t|}\right)(x).\) We get the \(L_{1}\)-norm estimation of \(f(x)\) as follows \(\|f\|_{L_{1}(\mathbb{R})}\leq\sqrt{\frac{\pi}{2}}\|\frac{\sqrt{\pi}}{2}e^{- \sqrt{2}|t|}\|_{L_{1}(\mathbb{R})}\|\sqrt{\frac{\pi}{2}}e^{-|t|}\|_{L_{1}( \mathbb{R})}=\sqrt{2\pi}.\) Moreover, if there exist real numbers \(p,q,r>1\) and satisfy \(\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}\), then_ \[\|f\|_{L_{r}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\big{\|}\frac{\sqrt{\pi}}{2}e ^{-\sqrt{2}|t|}\big{\|}_{L_{p}(\mathbb{R})}\big{\|}\sqrt{\frac{\pi}{2}}e^{-|t| }\big{\|}_{L_{q}(\mathbb{R})}=\frac{\sqrt{\pi}}{2}\left(\frac{\sqrt{2}}{p} \right)^{\frac{1}{p}}\left(\frac{2}{q}\right)^{\frac{1}{q}}.\] _For case \(r=\infty\), from inequality (3.10), we obtain \(\|f\|_{L_{\infty}(\mathbb{R})}\leq\frac{\sqrt{\pi}}{2}\left(\frac{\sqrt{2}}{p} \right)^{\frac{1}{p}}\left(\frac{2}{q}\right)^{\frac{1}{q}}.\)_ Next example illustrates the Theorem 4.4. **Example 5.2**.: _Choose functions \(h(x)=\xi(x)=\sqrt{\frac{\pi}{2}}e^{-|x|}.\) Obviously, these functions are belonging to \(L_{1}(\mathbb{R}).\) According to formula 1.4.1, p. 
23 in [4], we deduce that \(H_{\left\{\frac{1}{2}\right\}}\left(\sqrt{\frac{\pi}{2}}e^{-|x|}\right)(y)= \frac{1}{1+y^{2}}.\) Thus \(1+H_{\left\{\frac{1}{2}\right\}}\left(\sqrt{\frac{\pi}{2}}e^{-|x|}\right)\neq 0,\forall y\in\mathbb{R}\) and \(\frac{(H_{\left\{\frac{1}{2}\right\}}^{h})^{(y)}(y)}{1+(H_{\left\{\frac{1}{2} \right\}}^{h})^{(y)}}=\frac{1}{2+y^{2}}=(H_{\left\{\frac{1}{2}\right\}}\eta)(y),\) by Lemma 2.1, we infer there existence of a function \(\eta=\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|x|}\in L_{1}(\mathbb{R})\) is satisfied. Therefore, Eq. (4.6) has the unique solution \(f(x)=\left(\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|t|}\underset{H_{\left\{\frac{1}{2 }\right\}}}{*}\sqrt{\frac{\pi}{2}}e^{-|t|}\right)(x)\) belonging to \(L_{1}(\mathbb{R}).\) The estimate of \(f(x)\) is similar in Example 5.1 means \(\|f\|_{L_{1}(\mathbb{R})}\leq\sqrt{2\pi}\) and \(\|f\|_{L_{r}(\mathbb{R})}\leq\frac{\sqrt{\pi}}{2}\left(\frac{\sqrt{\pi}}{p} \right)^{\frac{1}{p}}\left(\frac{2}{q}\right)^{\frac{1}{q}}.\) Now we rewrite function \(\eta\) in the following form:_ \[\eta=\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|t|}=\frac{\sqrt{\pi}}{2}e^{-\frac{\sqrt{ 2}}{2}|t|}.e^{-\frac{\sqrt{2}}{2}|t|}.\] _We set \(\varphi=\frac{\sqrt{\pi}}{2}e^{-\frac{\sqrt{2}}{2}|t|}\) and weighted function \(\rho=e^{-\frac{\sqrt{2}}{2}|t|}.\) Clearly, in this case, \(\varphi\in L_{1}(\mathbb{R},\rho)\cap L_{p}(\mathbb{R},\rho)\) and \(\rho\) is a positive function belonging to \(L_{1}(\mathbb{R}).\) According to Remark 4, we obtain_ \[\|f\|_{L_{p}(\mathbb{R})} \leq\frac{1}{\pi}(\sqrt{2\pi})^{\frac{1}{p}}\|e^{-\frac{\sqrt{2}} {2}|t|}\|_{L_{1}(\mathbb{R})}^{1-\frac{1}{p}}\|\frac{\sqrt{\pi}}{2}e^{-\frac{ \sqrt{2}}{2}|t|}\|_{L_{p}(\mathbb{R},\rho)}\|\frac{\sqrt{\pi}}{2}e^{-\frac{ \sqrt{2}}{2}|t|}\|_{L_{p}(\mathbb{R})},\ \forall p>1.\] \[\leq\sqrt{2}\Big{(}\frac{2\sqrt{2\pi}}{p(p+1)}\Big{)}^{\frac{1}{p}}.\] We end the article with an example for the Theorem 4.5. 
**Example 5.3**.: _A simple example is provided by \(g=h=\sqrt{\frac{\pi}{2}}e^{-|x|}.\) It is not difficult to check that \(g\) belongs to \(L_{1}(\mathbb{R}),\) and \(h\in C^{2}(\mathbb{R})\cap L_{1}(\mathbb{R}).\) Obviously \(h^{\prime},h^{\prime\prime}\in L_{1}(\mathbb{R}),\) following [4], we have \(F\big{(}\sqrt{\frac{\pi}{2}}e^{-|x|}\big{)}(y)=\frac{1}{1+y^{2}},\) therefore \(1+(Fg)(y)\neq 0,\)\(\forall y\in\mathbb{R}.\) Moreover \((F\xi)(y)=\frac{(Fg)(y)}{1+(Fg)(y)}=\frac{1}{2+y^{2}},\) so there is the function \(\xi(x)=\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|x|}\in L_{1}(\mathbb{R}).\) In this case, the solution of Problem (4.7) is_ \[f(x)=\left(\left(\sqrt{\frac{\pi}{2}}e^{-|t|}\underset{H_{1},F}{*}\frac{\sqrt{ \pi}}{2}e^{-\sqrt{2}|t|}\right)\underset{H_{1},F}{*}F\sqrt{\frac{\pi}{2}}e^{-|t| }\right)(x),\] _and boundedness in \(L_{1}\) is as follows \(\|f\|_{L_{1}(\mathbb{R})}\leq 2\sqrt{\frac{\pi}{2}}\big{\|}\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|t|} \big{\|}_{L_{1}(\mathbb{R})}\big{\|}\sqrt{\frac{\pi}{2}}e^{-|t|}\big{\|}_{L_{1 }(\mathbb{R})}=2\sqrt{2\pi}.\) Furthermore, if we assume that \(p,q,r,s>1\) satisfy condition \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2+\frac{1}{s}\), we obtain_ \[\|f\|_{L_{s}(\mathbb{R})}\leq\sqrt{\frac{2}{\pi}}\left(\frac{2}{r}\right)^{ \frac{1}{r}}\left\|\sqrt{\frac{\pi}{2}}e^{-|t|}\right\|_{L_{p}(\mathbb{R})} \left\|\frac{\sqrt{\pi}}{2}e^{-\sqrt{2}|t|}\right\|_{L_{q}(\mathbb{R})}=\frac{ \sqrt{\pi}}{2}\left(\frac{2}{r}\right)^{\frac{1}{r}}\left(\frac{2}{p}\right)^{ \frac{1}{p}}\left(\frac{\sqrt{2}}{q}\right)^{\frac{1}{q}}.\] **Acknowledgements.** The author would like to thank the referee for carefully reading the manuscript and giving valuable comments to improve the article. ## Data Availability My manuscript has no associated data. ## Funding Statement No funding was received for conducting this study. ## Conflict of Interest The author has no conflicts of interest to declare that are relevant to the content of this article. ## Orcid _Trinh Tuan_ [https://orcid.org/0000-0002-0376-0238](https://orcid.org/0000-0002-0376-0238)
2305.13536
Subspace-Configurable Networks
While the deployment of deep learning models on edge devices is increasing, these models often lack robustness when faced with dynamic changes in sensed data. This can be attributed to sensor drift, or variations in the data compared to what was used during offline training due to factors such as specific sensor placement or naturally changing sensing conditions. Hence, achieving the desired robustness necessitates the utilization of either an invariant architecture or specialized training approaches, like data augmentation techniques. Alternatively, input transformations can be treated as a domain shift problem, and solved by post-deployment model adaptation. In this paper, we train a parameterized subspace of configurable networks, where an optimal network for a particular parameter setting is part of this subspace. The obtained subspace is low-dimensional and has a surprisingly simple structure even for complex, non-invertible transformations of the input, leading to an exceptionally high efficiency of subspace-configurable networks (SCNs) when limited storage and computing resources are at stake.
Dong Wang, Olga Saukh, Xiaoxi He, Lothar Thiele
2023-05-22T23:19:45Z
http://arxiv.org/abs/2305.13536v3
# Representing Input Transformations ###### Abstract Deep models lack robustness to simple input transformations such as rotation, scaling, and translation, unless they feature a particular invariant architecture or undergo specific training, e.g., learning the desired robustness from data augmentations. Alternatively, input transformations can be treated as a domain shift problem, and solved by post-deployment model adaptation. Although a large number of methods deal with transformed inputs, the fundamental relation between input transformations and optimal model weights is unknown. In this paper, we put forward the _configuration subspace hypothesis_ that model weights optimal for parameterized continuous transformations can reside in low-dimensional linear subspaces. We introduce _subspace-configurable networks_ to learn these subspaces and observe their structure and surprisingly low dimensionality on all tested transformations, datasets and architectures from computer vision and audio signal processing domains. Our findings enable efficient model reconfiguration, especially when limited storage and computing resources are at stake.1 Footnote 1: Our source code is available at: [https://github.com/osaukh/subspace-configurable-networks.git](https://github.com/osaukh/subspace-configurable-networks.git). ## 1 Introduction In real-world applications of deep learning, it is common for systems to encounter environments that differ from those experienced during model training. These differences often manifest as transformations of the input data, such as rotation, scaling, and translation. For a deep learning model to perform well in post-deployment settings, it must be capable of handling these discrepancies and maintaining high performance despite the differences between the training and the post-deployment environments. To address the challenge, there are two primary approaches: designing robust, invariant models and employing domain adaptation techniques. Both strategies aim to mitigate the performance degradation resulting from the discrepancies between the source and the target domains. Invariant architectures focus on making the model robust, insensitive, or invariant to specific transformations of the input data. This can be achieved by various means, including training with data augmentation (Botev et al., 2022; Geiping et al., 2022), canonicalization of the input data (Jaderberg et al., 2015; Kaba et al., 2022), adversarial training (Engstrom et al., 2017), and designing network architectures that inherently incorporate the desired invariances (Marcus, 2018; Kauderer-Abrams, 2018; Blything et al., 2020; Biscione and Bowers, 2022). Domain adaptation, on the other hand, seeks to transfer the knowledge acquired from a source domain to a target domain, where the data distributions may differ. This approach leverages the learned representations or features from the source domain and fine-tunes or adapts them to better align with the target domain (Russo et al., 2017; Xu et al., 2018; Loghmani et al., 2020). Aforementioned approaches can be seen as static or dynamic as to whether they make post-deployment changes to model weights. While static methods boil down to carefully designed architectures or training procedures optimized w.r.t. specific input transformations, dynamic approaches focus on updating the model with minimal overhead in terms of data samples, memory, and computational resources. 
Static methods either require a careful model design to account for the desired input transformations, which is not always a trivial task, or need sufficient model capacity to learn transformed data. Dynamic methods assume that model adaptation, or calibration, is a rare task and shift the overhead of adjusting the model to the post-deployment phase. However, understanding the relationship between input transformations and the changes they cause to the optimal model parameters has been largely overlooked in the relevant literature. We argue that understanding this relationship is critical to developing viable adaptation methods for input transformations. In this work, we hypothesize and empirically validate that _optimal model weights that correspond to parameterized continuous transformations of the input reside in a low-dimensional linear subspace_. The learned configuration subspaces present a linear combination of the weights of other optimal solutions. This finding connects our work to the recent literature on linear mode connectivity (Frankle et al., 2020; Nagarajan and Kolter, 2019) of solutions trained from different initializations (Entezari et al., 2021; Ainsworth et al., 2022; Wortsman et al., 2021) and on data splits (Jordan et al., 2022; Ainsworth et al., 2022). Our hypothesis builds on these works and extends linear mode connectivity phenomenon to optimal model parameters trained on different input transformations connected by continuously changing transformation parameters. Our theoretical results facilitate insights into the hypothesis by proving a connection between continuity of parameterization and connectivity of optimal model parameters. We provide empirical evidence for our hypothesis by studying a wide range of real-world transformations: from reversible transformations such as 2D rotation and translation to complex irreversible transformations like 3D rotation-and-projection. We cover computer vision and audio signal processing domains and dedicated network architectures. To uncover configuration subspaces for a set of input transformations, Figure 1: **Verifying the configuration subspace hypothesis using subspace-configurable networks**. We train _subspace-configurable networks_ (SCNs), where an optimal network for a fixed transformation parameter vector is part of the subspace retained by few configuration parameters. **Left:** Given input transformation parameters \(\alpha\), _e.g._, a rotation angle for a 2D rotation, we train a _configuration network_ which yields a \(D\)-dimensional configuration subspace (\(\beta\)_-space_) to construct an efficient inference network with weights \(\theta=\sum\beta_{i}\cdot\theta_{i}\), where \(\theta_{i}\) are the weights of the _base models_, and \(\beta\) is a _configuration vector_. **Middle:** Optimal model parameters in the configuration subspace as functions of the rotation angle \(\alpha\) given by \((\cos(\phi),\sin(\phi))\) for 2D rotation transformations applied to FMNIST (Xiao et al., 2017). Here SCN has three base models with parameters \(\theta_{i}\) and three configuration vectors \(\beta_{i}\) to compose the weights of the 1-layer MLP inference model. 
**Right:** Test accuracy of SCNs with \(D=1..64\) dimensions compared to a single network trained with data augmentation (One4All), classifiers trained on canonicalized data achieved by applying inverse rotation transformation with the corresponding parameters (Inverse), and networks trained and tested on datasets where all images are rotated by a fixed degree (One4One).2 Each violin shows the performance of the the models evaluated on all degrees with a discretization step of \(1^{\circ}\), expect for One4One where the models are independently trained and evaluated on 0, \(\pi/6\), \(\pi/4\), \(\pi/3\), \(\pi/2\) rotated input. this paper introduces _subspace-configurable networks (SCNs)_ to learn optimal inference models for each specific transformation in the set (Figure 1 left). To offer additional insights, we visualize the relation between the input transformation parameters and the configuration vector in the configuration subspace for a number of transformations (an example of the configuration subspace for 2D rotation is shown in Figure 1 middle). Interestingly, the configuration parameter vectors form well-structured geometric objects, highlighting the underlying structure of the optimal parameter subspaces. If the network capacity is fixed, SCNs can quickly beat training with data augmentation (One4All) and match or outperform solutions trained for input transformation parameters optimized for each input transformation separately (One4One), see Figure 1 right. SCNs thus offer a practical methodology for model reconfiguration on resource-constrained devices as an efficient alternative to resource-hungry backpropagation. Furthermore, we offer a novel measure of complexity of input transformations by counting the minimum number of dimensions of the configuration subspace to span the optimal model parameters. The contributions of this paper are summarized as follows: * We formulate the _configuration subspace hypothesis_, which provides novel insights into the relation between transformations of input data, and the corresponding optimized network weights. We provide theoretical results which connect our hypothesis to the recent results on linear mode connectivity of deep networks (Section 2). * We design _subspace-configurable networks (SCNs)_ to learn the configuration subspace and generate optimal networks for specific transformations. SCNs are used to empirically validate the hypothesis and are of practical value on low-resource platforms (Section 2). * Leveraging SCNs, we empirically validate the configuration subspace hypothesis on ten real-world transformations, using five deep model backbones and five benchmark datasets from computer vision and audio signal processing domains (Section 3). Section 4 concludes this paper. Further related work is discussed in Appendix D. ## 2 Configuration Subspace Hypothesis ### Transformations and their continuous parameterization Let \(\mathbb{X}\times\mathbb{Y}=\{(x,y)\}\) be a dataset comprising labelled examples \(x\in\mathbb{X}\subset\mathbb{R}^{N}\) with class labels \(y\in\mathbb{Y}\subset\mathbb{R}^{M}\). We apply a transformation \(T:\mathbb{R}^{S}\times\mathbb{R}^{N}\to\mathbb{R}^{N}\) parameterized by the vector \(\alpha=(\alpha_{1},\cdots,\alpha_{S})\in\mathbb{A}\subseteq\mathbb{R}^{S}\) to each input example \(x\). A transformed dataset is denoted as \(T(\alpha,\mathbb{X})\times\mathbb{Y}:=\{(T(\alpha,x),y)\}\). 
For instance, let \(\mathbb{X}\) be a collection of \(P\times P\) images, then we have \(x\in\mathbb{R}^{P^{2}}\) where each dimension corresponds to the pixel intensity of a single pixel. Transformation \(T(\alpha,\mathbb{X}):\mathbb{A}\times\mathbb{R}^{P^{2}}\to\mathbb{R}^{P^{2}}\) is modulated by pose parameters \(\alpha\), such as rotation, scaling, translation or cropping. We assume that data transformations \(T(\alpha,\mathbb{X})\) preserve the label class of the input and represent a continuous function of \(\alpha\in\mathbb{A}\), _i.e._, for any two transformation parameters \(\alpha_{1}\) and \(\alpha_{2}\) there exists a continuous curve in \(\mathbb{A}\) that connects two transformation parameters. Note that by changing \(\alpha\) we transform all relevant data distributions the same way, _e.g._, the data used for training and test. The set \(\{T(\alpha,x)\,|\,\alpha\in\mathbb{A}\}\) of all possible transformations of input \(x\) is called an _orbit_ of \(x\). We consider an infinite orbit defined by a continuously changing \(\alpha\). Related literature (Gandikota et al., 2021; Weiler et al., 2018; Weiler and Cesa, 2019) primarily considers transformations that originate from a _group action3_ on \(\mathbb{X}\). We do not make this assumption. Footnote 3: A group action of a set of transformations \(\mathbb{U}\) with the identity element \(\epsilon\) on a set \(\mathbb{X}\) is a mapping \(\sigma:U\times\mathbb{X}\to\mathbb{X}\), that satisfies that \(\sigma(\epsilon,x)=x\) and \(\sigma(u,\sigma(v,x))=\sigma(uv,x),\forall u,v\in\mathbb{U}\) and \(\forall x\in X\). We consider a network \(\mathcal{G}\) to represent a function \(g:\mathbb{X}\times\mathbb{R}^{L}\to\mathbb{Y}\) that maps data \(x\in\mathbb{X}\) from the input space \(\mathbb{X}\) to predictions \(g(x,\theta)\in\mathbb{Y}\) in the output space \(\mathbb{Y}\), where the mapping depends on the weights (parameters) \(\theta\in\mathbb{R}^{L}\). \(E(\theta,\alpha)\) denotes the expected loss of the network \(\mathcal{G}\) and its function \(g\) over the considered training dataset \(T(\alpha,\mathbb{X})\). Since the expected loss may differ for each dataset transformation parameterized by \(\alpha\), we write \(E(\theta,\alpha)\) to make this dependency explicit. Optimal network parameters \(\theta^{*}_{\alpha}\) are those that minimize the loss \(E(\theta,\alpha)\) for a given transformation vector \(\alpha\). ### Configuration subspace hypothesis Input transformations, _e.g._, 2D rotation, may have a profound impact on both low-level and high-level feature extraction and thus lead to a substantially different set of optimal network parameters. Existing solutions include transformation-invariant architectures (Weiler and Cesa, 2019; Bronstein et al., 2021; Libera et al., 2019; Zhang, 2019), injection of specifically designed layers to canonicalize the output (Libera et al., 2019; Tai et al., 2019), and domain adaptation methods to reduce the gap between the source and target domains (Russo et al., 2017; Xu et al., 2018; Loghmani et al., 2020). Post-deployment model fine-tuning may, however, not be sufficient to match the accuracy of training from scratch (Vucetic et al., 2022). 
### Configuration subspace hypothesis Input transformations, _e.g._, 2D rotation, may have a profound impact on both low-level and high-level feature extraction and thus lead to a substantially different set of optimal network parameters. Existing solutions include transformation-invariant architectures (Weiler and Cesa, 2019; Bronstein et al., 2021; Libera et al., 2019; Zhang, 2019), injection of specifically designed layers to canonicalize the output (Libera et al., 2019; Tai et al., 2019), and domain adaptation methods to reduce the gap between the source and target domains (Russo et al., 2017; Xu et al., 2018; Loghmani et al., 2020). Post-deployment model fine-tuning may, however, not be sufficient to match the accuracy of training from scratch (Vucetic et al., 2022). In contrast to these works, we investigate the subspace of optimal model parameters for a set of parameterized continuous transformations and put forward the following hypothesis: **Hypothesis 2.1** (**Configuration subspace hypothesis**).: _Let \(T(\alpha,\mathbb{X})\) with \(\alpha\in\mathbb{A}\) be a parameterized continuous transformation \(T\) of the input data \(\mathbb{X}\) that does not change the label function \(\mathbb{Y}\). Let \(\mathcal{G}_{\alpha}\) be a neural network with optimal weights \(\theta^{*}_{\alpha}\) for the transformed input \(T(\alpha,\mathbb{X})\), i.e., the weights \(\theta^{*}_{\alpha}\) minimize the loss \(E(\theta,\alpha)\). We claim that there is a low-dimensional linear subspace containing optimal vectors \(\theta^{*}_{\alpha}\) for all \(\alpha\in\mathbb{A}\)._ The linearity of the subspace is inspired by the linear mode connectivity literature. Our early experiments with different functions over the optimal parameter space were by far less successful. The surprisingly low dimensionality of the subspace follows from our empirical observations on all tested transformations, datasets and architectures. Next, we introduce subspace-configurable networks (SCNs) to investigate the hypothesis and construct low-dimensional linear configuration subspaces for the desired input transformations. ### Subspace-configurable networks (SCNs) The architecture of SCNs is sketched in Figure 1 left. Inspired by the hypernetwork (Ha et al., 2016) design, we train a configuration network \(\mathcal{H}\) with function \(h(\cdot)\) and an inference network \(\mathcal{G}\) with function \(g(\cdot)\) connected by a linear transformation of network parameters \(\theta=f(\beta)\) computed in the configuration block: \[\theta=f(\beta)=\sum_{i=1}^{D}\beta_{i}\cdot\theta_{i}, \tag{1}\] where \(\theta_{i}\in\mathbb{T}\subseteq\mathbb{R}^{L}\) for \(i\in[1,D]\) denote the static weights (network parameters) of the base models that are the result of the training process. The configuration network \(\mathcal{H}\) with the function \(h:\mathbb{R}^{S}\rightarrow\mathbb{R}^{D}\) yields a low-dimensional _configuration space_ of vectors \(\beta\in\mathbb{R}^{D}\), given transformation parameters \(\alpha\in\mathbb{A}\). Along with learning the mapping \(h\), we train the \(D\)_base models_ with weights \(\theta_{i}\in\mathbb{R}^{L}\) to construct the weights of inference networks \(\theta=f(\beta)\), _i.e._, \(\theta_{i}\) are the base vectors of the corresponding linear subspace and \(\beta\) the coordinates of \(\theta\). The whole SCN training process minimizes the expected loss \(E(\theta,\alpha)=E(f(h(\alpha)),\alpha)\) to determine the configuration network with function \(h\) and the base model parameters \(\theta_{i}\). We use the standard categorical cross-entropy loss in all our experiments. During inference, a transformed input example \(\hat{x}=T(\alpha,x)\) is classified by the inference network \(\mathcal{G}\) with weights \(\theta\) according to (1) with \(\beta=h(\alpha)\), _i.e._, \(y=g(\hat{x},f(h(\alpha)))\). Note that degenerate solutions, where \(\beta\) is constant for all \(\alpha\), are part of the solution space. In this case, SCN ends up representing a single model \(\mathcal{G}\) for all transformation parameters \(\alpha\in\mathbb{A}\), which is essentially the One4All model, _i.e._, a model trained with data augmentation over all transformation parameters \(\alpha\in\mathbb{A}\).
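To illustrate Equation (1) and the description above, here is a minimal sketch (ours, not the authors' code) of an SCN with a small MLP inference network: the configuration network \(h\) maps \(\alpha\) to \(\beta\) via a softmax, and the inference weights are the \(\beta\)-weighted combination of \(D\) base parameter vectors. All class, function, and dimension names are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class SCN:
    """Sketch of a subspace-configurable network: theta = sum_i beta_i * theta_i (Eq. 1)."""

    def __init__(self, s_dim, d, in_dim, hidden, out_dim):
        # Configuration network h: alpha (R^S) -> beta (R^D), a 1-layer MLP with softmax output.
        self.W1 = rng.normal(0, 0.1, (64, s_dim))
        self.W2 = rng.normal(0, 0.1, (d, 64))
        # D base parameter vectors theta_i, each holding the flattened weights of the
        # inference MLP (in_dim -> hidden -> out_dim).
        self.layout = [(hidden, in_dim), (out_dim, hidden)]
        n_params = sum(r * c for r, c in self.layout)
        self.theta = rng.normal(0, 0.1, (d, n_params))

    def h(self, alpha):
        """Configuration network: beta = h(alpha)."""
        return softmax(self.W2 @ np.tanh(self.W1 @ alpha))

    def f(self, beta):
        """Configuration block: theta = f(beta) = sum_i beta_i * theta_i."""
        return beta @ self.theta

    def g(self, x, theta):
        """Inference network with weights theta (a plain 2-layer MLP without biases)."""
        mats, offset = [], 0
        for r, c in self.layout:
            mats.append(theta[offset:offset + r * c].reshape(r, c))
            offset += r * c
        hidden = np.maximum(0.0, mats[0] @ x)  # ReLU
        return mats[1] @ hidden                # class logits

    def predict(self, x_hat, alpha):
        """y = g(x_hat, f(h(alpha))) for a transformed input x_hat = T(alpha, x)."""
        return self.g(x_hat, self.f(self.h(alpha)))

# Usage: a D=3 SCN for 2D rotation (alpha = (cos phi, sin phi)) on flattened 28x28 inputs.
scn = SCN(s_dim=2, d=3, in_dim=28 * 28, hidden=32, out_dim=10)
alpha = np.array([np.cos(0.5), np.sin(0.5)])
logits = scn.predict(rng.random(28 * 28), alpha)
print(logits.shape)  # (10,)
```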
To avoid degenerate cases, we enhance the cross-entropy loss function with a regularization term, the squared cosine similarity \(\cos^{2}(\beta^{(1)},\beta^{(2)})\) between the configuration vector \(\beta^{(1)}\) for a randomly chosen \(\alpha^{(1)}\) applied to transform the current batch, and a vector \(\beta^{(2)}\) obtained from \(\mathcal{H}\) for another randomly sampled \(\alpha^{(2)}\in\mathbb{A}\). The applied regularization (with a weighting factor of 1.0) improves the performance of SCNs by encouraging them to construct unique dedicated inference networks for different transformation parameters \(\alpha\).
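Continuing the sketch above (again ours, not the authors' code), the regularized objective can be written as follows; `scn` refers to the illustrative `SCN` class from the previous snippet, and all names are placeholders.

```python
import numpy as np

def cross_entropy(logits, label):
    """Standard categorical cross-entropy for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def cos2(beta1, beta2):
    """Squared cosine similarity between two configuration vectors."""
    c = beta1 @ beta2 / (np.linalg.norm(beta1) * np.linalg.norm(beta2) + 1e-12)
    return c ** 2

def scn_loss(scn, batch_x, batch_y, alpha1, alpha2, reg_weight=1.0):
    """Cross-entropy on a batch transformed with alpha1, plus the cos^2 regularizer
    that pushes h(alpha1) and h(alpha2) apart for a second randomly drawn alpha2."""
    beta1, beta2 = scn.h(alpha1), scn.h(alpha2)
    theta = scn.f(beta1)
    ce = np.mean([cross_entropy(scn.g(x, theta), y) for x, y in zip(batch_x, batch_y)])
    return ce + reg_weight * cos2(beta1, beta2)
```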
### Continuity of the learned subspaces Figure 1 middle exemplifies a learned \(\beta\)-space for the 2D rotation transformation learned by SCN for \(D=3\) with an MLP inference network architecture trained on FMNIST. Transformation parameters \(\alpha=(\alpha_{1},\alpha_{2})=(\cos(\phi),\sin(\phi))\) with \(\phi=0..2\pi\) yield 3-dimensional \(\beta\) vectors \((\beta_{1},\beta_{2},\beta_{3})\), with each \(\beta_{i}\) being in charge of a specific contiguous region of the \(\alpha\)-space.
Transitions between regions are covered by models that are a linear combination of optimal solutions for other \(\alpha\) values. This structural property of the \(\beta\)-space is independent of the dataset and architecture we used to train SCNs, as shown in the next section. Moreover, we observe a continuous change of \(\beta\) as we change \(\alpha\). Another observation we make from Figure 1 right is that SCNs match the high performance of the baselines already for small dimensions \(D\) of the linear subspace. In other words, the solution subspace spanned by only \(D\) base models learned by SCN pays no penalty for its very simple structure. We provide theoretical results that help to understand the structure of the \(\beta\)-space and support our hypothesis using continuity arguments of the optimal solution space. Informally, the following theorem shows under certain conditions that for every continuous curve connecting two transformation parameters in \(\mathbb{A}\), there exists a corresponding continuous curve in the network parameter space \(\mathbb{T}\). These two curves completely map onto each other where the network parameters are optimal for the corresponding data transformations. In particular, the curve in the network parameter space \(\mathbb{T}\) is continuous. To simplify the formulation of the theorems and proofs (see Appendix A), we suppose that the set of admissible parameters \(\theta\in\mathbb{T}\) is a bounded subspace of \(\mathbb{R}^{L}\) and all optimal parameter vectors (weights) \(\theta_{\alpha}^{*}\) are in the interior of \(\mathbb{T}\). **Theorem 2.2** (**Continuity**).: _Suppose that the loss function \(E(\theta,\alpha)\) satisfies the Lipschitz condition_ \[|E(\theta,\alpha^{(2)})-E(\theta,\alpha^{(1)})|\leq K_{\alpha}||\alpha^{(2)}-\alpha^{(1)}||_{2} \tag{2}\] _for \(\alpha^{(1)},\alpha^{(2)}\in\mathbb{A}\), and \(E(\theta,\alpha)\) is differentiable w.r.t. \(\theta\) and \(\alpha\). Then, for any continuous curve \(\alpha(s)\in\mathbb{A}\) with \(0\leq s\leq\hat{s}\) in the parameter space of data transformations there exists a corresponding curve \(\theta(t)\in\mathbb{T}\) with \(0\leq t\leq\hat{t}\) in the parameter space of network weights and a relation \((s,t)\in R\) such that_ * _the domain and range of_ \(R\) _are the intervals_ \(s\in[0,\hat{s}]\) _and_ \(t\in[0,\hat{t}]\)_, respectively, and_ * _the relation_ \(R\) _is monotone,_ i.e._, if_ \((s_{1},t_{1}),(s_{2},t_{2})\in R\) _then_ \((s_{1}\geq s_{2})\Rightarrow(t_{1}\geq t_{2})\)_, and_ * _for every_ \((s,t)\in R\) _the network parameter vector_ \(\theta(t)\) _minimizes the loss function_ \(E(\theta,\alpha)\) _for the data transformation parameter_ \(\alpha(s)\)_._ We are also interested in the relation between \(\alpha\) and corresponding optimal vectors \(\beta\) that define optimal locations on the linear subspace of admissible network parameters as defined by (1).
To simplify the formulation of the further theoretical result and proof, we suppose that \(\beta\in\mathbb{B}\) where \(\mathbb{B}\) is a bounded subspace of \(\mathbb{R}^{D}\), and all basis vectors \(\theta_{j}\) that define \(f(\beta)\) in (1) have bounded elements. Under these assumptions, we can derive a corollary from Theorem 2.2. **Corollary 2.3**.: _Suppose that the loss function \(E(\theta,\alpha)\) satisfies (2), and for any \(\alpha\in\mathbb{A}\) all local minima of \(E(f(\beta),\alpha)\) w.r.t. \(\beta\) are global. Then the following holds: For any continuous curve \(\alpha(s)\in\mathbb{A}\) with \(0\leq s\leq\hat{s}\) in the parameter space of data transformations there exists a corresponding curve \(\beta(t)\in\mathbb{B}\) with \(0\leq t\leq\hat{t}\) on the linear network parameter subspace according to (1) and a relation \((s,t)\in R\) such that_ * _the domain and range of_ \(R\) _are the intervals_ \(s\in[0,\hat{s}]\) _and_ \(t\in[0,\hat{t}]\)_, respectively, and_ * _the relation_ \(R\) _is monotone,_ i.e._, if_ \((s_{1},t_{1}),(s_{2},t_{2})\in R\) _then_ \((s_{1}\geq s_{2})\Rightarrow(t_{1}\geq t_{2})\)_, and_ * _for every_ \((s,t)\in R\) _the network parameter vector_ \(\beta(t)\) _minimizes the loss function_ \(E(f(\beta),\alpha)\) _for the data transformation parameter_ \(\alpha(s)\)_._ The proof of the corollary is in Appendix A. The above corollary provides a theoretical argument for the continuous curves in Figure 1 middle, _i.e._, the curves explicitly show the relation \(R\). The existence of such a relation \(R\) is also apparent for all the dataset-architecture pairs in Section 3. Appendix B contains a further result: Small changes to transformation parameters \(\alpha\) result in small changes of optimal configuration vectors \(\beta_{\alpha}^{*}\) for suitable loss functions \(E(f(\beta),\alpha)\). In other words, the relation \(R\) can be represented as a continuous function \(r\) with \(t=r(s)\), _i.e._, the parameter vector \(\beta(t)\) that minimizes the loss function can be determined as a function of the data transformation \(\alpha(s)\). ## 3 Experimental Results We evaluate the performance of SCNs on 10 popular transformations (2D rotation, scaling, translation, brightness, contrast, saturation, sharpness, 3D rotation-and-projection, pitch shift and audio speed change) and five dataset-architecture pairs from computer vision and audio signal processing domains (MLPs on FMNIST (Xiao et al., 2017), ShallowCNNs (Neyshabur, 2020) on SVHN (Netzer et al., 2011), LeNet-5 (Lecun et al., 1998) on ModelNet10 (Wu et al., 2015), ResNet18 (He et al., 2015) on CIFAR10 (Krizhevsky et al., 2009), and M5 (Dai et al., 2016) on Google Speech Commands Dataset (Warden, 2018)). All considered transformations are continuous, and their parameterization is straightforward: For example, a rotation angle for a 2D rotation, and 3 rotation angles for a 3D rotation of a point cloud. The main paper evaluates the configuration subspace hypothesis on popular geometric transformations of computer vision datasets. We highlight interesting findings, while further results, also for other transformations, can be found in the appendix. Training hyper-parameters, architectural choices, dataset description and samples of the transformed images are listed in Appendix C. We compare SCNs to the following baselines. _One4All_ represents a single network trained with data augmentation obtained by transforming the input by randomly chosen parameters \(\alpha\in\mathbb{A}\).
_Inverse_ classifier is trained on a canonicalized data representation achieved by first applying the inverse transformation to the transformed input. Note that 2D rotation is a fully invertible transformation in theory, yet introduces small distortions in practice due to rounding effects. Translation is fully invertible if the relevant part of the input stays within the input boundaries. Scaling and 3D rotation bring significant distortion to the input, and inversion leads to a considerable loss of input quality. Finally, _One4One_ represents a set of networks, each trained and tested on the dataset transformed with a fixed parameter vector \(\alpha\). Figure 2: **SCN test accuracy for 2D rotation and scaling transformations. Left and middle: 2D rotation parameterized by a rotation degree \(\phi=0..2\pi\) input to the configuration network as \(\alpha=(\cos(\phi),\sin(\phi))\). For each \(\alpha\), SCN determines a configuration vector \(\beta\) used to build a dedicated model for every angle shown on the right. The left polar plot shows the performance of a single model (\(\phi=0^{\circ}\)) on all angles. The model works best for the input transformed with \(T(\phi=0^{\circ})\). Inference network architecture is a 1-layer MLP with 32 hidden units trained on FMNIST. The models constructed by SCN outperform One4All, approaching Inverse and One4One accuracy already for small \(D\). Right top: Scaling transformation parameterized by the scaling factor \(\alpha=0.2..2.0\). Right bottom: SCN performance for a single model (\(\alpha=1.0\)) on all inputs. The dedicated model gets increasingly optimized for the target input parameters with higher \(D\). Inference network is a 5-layer MLP with 32 hidden units in each layer trained on FMNIST. Also see Appendix E.1.** Note that for a fixed architecture, dataset, and loss function, a well-trained One4One baseline achieves the best in-distribution generalization. In this sense, it upper bounds the performance which can be achieved by any domain adaptation method using the same data. When comparing model performance throughout this work, all baselines feature the same model architecture and have the same capacity as the SCN inference network. We use a 1-layer MLP with 64 hidden units as the configuration network architecture to learn the configuration subspace \(\beta=h(\alpha)\). Our main evaluation metric is the test accuracy, but we also analyze the impact of SCN dimensionality \(D\) on its performance, and the structure of the \(\beta\)-space. ### SCN test set accuracy Figure 2 and Figure 3 present different views on the SCN test accuracy as a function of the number of dimensions \(D\) when the concept is applied to different transformations, datasets and architectures. Figure 2 left shows the performance of SCNs on \(0-2\pi\) rotation angles. The test accuracy for \(D=1\) matches One4All but quickly approaches Inverse and One4One baselines for higher \(D\). For the scaling transformation shown in Figure 2 top right, SCNs for \(D>1\) easily outperform One4All. They also outperform Inverse for \(\alpha<0.3\) and \(\alpha>1.2\) already for small \(D\). Non-invertible transformations introduce significant distortion to the input data, complicating feature re-use across inputs for different \(\alpha\). A large performance gap between One4One and Inverse for \(\alpha=0.2\) suggests that at small scales different features in the input become relevant than in the original dataset.
In some cases, SCNs achieve higher accuracy than One4One networks trained and tested only on the transformed data for some fixed value of \(\alpha\), since One4One does not make use of data augmentation but SCN implicitly does due to its structure given in Equation 1. Figure 3 presents an aggregated view of the SCN test accuracy for 2D rotations on ShallowCNN-SVHN and ResNet18-CIFAR10, and also for translation on MLP-FMNIST and ShallowCNN-SVHN. Each violin comprises accuracies achieved by models tested on all parameter settings traversed with a very small discretization step (with a granularity of \(1^{\circ}\), \(0.05\) and \(1\) pixel for 2D rotation, scaling and translation respectively). The only exception here is the One4One baseline, where a violin comprises the performance of five models independently trained and tested on the transformed inputs for a fixed parameter setting. These fixed settings are listed in the captions of the respective figures and shown with grey stars in Figure 2 left and right top. The fixed parameters are chosen to cover \(\mathbb{A}\) from the most beneficial (_e.g._, \(\alpha=(0,0)\) for translation) to the most suboptimal (\(\alpha=(\pm 8,\pm 8)\) for translation) setting. This is why the violins for scaling and translation transformations have a long tail of low accuracies. The performance of SCNs is consistent across dataset-architecture pairs, matching the best performing baselines already for a small number of dimensions \(D\) (also see Appendix E.1). These results provide empirical evidence for our configuration subspace hypothesis. Figure 3: **SCNs achieve high test accuracy already for low \(D\)**, outperforming One4All and approaching (and in some cases outperforming) both Inverse and One4One baselines. **2 plots on the left:** 2D rotation on ShallowCNN–SVHN and ResNet18–CIFAR10. **2 plots on the right:** Scaling on FMNIST–MLP and ShallowCNN–SVHN. The plots are complementary to Figure 2 evaluating the performance of SCN on different transformations and dataset-architecture pairs. For translation, the violin for One4One comprises prediction accuracy of independently trained models for (0,0) and (\(\pm 8\),\(\pm 8\)) shift parameters. A detailed evaluation of SCNs for translation is in Appendix F. SCN-composed models get increasingly specialized with growing \(D\). Figure 2 middle and bottom right show the performance of SCN for a fixed \(\alpha\): While the model accuracy for a target \(\alpha\) improves with higher \(D\), the performance of the model on other degrees declines increasingly fast. ### Structure of the configuration subspace (\(\beta\)-space) and SCN dimensionality The \(\beta\)-space learned by the configuration network \(h\) for different transformations, datasets and inference network architectures is shown in Figure 4 and Appendix E.2. For 2D rotation, the transformation parameters \(\alpha=(\cos(\phi),\sin(\phi))\) are drawn from a circle and result in all \(\beta_{i}\) being continuous curves arranged around the cylinder in our \(\alpha\)-\(\beta\) visualization. For all transformations, if \(D=1\), the SCN training yields \(\beta_{1}=1\) due to the use of softmax in the last layer of the configuration network and a single base model. For \(D\neq 1\), each \(\beta_{i}\) is high for a certain contiguous range of \(\alpha\)s and low outside of this range. For small \(D\), the regions of high \(\beta\)s are largely disjoint, yet overlap as \(D\) is scaled up.
Interestingly, the shape of the learned \(\beta\)-space is preserved across datasets and inference network architectures, although minor differences do exist, see Appendix E.2. We claim that the subspace of optimized configurations for data transformations parameterized by \(\alpha\) is _nicely structured_: (i) We achieve good accuracies even for a linear subspace of low degrees of freedom \(D\). (ii) We observe a nice structure of optimal solutions in the space, as represented by the function \(\beta=h(\alpha)\) and shown in Corollary 2.3. Figure 4: **A typical view of the \(\beta\)-space for 2D rotation, scaling and translation, \(D=1..8\). The \(\beta\)-space is nicely shaped, with each \(\beta\) being responsible for a specific range of inputs with smooth transitions. Top: SCNs for 2D rotation on ResNet18–CIFAR10. Transformation parameters are a vector \(\alpha=(\alpha_{1},\alpha_{2})=(\cos(\phi),\sin(\phi))\), with \(\phi\) being a rotation angle. Middle: SCNs for scaling on ShallowCNN–SVHN, with a scaling factor \(\alpha\) between 0.2 and 2.0. Bottom: SCNs for translation on MLP–FMNIST. A shift is specified by two parameters \((\alpha_{x},\alpha_{y})\) varying in the range (-8,8) along \(x\) and \(y\) axes. Visualization for other dataset-architecture pairs is in Appendix E.2.** This finding is related to the recent literature on linear mode connectivity of independently trained solutions (Entezari et al., 2021), the solutions that share a part of the training trajectory (Frankle et al., 2020), and those trained on data splits (Ainsworth et al., 2022). SCNs and the configuration space hypothesis establish linear mode connectivity between models trained for different transformation parameters, enhancing the existing literature. Although the inference network architecture seems to have little impact on the shape of the learned \(\beta\)-space, there are interesting exceptions. What does the configuration subspace look like if the inference network architecture is invariant to the applied transformation? We trained an SCN for translation with a translation-invariant CNN as inference network architecture. The learned configuration space, in this case, appears degenerate, with only one translation-invariant model being learned regardless of \(D\), _i.e._, all but one \(\beta_{i}\) are zero for all transformation parameters \(\alpha\) (see Appendix F). SCNs yield high performance already for low \(D\), and we observe diminishing returns from adding dimensions for all tested transformations and dataset-architecture pairs, see Figure 3. SCN dimensionality \(D\) impacts the overhead of training SCN, including the weights of the configuration network to compute \(\beta\)s and the weights \(\theta_{i}\) of the base models. It also affects the overhead of computing a new inference model \(\mathcal{G}\) if the transformation parameters \(\alpha\) change, _e.g._, to adapt an object detection model to a new position of a camera. Our configuration subspace hypothesis and the empirical evidence suggest that small \(D\) is sufficient to achieve high SCN performance. The optimal \(D\) depends on the inference network architecture and capacity. These trade-offs are explored in Appendix E.3 when scaling inference network architectures along the width and depth dimensions. ### 3D rotation-and-projection transformation We evaluate SCNs on 3D rotations that present complex transformations with multiple suboptimal views of the object that hurt classification accuracy.
We sample a point cloud from a 3D object representation from the ModelNet10 dataset, rotate it using a vector of Euler angles \((\phi_{1},\phi_{2},\phi_{3})\), and then project the point cloud to a 2D plane. The projection is then used as input to a classifier. We use LeNet5 as a backbone for the inference network to train SCNs. Figure 5 presents the view of the \(\beta\)-space as a function of two rotation angles \(\phi_{1}\) and \(\phi_{3}\), while \(\phi_{2}\) is fixed at \(-\pi\). The configuration space nicely reflects the characteristics of the input \(\alpha\), provided as \(\sin(\cdot)\) and \(\cos(\cdot)\) of the input angles. One can observe that \(\beta\)s are invariant to changes of \(\phi_{3}\). Here \(\phi_{3}\) corresponds to object rotations in the plane that do not change the object's visibility and thus lead to stable classification predictions, similarly to the 2D rotation transformation. The effect is best visible for low \(D\) and can be verified using the interactive visualization we provide3 and by inspecting further graphics in Appendix G. Figure 5: **A typical view of the SCN \(\beta\)-space for 3D rotation on LeNet5–ModelNet10**. Transformation parameters are a vector of ordered Euler angles \((\phi_{1},\phi_{2},\phi_{3})\), each taking values from \((-\pi,\pi)\). We show the learned \(\beta\)-space for \(\phi_{2}=-\pi\) with \(D=1..8\). Further views can be found in Appendix G. The structure follows typical sine and cosine curves along multiple dimensions. Figure 6 compares SCN to One4All and One4One baselines. Inverse is not feasible due to the projection of the point cloud on the 2D plane. Each violin comprises the model test accuracy evaluated on 30 randomly chosen angles. By comparing the accuracy for the same rotation angle (dotted lines in the plot), we observe a positive correlation between \(D\) and SCN test accuracy. The result is similar to the SCN performance on 2D transformations. ## 4 Conclusion, Limitations, Societal Impact and Future Work This paper puts forward the configuration subspace hypothesis that optimal model weights that correspond to parameterized continuous transformations of the input data reside in a low-dimensional linear subspace. We design subspace-configurable networks (SCNs) that learn such configuration subspaces and draw optimal model weights from this structure. We achieve surprisingly high accuracies even for a low number of configuration dimensions and observe a simple and intuitive structure of the subspace of optimal solutions for all investigated input transformations. Our findings open up both practical applications and theoretical advancements. **Post-deployment adaptation**. SCNs can be used in post-deployment model adaptation on resource-constrained devices as an alternative to costly backpropagation. SCN-configured inference networks are compact and can easily be deployed on devices with limited memory and processing power, _e.g._, in robotics applications, edge computing, or classical embedded systems. **SCNs as invariant architectures**. Invariant network architectures can be built on top of SCNs by replacing the supply of the \(\alpha\) parameter vector with a search algorithm operating in the \(\alpha\)-space. We can leverage the fact that the correct input parameters \(\alpha\) should produce a confident low-entropy classification result (Wortsman et al., 2020; Hendrycks and Gimpel, 2016). The modified SCN training procedure and a sample search algorithm are detailed in Appendix H.
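As a rough illustration of that search idea (ours, not the procedure from Appendix H), one can sweep candidate \(\alpha\) values, configure the inference network for each, and keep the \(\alpha\) whose prediction has the lowest entropy; `scn` again refers to the illustrative SCN sketch above.

```python
import numpy as np

def prediction_entropy(logits):
    """Entropy of the softmax distribution over class logits."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -(p * np.log(p + 1e-12)).sum()

def search_alpha(scn, x_hat, candidate_phis):
    """Pick the rotation angle whose SCN-configured model is most confident on x_hat."""
    best_phi, best_entropy = None, np.inf
    for phi in candidate_phis:
        alpha = np.array([np.cos(phi), np.sin(phi)])
        entropy = prediction_entropy(scn.predict(x_hat, alpha))
        if entropy < best_entropy:
            best_phi, best_entropy = phi, entropy
    return best_phi

# Usage: a coarse grid of 36 candidate rotation angles.
# phis = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
# phi_hat = search_alpha(scn, x_hat, phis)
```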
Note that SCNs without search are of interest in their own right, since various sensor modalities can serve as input parameter \(\alpha\), _e.g._, using IMU (Inertial Measurement Unit) sensor measurements to determine the current 2D rotation parameter \(\alpha\). **Transformation complexity measure** \(D\). The configuration-subspace hypothesis provides a structured framework to quantify and compare the complexity of continuous transformations. This may help to develop effective and robust architectures that can better account for diverse transformations. **Limitations, societal impact and future work**. One of the major difficulties of this work is to effectively train SCNs for high \(D\). The effects of the learning rate and training schedule are significant. With carefully chosen hyperparameters, we were able to train SCNs with \(D=32\) for 3D rotation using LeNet5 without any degenerate dimension. Although the current work is limited to continuous transformations, the extension of the hypothesis to discrete transformations is imaginable, yet requires rethinking the theoretical arguments to provide a better understanding of the SCN design choices, obtained performance and limitations. Finally, SCNs require the knowledge of the correct \(\alpha\) and may present an additional vector for input manipulation and adversarial attacks. This direction requires future research and should be carefully considered before SCNs can be safely deployed. ## Acknowledgements We thank Mitchell Wortsman and Rahim Entezari for their insightful comments on the early draft of the manuscript. This work was partly funded by the Austrian Research Promotion Agency (FFG) and Pro2Future (STRATP II 4.1.4 E-MINDS strategic project). The results presented in this paper were computed using computational resources of ETH Zurich and the HLR resources of the Zentralen Informatikdienstes of Graz University of Technology. Figure 6: **Impact of \(D\) on the SCN's test accuracy for 3D rotation**. Each line is associated with a specific test angle and connects accuracies tested on the same rotation \((\phi_{1},\phi_{2},\phi_{3})\). Some rotations of a 3D object lead to a suboptimal view of the object and may significantly hurt classification accuracy. With increasing \(D\), SCN outperforms One4All and approaches the One4One baseline.
2310.10289
Moving Object Localization based on the Fusion of Ultra-WideBand and LiDAR with a Mobile Robot
Localization of objects is vital for robot-object interaction. Light Detection and Ranging (LiDAR) application in robotics is an emerging and widely used object localization technique due to its accurate distance measurement, long-range, wide field of view, and robustness in different conditions. However, LiDAR is unable to identify the objects when they are obstructed by obstacles, resulting in inaccuracy and noise in localization. To address this issue, we present an approach incorporating LiDAR and Ultra-Wideband (UWB) ranging for object localization. The UWB is popular in sensor fusion localization algorithms due to its low weight and low power consumption. In addition, the UWB is able to return ranging measurements even when the object is not within line-of-sight. Our approach provides an efficient solution to combine an anonymous optical sensor (LiDAR) with an identity-based radio sensor (UWB) to improve the localization accuracy of the object. Our approach consists of three modules. The first module is an object-identification algorithm that compares successive scans from the LiDAR to detect a moving object in the environment and returns the position with the closest range to UWB ranging. The second module estimates the moving object's moving direction using the previous and current estimated position from our object-identification module. It removes the suspicious estimations through an outlier rejection criterion. Lastly, we fuse the LiDAR, UWB ranging, and odometry measurements in pose graph optimization (PGO) to recover the entire trajectory of the robot and object. Extensive experiments were performed to evaluate the performance of the proposed approach.
Muhammad Shalihan, Zhiqiang Cao, Khattiya Pongsirijinda, Lin Guo, Billy Pik Lik Lau, Ran Liu, Chau Yuen, U-Xuan Tan
2023-10-16T11:23:22Z
http://arxiv.org/abs/2310.10289v1
# Moving Object Localization based on the Fusion of Ultra-WideBand and LiDAR with a Mobile Robot ###### Abstract Localization of objects is vital for robot-object interaction. Light Detection and Ranging (LiDAR) application in robotics is an emerging and widely used object localization technique due to its accurate distance measurement, long-range, wide field of view, and robustness in different conditions. However, LiDAR is unable to identify the objects when they are obstructed by obstacles, resulting in inaccuracy and noise in localization. To address this issue, we present an approach incorporating LiDAR and Ultra-Wideband (UWB) ranging for object localization. The UWB is popular in sensor fusion localization algorithms due to its low weight and low power consumption. In addition, the UWB is able to return ranging measurements even when the object is not within line-of-sight. Our approach provides an efficient solution to combine an anonymous optical sensor (LiDAR) with an identity-based radio sensor (UWB) to improve the localization accuracy of the object. Our approach consists of three modules. The first module is an object-identification algorithm that compares successive scans from the LiDAR to detect a moving object in the environment and returns the position with the closest range to UWB ranging. The second module estimates the moving object's moving direction using the previous and current estimated position from our object-identification module. It removes the suspicious estimations through an outlier rejection criterion. Lastly, we fuse the LiDAR, UWB ranging, and odometry measurements in pose graph optimization (PGO) to recover the entire trajectory of the robot and object. For a static robot and a moving object scenario, we show in experiments that the proposed approach improves the average relative translational and rotational accuracy by 44% and 31.6%, respectively, compared to the conventional UWB ranging localization. Additionally, we extend the approach to a moving robot and a moving object scenario and show that our approach improves the average relative translation and rotational accuracy by 13.5% and 36%, respectively. ## I Introduction Object localization is essential for many applications [1]. The literature shows several approaches for object localization in environments with a prior map or known infrastructure [2]. For example, the Global Positioning System (GPS) can provide meter-level accuracy but is unsuitable for indoor applications due to blocked signals by surrounding buildings [3]. However, some applications may not have prior information on the environment. Therefore, in these environments, localization between a robot and an object is crucial to accomplish a number of tasks, such as object-following scenarios where a robot needs to follow a moving object. Various sensors can be utilized for object localization, such as odometry, Inertial Measurement Unit (IMU), Light Detection and Ranging (LiDAR), visual cameras, and Ultra-wideband (UWB) sensors. Odometry through wheel encoders or the IMU is commonly used for localization as it provides an estimated position with reference to the starting position. The odometry shows good accuracy when used for short periods. However, odometry is commonly known to drift over-time [4]. Similarly, the IMU can measure position changes but deteriorates over time due to the accumulation of error [5]. 
Therefore, localization has been extensively researched with the use of different sensors to improve the localization accuracy of objects [6]. Approaches for localization using LiDAR and visual-based sensors such as the monocular camera provide accurate localization results [7]. With this, LiDAR and visual-based sensors are beneficial for object detection when in line-of-sight. However, it is difficult to distinguish the object of interest from the rest of the obstacles and moving objects in the environment. This difficulty arises because the LiDAR and visual-based sensors can only distinguish between different objects by applying computer vision approaches such as [8] or with the help of additional sensors such as the UWB sensor. The UWB is popularly used to improve localization results [9][10]. The UWB provides up to centimetre-level ranging accuracy under line-of-sight conditions. However, the UWB does not provide bearing information and is susceptible to multi-path measurements under non-line-of-sight conditions [11]. Non-line-of-sight UWB measurement mitigation approaches, such as in [11], require data collection and training of the neural network model before application, which can be time-consuming. Therefore, we focus on the pose estimation of a robot and a moving object through odometry, UWB ranging, and LiDAR measurements. The main idea of our approach is to improve the localization results of current UWB-ranging localization approaches by incorporating LiDAR measurements as a constraint through a pose graph optimization (PGO) framework. In our experiments, a moving object has both odometry and UWB data. The robot provides odometry, carries a UWB for ranging, and also carries a 2D-LiDAR to identify the moving object. A simple illustration of the proposed method can be found in Figure 1, where the map is used for visualization purposes only, as our approach targets real-life scenarios where a prior map of the environment is not available. We propose fusing UWB ranging, odometry, and LiDAR measurements in line-of-sight conditions to produce accurate pose estimations. Even though the LiDAR measurements are only available during line-of-sight scenarios, the overall trajectory of the object will be improved given accurate object-identification pose estimates. To eliminate false positives, we introduce a heuristic outlier rejection mechanism that compares the estimation of the object's moving direction from our module and the current estimated object's orientation from PGO. Finally, inlier LiDAR measurements are fused with UWB ranging and odometry through PGO to estimate the object's trajectory. The contributions of this paper are summarized as follows: * We propose an approach for pose estimation between a robot and an object using UWB ranging, odometry, and LiDAR measurements. In particular, an object-identification module is used to identify the object through LiDAR scans and improve localization accuracy using the detected object position compared to the conventional UWB ranging localization. * We present an approach to determine the moving direction of an object based on successive LiDAR scans to improve localization accuracy and introduce a mechanism to reject incorrect measurements by comparing results from the object-identification module with the estimated pose from optimization. * We perform extensive experiments to evaluate the performance of the proposed approach in a complex environment of size 16m\(\times\)12m.
We have achieved an improvement of 44% and 31.6% in translational and rotational accuracy, respectively, for a static robot and a moving object scenario. For the moving robot and moving object scenario, we achieve an improvement of 13.5% and 36%, respectively, in translational and rotational accuracy. We organize the remainder of this paper as follows: Section II introduces the related work. Section III describes the proposed localization technique to fuse UWB ranging, odometry, and LiDAR measurements. Section IV shows the experimental setups and results. Finally, Section V concludes this paper and discusses future work. ## II Related work The robotics community shows a growing interest in object localization, especially for real-world applications such as when a robot needs to follow an object. As a result, there is growing research on object localization through different approaches. For example, Long _et. al._[12] proposed a method for accurate object localization by introducing three processes: region proposal, classification, and accurate object localization. Features extracted from a region of interest from an image through a convolutional neural network go through an unsupervised bounding box regression algorithm that localizes and optimizes the position of the detected object. Tychsen-Smith _et. al._[13] proposed a similar approach but designed the Fitness Non-Max Suppression (NMS) and derived a novel bounding box regression based on a set of Intersection-over-Union (IoU) upper bounds to obtain greater localization accuracy. However, the methods mentioned above require training a model. Although these methods can be implemented on smaller training data, the performance is unsatisfactory due to limitations in feature representation and model complexity [14]. Instead of processing images from a visual camera with deep-learning methods, point cloud data from the LiDAR can provide high positioning accuracy for object localization. For example, Huang _et. al._[15] proposed a frame-to-frame scan matching algorithm based on an attention mechanism. In this method, the selected landmark is not switched to another before it becomes invisible. Therefore, the approach will not accumulate errors while the landmark is not changed, giving high matching accuracy. Successive scans can then be compared, similar to the work of Mihalik _et. al._[16], to identify moving objects based on the Euclidean distance moved by the points in the point cloud. However, with multiple moving objects in the environment, an additional sensor, such as the UWB, may be required to accurately narrow down the results to identify the object of interest. The UWB is popular among the robotics community to help compensate for odometry errors caused by drift over long periods due to its low cost and low power consumption. For example, Liu _et. al._[6] and Cao _et. al._[17] improve pose estimation results by minimizing UWB rangings taken at different positions and fusing the UWB ranging measurements with odometry through PGO. In addition to the UWB helping in object-identification and improving pose estimations in PGO, the LiDAR measurements from the object-identification module can also be fused with the UWB ranging measurements to improve results further. Fig. 1: Overview of the proposed localization approach with a robot (blue colour) and a moving object (red colour). We perform pose estimation with UWB ranging (green lines) and odometry (red and blue dotted lines) measurements, as well as LiDAR measurements produced by our object-identification module (see yellow points).
The estimated poses from the object-identification module are passed for outlier rejection and optimized through a pose graph optimization module. Mapped obstacles are for visualization purposes only. Research on the fusion of UWB ranging and LiDAR measurements has been performed with promising results. Song _et. al._[18] and Zhou _et. al._[19] proposed a UWB/LiDAR fusion for cooperative range-only SLAM. Building on the work of Ding _et. al._[20], which employs LiDAR Inertial Odometry (LIO) for robust LiDAR localization, Nguyen _et. al._[21] proposed the LiDAR-Inertia-Ranging Odometry (LIRO) localization by introducing UWB ranging measurements for fusion with LiDAR and inertial measurements. The LIRO approach improves localization accuracy compared to LIO by having only two or three anchors deployed in the environment. With the LiDAR giving a more comprehensive picture of the surrounding environment, fusing LiDAR measurements with the UWB ranging measurements helps to remove errors accumulated in the LiDAR-based SLAM algorithms. However, the experiments mentioned above include UWB beacons placed around the environment to provide ranging measurements between robots and nearby obstacles, which is not ideal, especially in emergencies. We propose a method to fuse odometry, UWB ranging, and LiDAR measurements in a PGO without the need for infrastructure. ## III Localization between a robot and an object In this section, we formulate the problem of localization between a robot and an object using UWB ranging, odometry and LiDAR measurements without prior knowledge about the infrastructure. An overall view of our proposed approach is shown in Figure 2. It consists of three modules, which are (1) Object-Identification with 2D LiDAR, (2) Object Moving Direction Estimation and Outlier Rejection, and (3) Pose Graph Optimization with UWB, Odometry, and LiDAR. First, we identify the object's pose based on the LiDAR measurements from the robot in the object-identification module. Next, we compute the moving direction of the object and perform outlier rejection by comparing the object's pose output from our object-identification algorithm with the current estimated pose from PGO through a heuristic strategy. Lastly, we estimate the robot's and object's pose by incorporating UWB ranging, odometry, and the object's pose identified by our object-identification module through PGO.
```
Input: LiDAR point cloud with reference to the robot \(\mathbf{P}_{R}^{t}\) at time \(t\) and UWB ranging between the robot and object \(r^{t}\) at time \(t\).
Output: Object pose estimation from LiDAR at time \(t\): \(\overline{\mathbf{x}}_{L}^{t}\).
// Adaptive clustering of \(\mathbf{P}_{R}^{t}\) according to [22] returns an array of clusters \(\mathcal{C}^{t}\).
1 \(\rhd\) Compare \(\mathcal{C}^{t-1}\) with \(\mathcal{C}^{t}\) based on Euclidean distance and return the array of moving objects \(\mathcal{C}_{dynamic}^{t}\) at time \(t\) according to [16].
// Narrow down to the position of interest from the cluster array of moving objects.
2 for \(\mathcal{C}_{k}^{t}\in\mathcal{C}_{dynamic}^{t}\) do
3   \(\rhd\) Compute the Euclidean distance of \(\mathcal{C}_{k}^{t}\in\mathcal{C}_{dynamic}^{t}\) with respect to the robot.
4   \(\rhd\) The cluster position with a distance to the robot closest to the current UWB ranging \(r^{t}\) is estimated as the object 2D position \(\mathbf{x}_{L}^{t}\).
5 end for
6 \(\rhd\) Compute the estimated moving direction of the object \(\theta_{L}^{t}\) at time \(t\) with 2D positions \(\mathbf{x}_{L}^{t}\) and \(\mathbf{x}_{L}^{t-1}\).
7 \(\rhd\) Return the estimated object pose identified by LiDAR \(\overline{\mathbf{x}}_{L}^{t}\) at time \(t\), given by the 2D position \(\mathbf{x}_{L}^{t}\) and moving direction \(\theta_{L}^{t}\).
```
**Algorithm 1** Estimate object pose \(\overline{\mathbf{x}}_{L}^{t}\) at time \(t\) from LiDAR with reference to the robot. An overview of the object-identification module is shown in Algorithm 1. The pose of the object obtained through our object-identification and object moving-direction estimation module at time \(t\) is denoted as \(\overline{\mathbf{x}}_{L}^{t}=[\mathbf{x}_{L}^{t},\theta_{L}^{t}]\), where \(\mathbf{x}_{L}^{t}\) is the 2D position and \(\theta_{L}^{t}\) is the heading. The subscript \(L\) indicates that the estimation is from our proposed object-identification and object-moving-direction estimation module. The laser scans from the 2D LiDAR with reference to the robot are converted into a 2D point cloud \(\mathbf{P}_{R}^{t}\) = \(\{p_{i}|p_{i}=(x_{i},y_{i})\in\mathbb{R}^{2},i=1,...,I\}\), where \(I\) is the total number of points in a single scan. Next, adaptive clustering is performed according to [22]. Adaptive clustering was chosen over the conventional clustering algorithm [23]: instead of a fixed distance threshold, which can be inaccurate, the adaptive clustering algorithm returns the centre position of each segmented object based on a Euclidean distance threshold: \[d_{i}^{*}=2\cdot cr_{i}\cdot\tan\frac{\Theta}{2}, \tag{1}\] where \(\Theta\) refers to the angular resolution of the LiDAR. A set of values \(d_{i}^{*}\) are considered at fixed intervals to compute the maximum cluster range \(cr_{i}\) using the inverse of Equation 1. This returns cluster positions that include the object's position with reference to the robot in a point cloud array \(C^{t}\) = \(\{c_{k}^{t}|c_{k}^{t}=(x_{k}^{t},y_{k}^{t})\in\mathbb{R}^{2},k=1,...,K\}\) at time \(t\), where \(K\) is the total number of clusters in the point cloud \(\mathbf{P}_{R}^{t}\). The position of the object \(\mathbf{x}_{L}^{t}\) can be estimated by comparing two successive point clouds from the LiDAR. This will result in an array of positions from all moving objects, including ones that are not the object of interest. Fig. 2: Flowchart of the proposed object localization approach by fusing odometry, UWB ranging, and LiDAR measurements. Therefore, we narrow down to the object of interest from the array of moving objects (\(C^{t}_{dynamic}\)) using UWB ranging. First, we compute the distances between the robot and the different positions \(C^{t}_{dynamic}\). Next, we compare the difference between UWB ranging and the computed distances. The position from \(C^{t}_{dynamic}\) with the closest distance to the current UWB-range measurement and within 0.3m of the current UWB-range is identified as the object's pose from the LiDAR. Due to the criterion employed here, false positives are minimized. This estimated position is used as the input for our object-moving-direction estimation and outlier rejection module, which will be explained in the next subsection.
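The following is a small sketch (ours, with hypothetical helper names) of this narrowing-down step: among the candidate moving-object cluster centres, pick the one whose distance to the robot best matches the current UWB range, subject to the 0.3 m gate; the adaptive threshold of Equation 1 is included for reference.

```python
import numpy as np

def adaptive_threshold(cluster_range, angular_resolution):
    """Euclidean distance threshold d* = 2 * cr * tan(Theta / 2) from Equation 1."""
    return 2.0 * cluster_range * np.tan(angular_resolution / 2.0)

def identify_object(dynamic_clusters, uwb_range, gate=0.3):
    """Pick the moving-object cluster whose distance to the robot (at the origin of the
    LiDAR frame) is closest to the current UWB range and within the gate (metres).

    dynamic_clusters: array of shape (K, 2) with candidate 2D cluster centres.
    Returns the selected 2D position x_L^t, or None if no candidate passes the gate.
    """
    if len(dynamic_clusters) == 0:
        return None
    dists = np.linalg.norm(dynamic_clusters, axis=1)   # ranges robot -> clusters
    errors = np.abs(dists - uwb_range)                 # mismatch to the UWB ranging r^t
    best = int(np.argmin(errors))
    return dynamic_clusters[best] if errors[best] <= gate else None

# Usage: three candidate moving clusters, UWB range of 2.0 m.
clusters = np.array([[1.9, 0.3], [4.0, 1.0], [0.5, -0.2]])
print(identify_object(clusters, uwb_range=2.0))        # -> [1.9, 0.3]
```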
### _Object Moving Direction Estimation and Outlier Rejection_ In this module, we estimate the object's moving direction using two successive object 2D positions, \(\mathbf{x}^{t-1}_{L}\) and \(\mathbf{x}^{t}_{L}\), from our object-identification module. The estimated object moving direction is given by: \[\theta^{t}_{L}=\mathrm{atan2}(y^{t}_{L}-y^{t-1}_{L},x^{t}_{L}-x^{t-1}_{L}), \tag{2}\] where \(\mathbf{x}^{t-1}_{L}=[x^{t-1}_{L},y^{t-1}_{L}]\) denotes the previous estimated object 2D position and \(\mathbf{x}^{t}_{L}=[x^{t}_{L},y^{t}_{L}]\) denotes the current estimated object 2D position from the LiDAR in our proposed object-identification algorithm. As there may be multiple misidentified objects from the object-identification module due to multiple dynamic obstacles in the environment, we introduce a heuristic outlier rejection mechanism to remove suspicious moving direction measurements estimated by our object-identification module in the previous subsection. Given the estimated moving direction from our object-identification module \(\theta^{t}_{L}\), we compare it with the current estimated object orientation from PGO \(\theta^{t}_{O}\) as follows: \[\Omega^{t}_{L,\theta}=\left\{\begin{aligned} &\omega,&\text{if}\ |\theta^{t}_{L}-\theta^{t}_{O}|\leq\vartheta\\ & 0,&\text{otherwise}\end{aligned}\right., \tag{3}\] where \(\vartheta\) is the error threshold set in radians, and \(\omega\) is the value set for the LiDAR moving direction information matrix \(\Omega^{t}_{L,\theta}\). If the condition is satisfied, the moving direction information matrix value will be set to a high value \(\omega\) to indicate that the measurement is trusted. If the condition is not satisfied, the moving direction information matrix will be set to 0 to indicate that the measurement cannot be trusted. It is unlikely for the orientation of the moving object to change drastically from its previous orientation. Therefore, a threshold value to reject false positives using the currently estimated object orientation from PGO works well and does not break the estimation when the orientation changes quickly. PGO is then performed after rejecting outlier moving direction measurements to improve localization accuracy. The parameters \(\vartheta\) and \(\omega\) will be further studied in the next section.
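A compact sketch of Equations (2) and (3) follows (our illustration; the \(\vartheta\) and \(\omega\) values are placeholders, and wrapping the angle difference is an assumption added for robustness): the moving direction is the atan2 of successive positions, and the LiDAR heading constraint only receives a non-zero information value when it agrees with the current PGO estimate.

```python
import numpy as np

def moving_direction(x_prev, x_curr):
    """Equation (2): theta_L^t = atan2(y^t - y^{t-1}, x^t - x^{t-1})."""
    return np.arctan2(x_curr[1] - x_prev[1], x_curr[0] - x_prev[0])

def wrap_angle(a):
    """Wrap an angle difference into (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def heading_information(theta_lidar, theta_pgo, vartheta=0.3, omega=100.0):
    """Equation (3): trust the LiDAR moving-direction measurement (information omega)
    only when it is within vartheta radians of the current PGO orientation estimate."""
    return omega if abs(wrap_angle(theta_lidar - theta_pgo)) <= vartheta else 0.0

# Usage: object moved from (1.0, 0.0) to (1.5, 0.5); current PGO heading 0.7 rad.
theta_l = moving_direction(np.array([1.0, 0.0]), np.array([1.5, 0.5]))
print(theta_l, heading_information(theta_l, theta_pgo=0.7))   # ~0.785, 100.0
```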
### _Pose Graph Optimization using UWB, Odometry, and LiDAR_ The objective is to estimate the trajectory of the robot \(\mathbf{\overline{x}}^{1:T}_{R}\)={\(\mathbf{\overline{x}}^{1}_{R}\),..., \(\mathbf{\overline{x}}^{T}_{R}\)} and the object \(\mathbf{\overline{x}}^{1:T}_{O}\)={\(\mathbf{\overline{x}}^{1}_{O}\),..., \(\mathbf{\overline{x}}^{T}_{O}\)} from time 1 up to time \(T\), where \(\mathbf{\overline{x}}^{t}_{O}\)=\([x^{t}_{O}\), \(y^{t}_{O}\), \(\theta^{t}_{O}]\) and \(\mathbf{\overline{x}}^{t}_{R}\)=\([x^{t}_{R}\), \(y^{t}_{R}\), \(\theta^{t}_{R}]\) represent the poses of the object and robot obtained through PGO at time \(t\), respectively. We use \(r^{t}\) to denote the UWB ranging between the robot and the object at time \(t\). The LiDAR measurements, which include the object's 2D position and moving direction obtained through our object-identification and object-moving-direction estimation module at time \(t\), are denoted as \(\mathbf{x}^{t}_{L}\) and \(\theta^{t}_{L}\) respectively. We estimate the robot and object pose given the odometry, UWB ranging, and LiDAR measurements through a centralized PGO solution. The PGO technique uses the poses of the robot and objects as nodes in a graph to be estimated. Each node represents a specific pose at a given time step. The graph's edges connect these nodes based on the measurements provided, forming constraints between them. To find the optimal configuration of poses, PGO minimizes the error of these constraints using methods such as maximum likelihood estimation or nonlinear optimization techniques. This process leads to a refined and globally consistent estimation of the poses in the system's trajectory. In our approach, the errors to be minimized through maximum likelihood estimation are as follows: \[\underset{\mathbf{\overline{x}}^{1:T}_{R},\mathbf{\overline{x}}^{1:T}_{O}}{\arg\min}\sum_{t=2}^{T}\underbrace{\mathbf{e}(\mathbf{\overline{x}}^{t-1}_{R},\mathbf{\overline{x}}^{t}_{R},\Delta\mathbf{\overline{x}}^{t}_{R})^{T}\Omega^{t}_{R}\mathbf{e}(\mathbf{\overline{x}}^{t-1}_{R},\mathbf{\overline{x}}^{t}_{R},\Delta\mathbf{\overline{x}}^{t}_{R})}_{\text{Robot odometry constraint}}+ \tag{4}\] \[\sum_{t=2}^{T}\underbrace{\mathbf{e}(\mathbf{\overline{x}}^{t-1}_{O},\mathbf{\overline{x}}^{t}_{O},\Delta\mathbf{\overline{x}}^{t}_{O})^{T}\Omega^{t}_{O}\mathbf{e}(\mathbf{\overline{x}}^{t-1}_{O},\mathbf{\overline{x}}^{t}_{O},\Delta\mathbf{\overline{x}}^{t}_{O})}_{\text{Object odometry constraint}}+\] \[\sum_{t=1}^{T}\underbrace{\mathbf{e}(\mathbf{\overline{x}}^{t}_{R},\mathbf{\overline{x}}^{t}_{O},r^{t})^{T}\Omega^{t}_{R,O}\mathbf{e}(\mathbf{\overline{x}}^{t}_{R},\mathbf{\overline{x}}^{t}_{O},r^{t})}_{\text{UWB ranging constraint}}+\] \[\sum_{t=1}^{T}\underbrace{\mathbf{e}(\mathbf{\overline{x}}^{t}_{R},\mathbf{\overline{x}}^{t}_{O},\mathbf{x}^{t}_{L})^{T}\Omega^{t}_{L,\mathbf{x}}\mathbf{e}(\mathbf{\overline{x}}^{t}_{R},\mathbf{\overline{x}}^{t}_{O},\mathbf{x}^{t}_{L})}_{\text{Object position constraint (Module 1)}}+\] \[\sum_{t=1}^{T}\underbrace{\mathbf{e}(\mathbf{\overline{x}}^{t}_{R},\mathbf{\overline{x}}^{t}_{O},\theta^{t}_{L})^{T}\Omega^{t}_{L,\theta}\mathbf{e}(\mathbf{\overline{x}}^{t}_{R},\mathbf{\overline{x}}^{t}_{O},\theta^{t}_{L})}_{\text{Object moving direction constraint (Module 2)}},\] where \(\mathbf{e}(\cdot)\) denotes the residual function that computes the residual error of odometry, UWB ranging and a LiDAR pose measurement from our object-identification module given a pose configuration for the robot \(\mathbf{\overline{x}}^{t}_{R}\) and the object \(\mathbf{\overline{x}}^{t}_{O}\). Constraints are additionally parameterized with a certain degree of uncertainty, which is denoted as the information matrix (i.e., \(\Omega^{t}_{R}\),\(\Omega^{t}_{O}\),\(\Omega^{t}_{R,O}\),\(\Omega^{t}_{L,\mathbf{x}}\),\(\Omega^{t}_{L,\theta}\)) in Equation 4. Due to the non-convexity of Equation 4, the optimization converges to a local minimum without a reasonable initial guess, and there is no guarantee of finding the best solution. Therefore, we use the known initial robot and object position with reference to the robot as an initial guess for optimization. Based on the initial guess, we then perform nonlinear optimization via g2o [24]. Next, we study all three modules using real-world experiments to show their effectiveness.
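To make the structure of Equation 4 concrete, the following is a heavily simplified stand-in (ours; the actual implementation in this work uses g2o) that stacks odometry, UWB-range, and LiDAR-position residuals for all time steps and solves the resulting nonlinear least-squares problem with SciPy. Information matrices are reduced to scalar weights, the moving-direction constraint is omitted, and the LiDAR object positions are assumed to be expressed in the world frame; all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def pgo_residuals(flat, x0_R, x0_O, odom_R, odom_O, uwb, lidar_xy,
                  w_odom=1.0, w_uwb=1.0, w_lidar=1.0):
    """Residuals of a simplified version of Equation 4 for T poses of robot and object.

    flat packs [x_R^{1..T}, x_O^{1..T}] as 2T poses (x, y, theta); odom_* hold per-step
    pose increments, uwb the ranges r^t, lidar_xy the object positions from the
    object-identification module (None where unavailable), and x0_R / x0_O anchor the
    known initial poses used as the initial guess in the paper.
    """
    T = len(uwb)
    xR, xO = flat[:3 * T].reshape(T, 3), flat[3 * T:].reshape(T, 3)
    res = list(xR[0] - x0_R) + list(xO[0] - x0_O)        # anchor the first poses
    for t in range(1, T):                                # odometry constraints
        res.extend(w_odom * (xR[t] - xR[t - 1] - odom_R[t]))
        res.extend(w_odom * (xO[t] - xO[t - 1] - odom_O[t]))
    for t in range(T):
        d = np.linalg.norm(xR[t, :2] - xO[t, :2])        # UWB ranging constraint
        res.append(w_uwb * (d - uwb[t]))
        if lidar_xy[t] is not None:                      # object position constraint (Module 1)
            res.extend(w_lidar * (xO[t, :2] - lidar_xy[t]))
    return np.asarray(res)

# Usage: static robot at the origin, object moving roughly 0.5 m per step along x.
T = 3
odom_R = np.zeros((T, 3))
odom_O = np.tile([0.5, 0.0, 0.0], (T, 1))
uwb = np.array([2.0, 2.4, 3.1])                          # noisy ranges
lidar_xy = [np.array([2.0, 0.0]), None, np.array([3.0, 0.0])]
sol = least_squares(pgo_residuals, np.zeros(6 * T),
                    args=(np.zeros(3), np.array([2.0, 0.0, 0.0]), odom_R, odom_O, uwb, lidar_xy))
print(sol.x[3 * T:].reshape(T, 3))                       # estimated object trajectory
```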
Building upon that, we extend the proposed approach to a setup involving a moving robot (with LiDAR) and a moving object (robot without LiDAR), and assess the performance of our method in Section IV-C. The first experiment focuses on verifying the effectiveness of the proposed approach using a static robot and a moving object. Additionally, in this experiment, we identify the optimal values for \(\vartheta\) and \(\omega\). The second experiment aims to validate the robustness of our approach on a moving robot and a moving object, utilizing the best values for \(\vartheta\) and \(\omega\) based on the previous experiment. All experiments were conducted on a system equipped with an Intel Core i7-6600U CPU for reliable and consistent performance. ### _Experiment Setups_ In this subsection, we present the experimental setup to demonstrate the proposed approach with a robot (a turtlebot with LiDAR) and an object (a turtlebot without LiDAR) in an indoor environment of 16m\(\times\)12m which consists of static and dynamic obstacles. Fig. 3 shows a snapshot of the experimental setup. The robot and object each carry one UWB node (NoopLoop LinkTrack) with a range of up to 100m and a sampling frequency of 50Hz. The robot and object output odometry measurements with a frequency of 10Hz. In addition, the robot also carries a 2D LiDAR (Hokuyo LiDAR), which publishes laser scans at 20Hz. All modules are run using the Robot-Operating-System (ROS) [25]. For the experiments, ground truth was obtained using the Hokuyo LiDAR to perform Adaptive Monte Carlo Localization (AMCL) [26] given a map created through GMapping [27], which provides accurate pose estimations for both the robot and the object. The accuracy of our proposed approach was evaluated by comparing the computed relative translational and rotational errors of the robot and the object against the ground truth data. ### _Experiments with a Static Robot and a Moving Object_ This section presents the experimental results to demonstrate the proposed approach with a static robot and a moving object. The robot is static at its initial position, while the moving object moves along different paths. #### Iv-B1 Evaluation of Pose Estimation using UWB, Odometry and LiDAR We commence our evaluation by examining pure odometry, a method known to exhibit drift over time. Next, we assess UWB ranging localization [17], which combines UWB ranging and odometry measurements to minimize the residual error of the UWB range. The third approach under evaluation is the fusion of odometry and LiDAR measurements, without the incorporation of UWB ranging as constraints. Lastly, we evaluate our proposed approach, which integrates LiDAR, UWB ranging, and odometry measurements for accurate pose estimation. Furthermore, we investigate the impact of the LiDAR moving direction information matrix value on the precision of the pose estimation results. The outcomes of the various approaches are summarized in Table I, providing a comprehensive overview. For pure odometry, the average translational error amounted to 0.28m, accompanied by an average rotational error of 0.066rad. As depicted in Fig. 4(a), we observe a noticeable drift of the odometry from the ground truth as time progresses. The Odom + UWB approach (UWB ranging localization) in Figure 4(b) improved the error caused by drift, with a 0.25m average translational error and a 0.038rad rotational error. 
Although introducing UWB ranging as a constraint improved translational and rotational accuracy, UWB ranging measurements between the static robot and the moving object in non-line-of-sight scenarios will be longer than the actual distance. Fortunately, the data from LiDAR attached to the static robot could help further improve the localization accuracy and compensate for this error. \begin{table} \begin{tabular}{|c|c|c|} \hline Approach & Rel. trans error (m) & Rel. rot error (rad) \\ \hline Pure Odom & 0.28 \(\pm\) 0.12 & 0.066 \(\pm\) 0.024 \\ \hline Odom + UWB & 0.25 \(\pm\) 0.092 & 0.038 \(\pm\) 0.011 \\ \hline \begin{tabular}{c} Odom + LiDAR \\ (with moving direction \\ and rejection) \\ \end{tabular} & 0.26 \(\pm\) 0.056 & 0.039 \(\pm\) 0.009 \\ \hline \begin{tabular}{c} Odom + UWB + LiDAR \\ (w/o moving direction) \\ \end{tabular} & 0.29 \(\pm\) 0.15 & 0.061 \(\pm\) 0.031 \\ \hline \begin{tabular}{c} Odom + UWB + LiDAR \\ (w/o rejection) \\ \end{tabular} & 0.23 \(\pm\) 0.070 & 0.057 \(\pm\) 0.024 \\ \hline \begin{tabular}{c} Odom + UWB + LiDAR \\ (with moving direction \\ and rejection) \\ \end{tabular} & 0.14 \(\pm\) 0.032 & 0.026 \(\pm\) 0.006 \\ \hline \end{tabular} \end{table} TABLE I: Evaluation of different approaches based the average relative translational error (metres) and relative rotational error (radians) between the static robot and the moving object. Fig. 3: Snapshot of experiment setup with a robot (turtlebot equipped with LiDAR) and moving object (turtlebot without LiDAR) in an indoor environment with static and dynamic obstacles. The Odom + LiDAR approach (fusion of odom and LiDAR without odometry) which introduces LiDAR measurements in the localization algorithm through our object-identification module produced comparable results to UWB ranging localization with a 0.26m average translational error and a 0.039rad rotational error. However, the localization accuracy can be further improved by adding an additional constraint through UWB ranging measurements. Our proposed approach builds on the UWB-ranging localization approach. We evaluated our proposed method without moving direction, without outlier rejection, and with both moving direction and outlier rejection to show the importance of the moving direction estimation and rejecting outliers. With moving direction estimations and the outlier rejection mechanism, the proposed approach produced the best results with a 0.14m average translational error and a 0.026rad rotational error. We compare the estimated moving direction of the moving object from our moving object identification module with the current estimated orientation from PGO. If the difference is within a threshold of 0.3rad, we set a high value for the moving direction in the information matrix. In addition, the average translational and rotational error of the different approaches over time was also evaluated, as shown in Figure 5. We show that our proposed approach maintains a consistent translational and rotational error over time, with a maximum of 0.225m and 0.041rad, respectively. #### Iv-A2 Impact of Different Object Moving Direction Information Matrix Values In this experiment, the information matrix for the static robot odometry \(\Omega_{R}^{t}\), moving object \begin{table} \begin{tabular}{|c|c|c|} \hline Value \(\vartheta\) & Rel. trans error (m) & Rel. 
rot error (rad) \\ \hline 0.1 & 0.28 \(\pm\) 0.10 & 0.055 \(\pm\) 0.020 \\ \hline 0.2 & 0.19 \(\pm\) 0.069 & 0.039 \(\pm\) 0.012 \\ \hline 0.3 & 0.14 \(\pm\) 0.032 & 0.026 \(\pm\) 0.006 \\ \hline 0.4 & 0.17 \(\pm\) 0.030 & 0.039 \(\pm\) 0.012 \\ \hline 0.5 & 0.23 \(\pm\) 0.076 & 0.049 \(\pm\) 0.016 \\ \hline \end{tabular} \end{table} TABLE II: Evaluation of different outlier rejection threshold values \(\vartheta\) on the average relative translational error (metres) and relative rotational error (radians) between the static robot and the moving object. Fig. 4: Trajectories estimated by different approaches for the static robot and a moving object scenario. The green lines denote the UWB ranging constraints, and the point of intersection for the green lines is the static robot. Fig. 5: Experimental evaluation of the relative translational and rotational error between the static robot and the moving object with the proposed approach (with moving direction and rejection) over time. \begin{table} \begin{tabular}{|c|c|c|} \hline Value \(\omega\) & Rel. trans error (m) & Rel. rot error (rad) \\ \hline 1 & 0.30 \(\pm\) 0.15 & 0.063 \(\pm\) 0.031 \\ \hline 10 & 0.24 \(\pm\) 0.11 & 0.052 \(\pm\) 0.018 \\ \hline 100 & 0.21 \(\pm\) 0.079 & 0.042 \(\pm\) 0.016 \\ \hline 1000 & 0.19 \(\pm\) 0.069 & 0.042 \(\pm\) 0.009 \\ \hline 10000 & 0.14 \(\pm\) 0.032 & 0.026 \(\pm\) 0.006 \\ \hline 100000 & 0.15 \(\pm\) 0.045 & 0.030 \(\pm\) 0.009 \\ \hline \end{tabular} \end{table} TABLE III: Evaluation of different orientation information matrix values \(\omega\) on the average relative translational error (metres) and relative rotational error (radians) between the static robot and the moving object. odometry \(\Omega^{t}_{O}\), UWB ranging \(\Omega^{t}_{R,O}\), and moving object position \(\Omega^{t}_{L,\mathbf{x}}\) constraints are set to 1. We investigated different values \(\omega\) for the information matrix of the estimated object moving direction \(\Omega^{t}_{L,\theta}\) provided by our object-identification module for PGO. We perform a simple criterion check for the reliability of the estimated moving direction from our proposed approach by comparing it with the estimated object orientation from PGO. We observe that if the moving direction estimated by our proposed approach and the current orientation estimation from PGO is within a threshold of 0.3rad based on experimentation shown in Table II, setting the value for the object moving direction information matrix \(\Omega^{t}_{L,\theta}\) to 10000 increases the localization accuracy most. We show improved localization accuracy as the value for the object moving direction information matrix \(\Omega^{t}_{L,\theta}\) increases up to 10000. However, when the values go above 10000, it causes the localization accuracy to get worse slowly. ### _Experiments with a Moving Robot and a Moving Object_ This section presents the experimental results demonstrating the proposed approach with a moving robot and a moving object. The moving robot and the moving object were manually controlled to move along different paths. The best values for \(\vartheta\) of 0.3 and \(\omega\) of 10000 from Section IV-B2 were used for evaluating the proposed method against pure odometry, UWB ranging localization, and the fusion of odometry and LiDAR measurements only. Figure 6 visualizes the ground truth trajectory, the estimated trajectory with UWB ranging localization, and also the estimated trajectory with the proposed approach for the moving robot and the moving object. 
We show in Figure 7 that our proposed approach produces significant improvements in the relative translational and rotational accuracy between the moving robot and the moving object compared to pure odometry, the conventional UWB ranging localization, and fusion of odometry and LiDAR measurements only. Furthermore, we highlight the improvements of our proposed approach in Table IV, where the proposed approach improved the conventional UWB ranging localization by 13.5% and 36% in the relative translation and rotation error respectively. \begin{table} \begin{tabular}{|c|c|c|} \hline Approach & Rel. trans error (m) & Rel. rot error (rad) \\ \hline Pure Odom & 0.79 \(\pm\) 0.34 & 0.28 \(\pm\) 0.100 \\ \hline Odom + UWB & 0.52 \(\pm\) 0.17 & 0.25 \(\pm\) 0.065 \\ \hline Odom + LiDAR & \multirow{2}{*}{0.52 \(\pm\) 0.30 and rejection} & \multirow{2}{*}{0.021 \(\pm\) 0.064} \\ (with moving direction and rejection) & & \\ \hline Odom + UWB + LiDAR & \multirow{2}{*}{0.45 \(\pm\) 0.032} & \multirow{2}{*}{0.16 \(\pm\) 0.050} \\ (with moving direction and rejection) & & \\ \hline \end{tabular} \end{table} TABLE IV: Evaluation of different approaches based on average relative translational error (metres) and relative rotational error (radians) between the moving robot and the moving object. Fig. 6: Trajectory of the moving object and moving robot ground truth trajectory, with UWB ranging localization, and with the proposed approach (with moving direction and rejection). The green lines refer to the UWB-ranging constraints between the moving robot and the moving object. Fig. 7: Experimental evaluation of the relative translational and rotational error between the moving robot and the moving object with the proposed approach (with moving direction and rejection) over time. ## V Conclusions We proposed an approach for moving object localization using UWB ranging, odometry, and LiDAR measurements between a moving object and a robot in unknown environments. Our approach consists of three modules that identify the moving object's position using LiDAR, estimate the moving object's moving direction and reject outlier moving direction estimations, and perform PGO to estimate the moving object's position. The proposed approach was verified between a robot and a moving object in an indoor environment of 16m\(\times\)12m with obstacles. The results showed that the proposed approach achieved an average localization accuracy of 0.14m in translation and 0.026rad in rotation, which significantly improves accuracy compared to the conventional UWB ranging localization in an environment with one static robot and one moving object. We also showed the importance of moving direction estimations and an outlier rejection mechanism to discard suspicious moving direction estimates from our object-identification module. Additionally, the proposed approach was tested with a moving robot and a moving object which produced significant improvements compared to the conventional UWB ranging localization. In future works, we will extend the work with multiple robots identifying multiple moving objects given that the moving objects can provide odometry. Another research direction is to apply the proposed approach for autonomous moving object-following tasks using a moving robot.
2308.03901
FLIPS: Federated Learning using Intelligent Participant Selection
This paper presents the design and implementation of FLIPS, a middleware system to manage data and participant heterogeneity in federated learning (FL) training workloads. In particular, we examine the benefits of label distribution clustering on participant selection in federated learning. FLIPS clusters parties involved in an FL training job based on the label distribution of their data apriori, and during FL training, ensures that each cluster is equitably represented in the participants selected. FLIPS can support the most common FL algorithms, including FedAvg, FedProx, FedDyn, FedOpt and FedYogi. To manage platform heterogeneity and dynamic resource availability, FLIPS incorporates a straggler management mechanism to handle changing capacities in distributed, smart community applications. Privacy of label distributions, clustering and participant selection is ensured through a trusted execution environment (TEE). Our comprehensive empirical evaluation compares FLIPS with random participant selection, as well as three other "smart" selection mechanisms - Oort, TiFL and gradient clustering using two real-world datasets, two benchmark datasets, two different non-IID distributions and three common FL algorithms (FedYogi, FedProx and FedAvg). We demonstrate that FLIPS significantly improves convergence, achieving higher accuracy by 17 - 20 % with 20 - 60 % lower communication costs, and these benefits endure in the presence of straggler participants.
Rahul Atul Bhope, K. R. Jayaram, Nalini Venkatasubramanian, Ashish Verma, Gegi Thomas
2023-08-07T20:28:22Z
http://arxiv.org/abs/2308.03901v2
# FLIPS: Federated Learning using Intelligent Participant Selection ###### Abstract This paper presents the design and implementation of FLIPS, a middleware system to manage data and participant heterogeneity in federated learning (FL) training workloads. In particular, we examine the benefits of label distribution clustering on participant selection in federated learning. FLIPS clusters parties involved in an FL training job based on the label distribution of their data apriori, and during FL training, ensures that each cluster is equitably represented in the participants selected. FLIPS can support the most common FL algorithms, including FedAvg, FedProx, FedDyn, FedOpt and FedYogi. To manage platform heterogeneity and dynamic resource availability, FLIPS incorporates a straggler management mechanism to handle changing capacities in distributed, smart community applications. Privacy of label distributions, clustering and participant selection is ensured through a trusted execution environment (TEE). Our comprehensive empirical evaluation compares FLIPS with random participant selection, as well as three other "smart" selection mechanisms - Oort [51], TiFL [16] and gradient clustering [29] using two real-world datasets, two benchmark datasets, two different non-IID distributions and three common FL algorithms (FedYogi, FedProx and FedAvg). We demonstrate that FLIPS significantly improves convergence, achieving higher accuracy by 17-20 percentage points with 20-60% lower communication costs, and these benefits endure in the presence of straggler participants. ## 1 Introduction Federated Learning (FL) [45] is the process by which multiple participants (parties) collaborate to train a common machine learning (ML) model, without sharing data among themselves or with a centralized cloud-hosted machine learning service. FL allows parties to retain private data within their controlled domains. Only model updates are _typically_ shared with a central aggregation server hosted by one of the parties or a cloud service provider. This, along with transformations applied to model updates (e.g., homomorphic encryption [40] and addition of noise for differential privacy [5]), makes FL _privacy preserving_. **Why FL?** From a participant perspective, a key goal of FL is to _access diverse training data_ to enhance the robustness of machine learning models by promoting generalization, outlier detection and bias mitigation. FL allows the training of machine learning models using data distributed across multiple devices or edge nodes from different demographics and geographical locations. This distributed nature allows for diverse datasets, as each device may have unique data reflecting various user behaviors, preferences, or contexts. The privacy-preserving aspect encourages a wider range of participants to contribute their data, including those who may be concerned about sharing personal information or those restricted by regulations like HIPAA and GDPR, further leading to a more representative and inclusive dataset. **Benefits of Diverse Datasets:** A diverse training dataset provides a broader representation of the real-world scenarios and variations that the model may encounter during inference. By exposing the model to a wide range of data samples, including different classes, variations, edge cases and outliers, the model can learn more generalized patterns and make better predictions on unseen data.
Outliers can be valuable in identifying rare events or unusual patterns that may not be present in a homogeneous dataset. Also, including diverse samples that represent different demographics, backgrounds, and perspectives, models can learn to be more equitable and avoid perpetuating bias. It enables the model to make predictions that are more representative and fair for a broader range of individuals. **non-IID Data in FL.** The diversity of real-world entities and their private data also implies that FL techniques and algorithms have to be designed to handle non-IID (non Independent and identically distributed) data. Parties not only have different data items, but also often have wildly different types of data items, corresponding to different labels. Several leading FL researchers [84, 45] have noted that the presence of IID data is the exception rather than the norm [84]. **Intermittent Participation.** FL, in general, is characterized by _intermittent_ participation, which means that for every FL round, each party trains at its convenience, or feasibility. This may be when devices are connected to power in the case of mobile phones, tablets and laptops (FL over edge devices); when (local) resource utilization from other computations is low and when there are no pending jobs with higher priority (both in edge and datacenter use cases). The aggregator expects to hear from the parties _eventually_ - typically over several minutes or hours. Parties in FL jobs are often highly unreliable and are expected to drop and rejoin frequently. **Participent Selection:** Due to the intermittent nature of parties, existing FL algorithms like FedAvg, FedProx, FedMA, FedDyn, etc. only select a subset of parties in each round, often employing randomization to select parties [84, 45]. Random selection eventually offers each party an opportunity to participate in the FL job, but does not take into account the type of data present at each party, and does not ensure that parties with diverse datasets are selected in _each round_. There is also significant empirical evidence that Non-IID data combined with random selection significantly increases the time taken for models to converge [62, 33, 53] (we reproduce some of these results in Section 5). There is increasing recognition that random selection is suboptimal for other reasons as well. The selected participant could have compute or communication constraints and might actually not be able to participate in the round. There is existing research on selecting participants for each round based on the ability to participate, amount of data present at each participant, history of reliable participation, and communication constraints [44, 51, 28]. But said research has not considered the label distribution among participants. This is unfortunate because it is vital for model generalization in FL to equitably consider all participants, including outliers. This paper makes the following contributions: * FLIPS, a middleware for effective management of data and platform heterogeneity in FL using clustering techniques based on label distributions. * An algorithm to use the generated clusters to select diverse participants at each round of an FL training job, ensuring that parties are equitably represented while offering each party a fair opportunity to participate. 
* A private mechanism using trusted execution environments (TEEs) to cluster participants in FL jobs based on the label distributions of their data and identify the diversity in their data in a secure manner. * A comprehensive empirical evaluation comparing FLIPS with random participant selection, TiFL [16], Oort [51] and gradient clustering [29] using two real-world datasets, two benchmark datasets, two different non-IID distributions and three common FL algorithms (FedYogi, FedProx and FedAvg). We demonstrate that FLIPS significantly improves convergence, achieving superior accuracy with much lower communication costs, and these benefits endure in the presence of stragglers. ## 2 Federated Learning and Heterogeneity A typical FL setting consists of parties with local data and an aggregator server (when the number of parties is small) or service (e.g., a microservice using Apache Spark to aggregate model updates when the number of parties is large) to orchestrate FL. An FL job proceeds over several rounds, also called _synchronization rounds_. An aggregator typically coordinates the entire FL job, in addition to aggregating model updates and distributing the updated model back to the parties. Co-ordination includes agreeing on the following FL job parameters before the job starts: (i) model architecture (ResNet, EfficientNet, etc.), (ii) FL algorithm (FedAvg, FedYogi, etc.) and any algorithm-specific parameters like the minimum number of parties required for each round, (iii) how to initialize the global model (whether random, or from existing pre-trained models), (iv) hyperparameters to be used (batch size, learning rate, etc.), and (v) termination criteria, whether the FL job ends after a specific number of rounds, or when a majority of parties decide that the model is satisfactory (e.g., has reached a desired level of accuracy). At each FL job round, each participant trains a local model using its local data. The local model is initialized with the global model parameters received from the aggregator. The local training process typically involves several iterations or epochs (this is part of the hyperparameters agreed at the start) to improve the model's performance on the local dataset. After local training, the participants generate model updates, which typically consist of the updated model parameters or gradients. These updates capture the local knowledge learned from the device's data. The aggregator combines these updates, and applies the aggregated update to construct the global model using the optimizer. The new global model parameters are sent to the participants selected for the next round. The FL process typically involves multiple rounds of local training, model updates, aggregation, and distribution. This iterative process allows the model to be refined and improved over time by leveraging the collective intelligence of the participating devices. ### Common FL Algorithms FL algorithms [84] primarily differ in how model updates are aggregated and the mathematical formula (called the optimizer in machine learning literature) used to apply the aggregated model update to the global model. For FedAvg [64], the aggregator selects a random subset \(\mathcal{S}^{(r)}\subset\mathcal{S}\) of parties for every round \(r\). The aggregator in FedAvg computes the weighted average of all participant updates (gradients): \(\frac{1}{N}\sum_{i\in\mathcal{S}^{(r)}}n_{i}x_{i}^{(r)}\) to compute the global model update \(x^{(r)}\) and update the global model (for the next round) \(m^{(r+1)}\) using SGD as the optimizer.
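As a concrete illustration of the FedAvg aggregation just described, the sketch below forms the weighted average of participant updates (weighted by each party's training-set size \(n_i\)) and applies it as an SGD step on the global model. The function names and the NumPy setting are illustrative assumptions, not the implementation used in FLIPS.

```python
import numpy as np

def fedavg_aggregate(updates, num_samples):
    """Weighted average of participant updates x_i^(r); weights are the per-party
    training-set sizes n_i, with N = sum(n_i)."""
    total = float(sum(num_samples))
    return sum(n / total * u for n, u in zip(num_samples, updates))

def sgd_server_step(global_model, aggregated_update, lr=1.0):
    """Move the global model m^(r) toward the aggregated update; lr = 1.0 recovers
    plain parameter averaging when the updates are full local models."""
    return global_model + lr * (aggregated_update - global_model)

# Illustrative usage: three selected parties, each sending a flattened parameter vector
rng = np.random.default_rng(0)
m_r = rng.normal(size=4)                                       # current global model m^(r)
updates = [m_r + 0.01 * rng.normal(size=4) for _ in range(3)]  # local models x_i^(r)
m_next = sgd_server_step(m_r, fedavg_aggregate(updates, num_samples=[120, 80, 200]))
```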
This process proceeds for a set number \(R\) of rounds or (less typically) until a majority of the parties vote to terminate. The term \(n_{i}\) in the weighted average is the number of training samples at party \(i\), and \(N\) is the total number of training samples involved in the round, i.e., \(N=\sum_{i\in\mathcal{S}^{(r)}}n_{i}\). FedAvg is the first widely-deployed FL algorithm but does not result in an optimized global model when the data distribution is not IID. When the entire set of parties \(\mathcal{S}\) is used in every round, and SGD is the optimizer, we get FedSGD. FedAdam and FedAdagrad are the same FedAvg with Adam [49] and AdaGrad [27] as the optimizer, respectively. FedProx [55] is a variation of the FedAvg that aims to produce better global models in the presence of non-IID data. FedProx includes a Proximal term in the optimizer with penalty parameter \(\mu\). If \(F_{k}(x_{r,k})\) is the local loss function at a party \(k\) at round \(r\). Then the local loss function in FedProx includes \(\mu\) which then translates to \(F_{k}(x^{(r,k)})+\frac{\mu}{2}||m^{(r)}-x^{(r,k)}||\). This brings the model \(x^{(r,k)}\) closer to \(m^{(r)}\) at each party \(k\). By requiring the updates to be close to the starting point, a big \(\mu\) could potentially hinder convergence, but a small \(\mu\) might have no effect. FedYoGi [51, 73], which has been shown to outperform FedAvg and FedProx [51] with non-IID data, uses an adaptive optimizer to update the global model. The server optimizer maintains a per-parameter learning rate, updated based on the history of gradients \(gr^{(r,k)}=x^{(r,k)}-m^{(r)}\). This allows the server optimizer to adapt to the local data distributions of the clients. FedYoGi introduces a moving average of gradients term \(m_{t}=\beta_{1}*m_{t}+(1-\beta_{1})*gr^{(r,k)}\) and moving average of squared gradients \(v_{t}=\beta_{2}*v_{t}+(1-\beta_{2})*(gr^{(r,k)})^{2}\), with 2 momentum hyperparameters \(\beta_{1}\) and \(\beta_{2}\). \(m^{(r)}\) is updated by \(m^{(r)}-lr*\frac{m_{t}}{\sqrt{v_{t}+eps}}\), where \(lr\) is the learning-rate and \(eps\) is a small constant to prevent division by 0. Figure 1: Overview of Federated Learning ### Data Heterogeneity : Dealing with Non-IID Data Despite advances in FL, the ability of current techniques to deal with variabilities and diversity in real-world data is still limited. The term non-IID (non-independent and identically distributed) data distributions in the context of FL refers to the situation in which the data on each device or node taking part in the FL process is not independently and identically distributed across all devices. This can happen when data is collected from several sources or under various circumstances, resulting in variances in the data and/or label distribution on each device. In FedAvg, FedProx and FedYoGi at each round \(r\), \(S^{(r)}\) parties are sampled randomly from the given pool. \(|S^{(r)}|\) is typically small when compared to \(|\mathcal{S}|\), typically less than 20% in real deployments [11, 37]. This will result in some rounds, where parties with similar data distributions are chosen and thus lead to class imbalance, when certain classes of data are underrepresented on certain parties. This leads to model divergence in rounds when parties with diverse data are chosen, and the model to be biased towards the overrepresented classes. 
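The effect of random selection under such skew can be made concrete with a small, purely illustrative simulation (not an experiment from this paper): when only a small fraction of parties hold a rare label, a randomly sampled 10% of parties frequently contains none of them, so that label contributes nothing to those rounds. All numbers and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
num_parties, parties_per_round, rounds = 200, 20, 1000

# Hypothetical non-IID setting: only 10 of 200 parties hold any data for a rare label
# (e.g., abnormal heartbeats in the arrhythmia example).
holders = set(range(10))

# Count rounds in which random selection picks no holder of the rare label
# (roughly a third of rounds in expectation with these numbers).
missed = sum(
    1
    for _ in range(rounds)
    if not holders & set(rng.choice(num_parties, size=parties_per_round, replace=False))
)
print(f"Rare label absent from {missed}/{rounds} randomly selected rounds")
```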
To understand why this happens, it is helpful to consider the centralized learning setting, where the training typically makes a pass over the entire dataset in every training round (epoch), and outlier and diverse data is considered in every training epoch. With random selection, outlier data may get omitted continuously for several rounds in FL, especially with typical values of \(|S^{(r)}|\). To further illustrate the significance, we consider a real-world use-case from Senior Care focusing on smartspaces and assisted living (name hidden for anonymity). One application investigated for FL is Arrhythmia detection using ECG signals [65] from wearables where data exhibits non-IIDness, as more data points are recorded for normal heartbeats. Abnormal heartbeats are recorded in devices worn by people with heart aliments, a small fraction of all the parties in FL [65]. With random selection in any FL algorithm, there is always a higher chance of selecting a party with majority Normal beats at any round, biasing the model towards classifying most heartbeats as Normal. ### Platform Heterogeneity : Stragglers In federated learning (FL) deployments in the real world, Platform Heterogeneity plays a significant role in the convergence of the global model. Platform heterogeneity across different parties causes some of them to be stragglers which exhibit intermittent failure. Stragglers are devices that take longer than expected in an FL environment to complete their local training. These gadgets have the potential to stall the FL process and perhaps fail. Stragglers can appear in real-world FL deployments for a variety of reasons. Among the most popular explanations are: * Data transfer between devices may experience delays due to network congestion. As a result, some devices cannot get the information they require to finish a task as quickly as they should. * Stragglers can also be caused by device faults. A gadget might crash or run out of battery life, preventing it from finishing a task. * Devices deployed in challenged settings may be more likely to have restricted resources, such as memory or computing power. This is because the workload of an FL task may be too much for these devices to handle quickly. One of the major roadblocks in FL is straggler parties, among the \(S^{(r)}\). These parties stall the overall FL training as they do not communicate their models within the given time threshold for local training. This results in under-representing the straggler's data while aggregating the global model. ### Security and Privacy of FL Recent research [34, 26, 96, 31, 90, 97] on reverse engineering attacks has demonstrated that while parties in federated learning (FL) do not share training data, sharing model updates may not ensure privacy. Two common types of reverse engineering attacks are (i) Data reconstruction attacks and (ii) Membership/Property inference attacks. Data reconstruction [31, 96, 97] aims to reconstruct most, if not all of the original training data of a participant from the global model updates (gradients). Membership inference [76, 34] aims to determine whether a specific data point was used to train the global model. This can be done by analyzing the performance of the global model on carefully crafted inputs. Property inference aims to infer certain properties of the participants' data, such as their location or demographics. This can be done by analyzing the global model updates or by training a new model to predict these properties from the global model updates. 
Empirically, these attacks work well only on gradients corresponding to individual data items or on aggregated gradients computed from a small number (e.g., up to batch sizes of 16 for [31, 96, 97]) of data items. Therefore, individual model updates have to be private, and FL is not considered private when the number of parties is less than three. Secure aggregation [13, 37, 11] is a vital technique in federated learning (FL) that safeguards data privacy and security from individual devices or clients, facilitating collaborative model training. The primary objective is to preserve the privacy of individual updates while still allowing them to contribute to enhancing the global model. Four key secure aggregation techniques, which can be combined as needed for defense-in-depth, include: (i) Homomorphic Encryption (HE), (ii) Differential Privacy (DP), (iii) Secure Multi-Party Computation (SMPC), and (iv) Trustworthy Execution Environments (TEEs) or Secure Enclaves. HE (e.g., [68, 14]) for FL involves parties sharing a common public/private keypair. Parties encrypt the model updates before transmitting them to the aggregator which performs the aggregation computation on the encrypted data. Aggregated model updates can then be decrypted by the parties. HE does not change model utility, but is computationally expensive - two or three orders of magnitude even with the use of specialized hardware [92, 81, 82], and also results in significant increase in the size of the model update (e.g., 64\(\times\) for Paillier HE [92] which is sufficient for many FL algorithms). While HE is practical for FL in _cross-silo_ datacenter/cloud settings where its latency and bandwidth requirements can be accommodated, the need for all participants to share a common keypair makes it impractical for large scale settings. Differential privacy [5, 77, 79, 89] is a statistical technique that adds controlled noise to the model updates before aggregation. This noise ensures that individual updates do not reveal sensitive information about the data used for training, defeating data reconstruction and membership inference attacks. Some techniques also clip the model updates (gradients) to a predefined range before the addition of noise. This further limits the information that can be extracted from the updates. By carefully controlling the amount of noise, differential privacy provides a trade-off between privacy and utility. However, this is non-trivial, and model utility is very sensitive to an optimal choice of hyperparameters which is difficult in large-scale FL settings. Secure Multi-Party Computation (SMPC) protocols [48, 22, 23, 40, 13] allow multiple parties (devices in this context) to perform computations on their inputs while keeping those inputs private. In FL, SMPC is used to aggregate model updates securely. Each party encrypts its update, and multiple parties perform computations on the encrypted updates without revealing the raw data. The result is an aggregated update that can be used to update the global model. Many SMPC protocols also suffer similar drawbacks as HE, with increased communication and computation time, lower in magnitude than HE but still significant, and the need for effective key distribution. A trusted execution environment (TEE) [1, 2] is a secure area of a main processor that provides security features for isolated execution and guarantees the integrity of applications executing within, along with the confidentiality of their data assets. 
ARM TrustZone [4] and Intel SGX [3] are examples of TEEs. For aggregation in FL, they are attractive because their computational overhead is low, and their computations and software can be audited by participants with the help of attestation services. At least one FL system deployed at scale - Papaya [37] uses TEEs for aggregation. ## 3 FLIPS: Design Participant selection is a key challenge in FL. It is the process of choosing which devices will participate in each round of training, and is predominantly random [55, 83, 11, 84]. There is existing research on participant selection to optimize communication costs and computation limitations, which does not consider parties' data distribution and data diversity. For example, [36] models the client selection process as a Lyapunov optimization problem. The authors propose a C2MAB-based method to estimate the model exchange time between each client and the server. They then design an algorithm called RBCS-F to solve the problem. VF-PS [43] is a framework for selecting important participants in vertical FL (VFL) efficiently and securely. It works by estimating the mutual information between each participant's data and the target variable and then selects the most important participants based on their scores. To ensure efficiency, VF-PS uses a group testing-based search procedure. To ensure security, it uses a secure aggregation protocol. VF-PS achieves the target accuracy faster than training a naive VFL model. FedMCCS [6] addresses challenges in using FL with IoT devices. FedMCCS considers the CPU, memory, energy, and time of the client devices to predict whether they are able to perform the FL task. In each round, FedMCCS maximizes the number of clients while considering their resources and capability to train and send the needed updates successfully. FedMCCS outperforms other approaches by reducing the number of communication rounds to reach the intended accuracy, maximizing the number of clients, ensuring the least number of discarded rounds and optimizing the network traffic. [67] takes a similar approach as FedMCCS, prioritizing resource availability. Another approach considers data valuation for compensating valuable data owners [87]. They use the Shapley value as a fair allocation mechanism that assigns a value to each data source based on its contribution to the model's performance to enhance system robustness, security, and efficiency; such mechanisms could be used for aiding participant selection. [18] proposes Power-of-Choice, a communication- and computation-efficient client selection framework, which randomly selects a fraction of participants in each round, but biases selection towards those with higher local losses. [18] proves theoretically and empirically that this biased selection leads to faster convergence. Oort [51] takes a similar approach. In this section, we introduce FLIPS, our approach to intelligent participant selection that improves model convergence, addresses diversity in datasets and incurs low communication overheads. First, FLIPS mitigates the above-mentioned class imbalance issue, (Algorithm 1) and improves feature and participant representation in FL by selecting parties that are likely to have dissimilar data at each FL round. The techniques implemented within FLIPS are based on one of the federated learning's core goals - to increase the diversity of data and ensure that the global model in each FL round does not overfit local data. 
### Finding Similar Parties using Clustering The objective of FLIPS is to identify sets of similar parties by measuring the label distribution of each party and using it as a semantic representation of a party's local dataset. There are \(N\) parties, \(p_{1},p_{2},\ldots,p_{N}\), with datasets \(d_{1},d_{2},\ldots,d_{N}\) and label distributions \(ld_{1},ld_{2},\ldots,ld_{N}\), respectively. The label distribution vector at party \(p_{i}\) with dataset \(d_{i}\) is \(ld_{i}=\{l_{1},l_{2},\ldots,l_{g}\}\), where \(l_{j}\) is the number of data points for the \(j^{th}\) label present in the party and \(g\) is the number of labels in the dataset. The set of label distributions for all \(N\) parties is denoted as \(LD=\{ld_{1},ld_{2},\ldots,ld_{N}\}\). Our next step is to group the label distributions from various parties into non-intersecting subsets that are similar. Here, we define a similarity metric between subsets that is based on the average distance between all objects in a given subset and the average distance between subsets. Let \(S_{m}\) be the set of all possible subsets of \(LD\) of size \(m\), where \(m\in[1,N]\). Hence, there are \(\binom{N}{m}\) subsets within \(S_{m}\). Let \(L_{i}=\{ld_{a},ld_{b},\ldots\}\) be a subset in \(S_{m}\); \(\Delta(L_{i})\) is the average Euclidean distance between all objects in the set \(L_{i}\). Given 2 subsets \(L_{i}\) and \(L_{j}\), \(\delta(L_{i},L_{j})\) is the average Euclidean distance between objects in sets \(L_{i}\) and those in \(L_{j}\). The idea behind finding similar parties is to find \(k\) disjoint subsets across all \(S_{m}\), where \(m\in[1,N]\), such that: \[\underset{S_{m}}{\text{minimize}}\ \frac{1}{k}\sum_{i=1}^{k}\sum_{j=1}^{k}\frac{\Delta(L_{i})+\Delta(L_{j})}{\delta(L_{i},L_{j})},\quad i\neq j. \tag{1}\] Note that this problem is a subset enumeration problem, where we have to find \(k\) subsets across all \(S_{m}\), where the condition is (1). This problem is known to be NP-complete [80]. There are several heuristics to solve this problem; we use K-Means [59] clustering, which obtains a \(k\)-partition, where \(k\) is unknown, across all \(S_{m}\), denoted by clusters \(C=(C_{1},C_{2},\ldots,C_{k})\), such that: \[\text{minimize}\quad\sum_{x=1}^{N}\sum_{y=1}^{k}\omega_{xy}||ld_{x}-c_{y}||^{2} \tag{2}\] Here, \(\omega_{xy}\) is a binary variable that indicates whether the \(x\)th datapoint is assigned to the cluster \(C_{y}\), whose centroid is \(c_{y}\). We opted for K-Means due to its simplicity and lower time complexity. K-Means clustering has a time complexity of \(O(NkI*d)\), where \(N\) is the number of data points, \(k\) is the number of clusters, \(I\) is the number of iterations, and \(d\) is the number of dimensions. This makes it a suitable choice for resource-limited settings. Furthermore, we use _kmeans_ ++ [9] to initialize the centroids in K-Means clustering. This has been demonstrated to scale to millions of data points, i.e., parties [10]. However, K-means does require the number of clusters \(k\) to be known beforehand, which is a problem in FL. The knowledge about the number of unique label distributions in the parties' datasets is unknown while performing clustering. Figure 2: Elbow point determination for optimal \(k\)
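A minimal sketch of this clustering step is shown below: each party's label-distribution vector \(ld_i\) is a count histogram over the \(g\) labels, and the vectors are grouped with k-means++-initialized K-Means. The scikit-learn usage, helper names and toy data are illustrative assumptions rather than the exact FLIPS implementation; how \(k\) is chosen is discussed next.

```python
import numpy as np
from sklearn.cluster import KMeans

def label_distribution(labels, num_labels):
    """ld_i: per-label counts of a party's local dataset."""
    return np.bincount(labels, minlength=num_labels).astype(float)

def cluster_parties(label_dists, k, seed=0):
    """Group parties with similar label distributions (Equation 2) using K-Means
    with k-means++ initialization."""
    X = np.vstack(label_dists)
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=seed).fit(X)
    return km.labels_, km.cluster_centers_

# Illustrative usage: 6 parties, 3 labels, two obvious groups of label distributions
parties = [np.array([0, 0, 0, 1]), np.array([0, 0, 1]), np.array([0, 1, 0, 0]),
           np.array([2, 2, 2, 1]), np.array([2, 2, 1]), np.array([1, 2, 2, 2])]
dists = [label_distribution(p, num_labels=3) for p in parties]
assignments, centroids = cluster_parties(dists, k=2)
```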
To find the optimal number of clusters \(k\), which is analogous to the number of unique label distributions, we use a purity metric called the Davies-Bouldin index (_dibi_) [24], which is the ratio of the intra-cluster distance to the inter-cluster distance and similar to (1). The optimal k is determined by : \[k_{optimal}=argmin_{k}\big{|}\frac{dbi(k)-dbi(k-1)}{dbi(k-1)}|. \tag{3}\] where dbi is the Davies-Bouldin index. When \(k\) is small, the clusters cannot accurately represent the unique label distributions, impacting FL performance and cost because there is no equitable representation at each round. When \(k\) is large, clustering leads to overfitting; clusters generated are sparse, impacting FL performance and cost. To determine the optimal cluster size in FLIPS _empirically_, we experiment with different cluster sizes in succession, repeated \(T=20\) number of times (because K-Means is sensitive to the centroid initialization) and average the \(dbi\) for each \(k\in\{2,\ldots,K\}\), where \(K=N\). This gives us \(T\) different \(dbi\) for each cluster size \(k\). The cluster size \(k\) for which there is a (first) sharp change in the slope of the curve (elbow point) is chosen as the optimal cluster size. As illustrated in Figure 2, the optimal cluster size at which there is a sharp change in slope for \(k\) vs \(dbi\) is 10. Hence, we choose 10 as the cluster size for K-Means. ### Intelligent Participant selection Given a set of clusters of parties, \(\mathcal{C}\), from the clustering technique described above, FLIPS (Algorithm 1) implements participant selection for a round by choosing one party at a time from each cluster in a round-robin manner until the number of parties required for the round, \(N_{r}\), is reached. Typically in FL training, the number of parties per round \(N_{r}\) is fixed across all the rounds. This ensures that \(N_{r}\) is spread among as many clusters as possible increasing the diversity of data in the FL training process. It is recommended that \(N_{r}\) be a multiple of the number of clusters \(|\mathcal{C}|\) since \(N_{r}\) can then be easily split among the number of available clusters (\(|\mathcal{C}|\)), ensuring equal representation from each cluster (\(\frac{N_{r}}{|\mathcal{C}|}\)). FLIPS also keeps track of the number of times a party was chosen to ensure that each party within a cluster is given an equal opportunity to participate. In the case where the number of parties per round is less than the number of clusters, not every cluster can participate in every round. So FLIPS additionally tracks the number of times a cluster is selected to participate. Consider the example of using ECG signals from wearables for Arrhythmia detection, FLIPS will improve label representation by picking parties with normal and abnormal heartbeats at each round, improving the detection rate for arrhythmia and preventing class/label imbalance. To improve participant representation, FLIPS will try to incorporate parties that were not used in the previous training rounds. This will help bring knowledge from a diverse set of participants and will make the global model more robust. This helps solve the data heterogeneity issues in FL. To mitigate the effect of Stragglers, FLIPS uses the popularly used over-provisioning technique [12]. Once we identify the average straggler rate \(strg\). FLIPS overprovisions \(strg*S^{(r)}\) parties in the subsequent training rounds. 
The overprovisioned parties in round \(r+1\) are selected from the clusters \(H^{r}_{sc}\) that the straggler parties in round \(r\) were a part of. In this manner, we maintain the representation of all the unique label distributions in FL. This is illustrated in Algorithm 1.

```
Participant side (each selected party i, round r):
  recv global model m^(r) from aggregator
  local model x^(r,1) <- m^(r)
  for k in {1, 2, ..., tau} do                        // tau local iterations
      compute local stochastic gradient g_i(x^(r,k))
      x^(r,k+1) <- optimizer(x^(r,k), -g_i(x^(r,k)), eta^(r))
  send x^(r,tau) to aggregator

Aggregator using FLIPS:
  LD = {ld_1, ld_2, ..., ld_n}
  C <- OPTIMAL_CLUSTERS(LD)
  H <- {}                                             // per-cluster heaps of parties
  H_c <- MIN-HEAP()                                   // clusters, keyed by times used
  H_s^r <- {}, H_sc^r <- MAX-HEAP(), count_strg <- 0, Stragglers = False
                                                      // straggler parties, their clusters and counts
  initial model m^1, parties per round N_r
  for each cluster c in C do
      c.picks <- 0, insert(H_c, c), h <- MIN-HEAP()
      for each party p in cluster c do p.picks <- 0, insert(h, p)
      H[c.id] <- h
  for r in {1, 2, ..., R} do
      S^(r) <- {}
      for i in {1, 2, ..., N_r} do
          c <- EXTRACT_MIN(H_c), h <- H[c.id], p <- EXTRACT_MIN(h)
          increment(p.picks, 1), insert(h, p)
          increment(c.picks, 1), insert(H_c, c)
          select unique parties: S^(r) <- S^(r) U {p}
      if Stragglers then
          for i in {1, 2, ..., int(strg * N_r)} do
              c <- EXTRACT_MAX(H_sc^r), h <- H[c.id]  // choose the cluster with the most stragglers
              p <- least-picked party in h not in H_s^r   // pick a non-straggler party in c
              S^(r) <- S^(r) U {p}
      send m^(r) to each i in S^(r)
      for recv model update x_i^(r) from each i in S^(r) do
          if x_i^(r) not recv then
              H_s^r <- H_s^r U {i}, add cluster of i to H_sc^r
              Stragglers = True, count_strg++
          else if i in H_s^r then
              H_s^r.remove(i), H_sc^r.remove(cluster of i)
      if len(H_s^r) == 0 then Stragglers = False
      aggregate x^(r) <- (1/N) * sum_{i in S^(r)} n_i * x_i^(r)
      m^(r+1) <- optimizer(m^(r), x^(r))
      strg <- (strg * N_r + count_strg) / N_r
```
**Algorithm 1** FLIPS Party & Straggler Handling

Next, each party establishes a secure channel (e.g., a TLS channel) with the TEE for transmitting secrets, the label distribution vector in our case. The clustering code (a K-means variant) computes the clusters on these label distribution vectors in the secure enclave.
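One possible sketch of what that clustering routine could look like, combining k-means++ clustering with the dbi-based choice of \(k\) described in the previous subsection, is given below. The scikit-learn calls, parameter values and names are illustrative assumptions; this is not the FLIPS codebase.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def dbi_scores(label_dists, k_range=range(2, 21), trials=20, seed=0):
    """For each candidate cluster count k, average the Davies-Bouldin index over
    several k-means++ initializations; the elbow (first sharp change in slope)
    of the resulting curve is then taken as the cluster count."""
    X = np.vstack(label_dists)
    scores = {}
    for k in k_range:
        vals = []
        for t in range(trials):
            labels = KMeans(n_clusters=k, init="k-means++", n_init=1,
                            random_state=seed + t).fit_predict(X)
            vals.append(davies_bouldin_score(X, labels))
        scores[k] = float(np.mean(vals))
    return scores

# Illustrative usage on hypothetical label-count vectors for 200 parties and 10 labels
rng = np.random.default_rng(0)
dists = rng.integers(0, 50, size=(200, 10)).astype(float)
dbi_by_k = dbi_scores(dists, k_range=range(2, 12), trials=5)
```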
The TEE maintains the clusters securely and deletes all information at the end of the FL job (this can be attested by the attestation server). ### FLIPS in Context FLIPS differs from other FL systems, in that it targets participant selection using label distribution clustering. k-means++ clustering based on label distributions has the advantage of being fast and minuscule relative to FL training time, and has been demonstrated to scale [10, 66] to millions of data points (participants). Clustering has to be performed only once, as long as the set of participants or the data at participants does not change significantly. Centralizing this clustering and participant selection in a TEE also has the advantage of fitting nicely into the predominant mold of FL, as it is deployed today, based on a single aggregator [45]. However, FLIPS can also be used with a distributed aggregator (e.g., [11, 37, 39, 41]) by separating the clustering and participant selection module from the aggregation module. This applies to cloud-hosted aggregation or aggregation using homomorphic encryption or secure multiparty computation. In fact, clustering, participant selection and aggregation are logically separate with clear interfaces, and can be individually hosted and secured. This also implies that FLIPS is as scalable as the underlying aggregation algorithm and the method used to secure it. ## 4 Validating FLIPS Our primary goal is to examine the impact of FLIPS's participant selection on model convergence and accuracy at a reasonable scale. Hence, we implemented and evaluated FLIPS in a distributed cluster environment. All the parties, i.e. participants, were executed as nodes on a cluster for local training as seen in Figure 3. The cluster consists 13 nodes, each with four Nvidia V100 (16GB) GPUs, Intel(R) Xeon(R) Silver 4114 CPU, and \(\sim\) 16 GB RAM per node. We train local models using the GPUs available on this cluster. The aggregator node is executed on a machine with a 2.9Ghz 6-core Ryzen 5 processor with 16GB RAM and 512GB SSD. To enable trusted execution for label distribution clustering, we use the AMD Secure Encrypted Virtualization (SEV) [1] on the aggregator. Figure 4: Workflow: Intelligent Participant Selection in FLIPS Figure 3: End-to-end integrated system design for Private Clustering in FLIPS ### Techniques for Comparison We compare FLIPS with 3 popular participant selection strategies across different datasets and study the impact on convergence in the presence of stragglers. The first is the widely-used random selection [63, 55, 72] method. This selects each party with the same probability and can lead to class imbalance, where certain classes are underrepresented, as explained in Section 2.2. This may cause the model to be biased towards the overrepresented classes, leading to poor performance. This harms convergence as it takes more training rounds to reach the desired accuracy. Second, we compare FLIPS with OORT [51]. OORT uses the idea that parties with a higher local loss can contribute more to an FL job [42] and introduces a statistical and systemic _utility metric_ for participant selection. The system sorts the parties according to the party utility metric which is a combination of its statistical and systemic utility and selects parties with a higher utility and explores new parties at each training round. The third strategy is GradClus [29], which uses the idea of clustering gradients from parties to identify parties with similar data. 
It performs hierarchical clustering over a similarity matrix constructed across gradients from all the parties in the FL job. The gradients assigned in the beginning are random numbers and get iteratively updated as the party gets picked. At the aggregator, GradClus [29] performs hierarchical clustering and \(S^{(r)}\) number of clusters are formed. GradClus chooses one party from each cluster randomly. The fourth strategy is TiFL [16], which groups parties into tiers based on their training performance and selects parties from the same tier in each training round to mitigate the straggler problem. To further solve the non-IID problem, TiFL employs an adaptive tier selection approach to update the tiering on the fly based on the observed training performance and accuracy. ### Datasets and models used We focus on two real-world datasets from the healthcare/senior care domain and two benchmark datasets: * MIT-BIH-ECG-Dataset [65] partitioned across 200 parties trained using a 1-D CNN [56]. * Skin cancer HAM10000 [19] partitioned across 200 parties trained using Densenet-121 [35]. * FEMNIST [55] partitioned across 200 parties trained using Le-Net5 [52]. * FashionMNIST [88] partitioned across 200 parties trained using Le-Net-5 [52]. **The MIT-BIH ECG dataset**[65] comprises digitized electrocardiogram (ECG) recordings used for arrhythmia identification. Collected by MIT Biomedical Engineering Department and Beth Israel Hospital, it includes both normal and aberrant rhythms. The dataset is annotated with AAMI labels [85], a widely accepted standard for ECG rhythm classification. These labels define performance criteria, improve algorithm generalization, and include N (normal beats), S(supra-ventricular ectopic beats), V (ventricular ectopic beats), F (fusion beats), and Q (unclassifiable beats). The dataset predominantly comprises of N beats, necessitating federated learning (FL) to enhance label and participant representation. The dataset is distributed across 200 parties in a non-IID manner. Training involves a 1-D CNN with a learning rate 0.001 and a decay applied every 20 rounds. FL training is limited to a maximum of 400 rounds. The **HAM10000** dataset [19] contains diverse dermatoscopic images of pigmented skin lesions. It includes 10015 images representing important skin cancer diagnostic categories: akice, bcc, bkl, df, mel, nv, and vasc. The dataset is suitable for training and evaluating machine learning models for automated diagnosis. The nv images are the most abundant, potentially dominating other categories due to their prevalence caused by UV radiation. This non-IID behavior highlights the need for federated learning. The dataset is distributed across 200 parties, and training involves DenseNet121 with a learning rate of 0.001. A decay is applied every 30 FL rounds, and the maximum number of FL rounds is 400. The **EMNIST** (Extended MNIST) dataset [20] contains handwritten letters and numbers and is an expansion of the MNIST dataset. The EMNIST dataset contains characters from numerous alphabets, including numerals and letters from the English alphabet. It also has a collection of symbols from different languages. This dataset is frequently used for testing and training machine learning models for tasks like text classification and handwriting recognition. We subsample 10 lowercase characters ('a'-'j') from EMNIST. This federated variant of EMNIST is known as FEMNIST [55]. We train Le-Net-5 [52] model at each party which outputs a class label between 0 (a) and 9 (j). 
**Fashion-MNIST**[88] is a dataset of images of clothing items that is commonly used as a benchmark for machine learning models. It consists of 60,000 training and 10,000 test images, each 28x28 pixels in size and labeled with one of 10 different clothing categories, such as t-shirts, trousers, bags, etc. FL applied to the Fashion-MNIST dataset mimics a personalized customer recommendation system. A model trained on the customer's device or organization using FL will understand the customer's preferences, which can then be used to suggest personalized clothing recommendations. The dataset is distributed across 100 parties and training involves Le-Net5 [52]. ### Emulating Non-IID data distributions As in the Tensorflow Federated [12] and LEAF [15] FL benchmarks, we emulate a non-IID setting in our experiments by using different data partitioning strategies. We use Dirichlet Allocation [91], a widely used technique [12, 15], to partition a dataset among several parties in a non-IID manner. It samples \(p\sim Dir_{N}(\alpha)\), where \(\alpha\) is the control parameter and \(p_{l,i}\) becomes the proportion of the number of data points of label \(l\) allocated to the party \(i\). An \(\alpha\) of 0 corresponds to each party having data corresponding to only one label (which is non-IID at its extreme) and an \(\alpha\ \geq 1\) corresponds to an IID distribution. As recommended in other federated learning research [30, 71, 15], we evaluate FLIPS using two different values of \(\alpha=0.3\) and \(0.6\). At each round, we sample 20% or 15% of the parties, which is more than the number of clusters of similar parties formed in FLIPS. ### Metrics In FL, datasets are local to the parties, which partition them into local training and test sets. To compare FLIPS and other techniques for this study, we use a global test set consisting of data corresponding to all the labels in the distributed dataset. The datapoints in this dataset are unknown to any party in our emulation. Hence it is a valid test set. This test set also helps us evaluate the techniques at each training round to get a closer look at convergence. Generally, global test sets are not used in FL practice; just while designing FL algorithms. We maintain the global test set inside the aggregator's TEE in our implementation. We report the test accuracy of the global model against the test dataset after each round. This test accuracy is computed as: \[Acc=\frac{lA_{1}+lA_{2}+\ldots+lA_{m}}{m}\] \[lA_{i}=\frac{\text{Correct predictions for label }i}{\text{Total number of datapoints for label }i}\] This is done to mitigate the effect of label imbalance while computing the accuracy for each test set, as each label may have a different number of datapoints. The accuracy numbers reported are an average of 6 runs for each experiment. We report both (i) the highest accuracy obtained using a specific FL technique within the FL rounds threshold and (ii) the number of communication rounds needed for the global model to reach a target/desired accuracy. The latter depicts how fast any participant selection technique converges. ## 5 Results We present our results in Tables 1-24.
Without stragglers, we perform a total of 60 (\(5\times 3\times 2\times 2\)) experiments - comparing FLIPS with 4 other participant selection mechanisms (random, GradClus, Oort, TiFL), for 3 different FL algorithms (_FedAvg_, _FedProx_ and _FedYoGi_), 2 levels of non-IIDness (\(\alpha=0.3\) and \(\alpha=0.6\)) and 2 levels of participation (20% and 15%) per dataset. From these 60 experiments, we choose the 3 best-performing techniques FLIPS, OORT and TiFL and observe how they perform in the presence of stragglers. We emulate stragglers by dropping 10% or 20% of the participants involved in an FL round, resulting in another 72 experiments per dataset.

Figure 5: Convergence plots on MIT BIH ECG Dataset, FL Algorithm: FedYoGi.

### TEE Clustering Overhead

Clustering label distributions, by itself, is fairly efficient and takes less than one second for all our datasets (\(\approx\)100 ms for the HAM10000 dataset with 200 parties, and lower for the other datasets, on a 2.3 GHz 4-core Intel Core i9 server with 16 GB RAM and a 512 GB SSD). The overhead of using TEEs to perform clustering is approximately 5% (105.4 ms vs. 100.5 ms) in the case of AMD SEV on a server running the aggregator. Hence, using TEEs gives us a reasonable way to implement private label distribution clustering for FLIPS.

### Data Heterogeneity

Tables 1 - 8 summarize our results for FedYogi. Tables 9 - 16 summarize our results for FedProx. Tables 17 - 24 summarize our results for FedAvg. At a high level, we observe that FLIPS can reach target accuracies much faster and achieve much higher peak accuracy.
Table 2: MIT ECG Dataset: highest accuracy attained, FL Algorithm: FedYoGi. Columns suffixed (10%) and (20%) report results at the corresponding straggler rates.

| \(\alpha\) | party % | Random | FLIPS | OORT | GradClus | TiFL | FLIPS (10%) | OORT (10%) | TiFL (10%) | FLIPS (20%) | OORT (20%) | TiFL (20%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.3 | 20 | 44.86 | 78.53 | 61.75 | 48.62 | 48.21 | 74.66 | 37.27 | 50.51 | 74.20 | 43.64 | 44.37 |
| 0.3 | 15 | 45.48 | 76.92 | 71.35 | 45.40 | 47.67 | 71.09 | 49.19 | 48.41 | 67.14 | 42.34 | 49.09 |
| 0.6 | 20 | 48.55 | 63.79 | 63.74 | 48.10 | 41.97 | 57.21 | 42.55 | 52.18 | 60.50 | 47.15 | 48.20 |
| 0.6 | 15 | 48.83 | 61.35 | 57.47 | 53.54 | 53.16 | 60.55 | 49.42 | 54.13 | 58.18 | 56.54 | 54.19 |

Table 4: HAM10000 (Skin lesion) dataset: highest accuracy attained within the rounds threshold, FL Algorithm: FedYoGi.

| \(\alpha\) | party % | Random | FLIPS | OORT | GradClus | TiFL | FLIPS (10%) | OORT (10%) | TiFL (10%) | FLIPS (20%) | OORT (20%) | TiFL (20%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.3 | 20 | 48.26 | 66.76 | 61.12 | 46.48 | 42.39 | 63.39 | 46.28 | 49.11 | 64.13 | 38.25 | 41.59 |
| 0.3 | 15 | 41.35 | 66.41 | 59.86 | 45.25 | 43.70 | 62.76 | 43.14 | 47.09 | 64.42 | 49.86 | 51.81 |
| 0.6 | 20 | 46.50 | 62.84 | 62.36 | 54.74 | 45.17 | 60.58 | 41.94 | 44.72 | 60.71 | 43.08 | 44.81 |
| 0.6 | 15 | 46.55 | 62.39 | 59.79 | 54.66 | 55.94 | 61.78 | 48.13 | 50.46 | 60.86 | 43.46 | 46.85 |

Table 5: FEMNIST dataset: rounds required to attain the target accuracy (80%), FL Algorithm: FedYoGi.

| \(\alpha\) | party % | Random | FLIPS | OORT | GradClus | TiFL | FLIPS (10%) | OORT (10%) | TiFL (10%) | FLIPS (20%) | OORT (20%) | TiFL (20%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.3 | 20 | 146 | 62 | 81 | 168 | 124 | 69 | 102 | 113 | 78 | 85 | 194 |
| 0.3 | 15 | 181 | 62 | 83 | 168 | 127 | 76 | 105 | 141 | 79 | 103 | 106 |
| 0.6 | 20 | 107 | 68 | 66 | 89 | 104 | 89 | 69 | 103 | 89 | 84 | 94 |
| 0.6 | 15 | 115 | 75 | 55 | 115 | 106 | 86 | 83 | 108 | 78 | 88 | 106 |

The corresponding tables for the remaining dataset and FL-algorithm combinations are:

* Table 6: FEMNIST dataset: highest accuracy attained within the rounds threshold, FL Algorithm: FedYoGi.
* Table 9: MIT ECG Dataset: rounds required to attain 60% accuracy, FL Algorithm: FedProx.
* Table 10: MIT ECG Dataset: highest accuracy attained within the rounds threshold, FL Algorithm: FedProx.
* Table 11: HAM10000 (Skin Lesion) dataset: rounds required to attain the target accuracy (60%), FL Algorithm: FedProx.
* Table 12: HAM10000 (Skin Lesion) dataset: highest accuracy attained within the rounds threshold, FL Algorithm: FedProx.
* Table 17: MIT ECG Dataset: rounds required to attain the target accuracy (60%), FL Algorithm: FedAvg.
* Table 19: HAM10000 (Skin Lesion) dataset: rounds required to attain the target accuracy (60%), FL Algorithm: FedAvg.
* Table 20: HAM10000 (Skin Lesion) dataset: highest accuracy attained within the rounds threshold, FL Algorithm: FedAvg.
* Table 21: FEMNIST dataset: rounds required to attain the target accuracy (80%), FL Algorithm: FedAvg.

When considering the number of rounds needed to reach targeted accuracies for the MIT-ECG (60%) and HAM10000 (60%) datasets, we observe that Random selection, TiFL and Gradient Clustering take more than 400 rounds in the case of FedAvg, FedYogi and FedProx. While the performance of Random selection is not surprising considering all the reasons outlined so far in this paper, GradClus's performance is surprising, given that it also clusters parties, albeit using gradients. TiFL's adaptive tiering approach is unable to group the parties with under-represented labels into a single tier, which explains its performance. OORT's statistical utility function does enable it to perform better than Random, TiFL and GradClus. This trend can also be observed from example convergence plots in Figures 5 and 7.
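In the comparisons that follow, a statement that one technique is \(k\times\) faster than another refers to the ratio of communication rounds needed to reach the target accuracy,

\[\text{speedup}=\frac{\text{rounds}_{\text{baseline}}}{\text{rounds}_{\text{FLIPS}}}.\]

For example, reading Table 5 above (FEMNIST, \(\alpha=0.3\), 20% participation, no stragglers), OORT needs 81 rounds while FLIPS needs 62, a speedup of \(81/62\approx 1.3\times\).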
From Table 1, we observe that FLIPS reaches target accuracy up to 1.15-1.86\(\times\) faster, i.e., in fewer rounds, when compared to OORT when \(\alpha=0.6\), and 1.08-2.37\(\times\) faster when \(\alpha=0.3\). Hence, when the degree of "non-IIDness" of the data increases (corresponding to decreasing \(\alpha\) as explained in Section 4.3), FLIPS performs better. This reduction in the number of rounds not only saves time but also results in much lower communication costs, as a result of having to participate in far fewer rounds. In the case of the HAM10000 dataset in Table 3, the performance benefits of FLIPS are more pronounced. The speedup (fewer rounds) is 1.32-1.52\(\times\) for \(\alpha=0.6\) and 1.56-2.10\(\times\) for \(\alpha=0.3\). We see a similar trend in the case of FedProx in Tables 9 and 11, with 1.12-2\(\times\) speedup for \(\alpha=0.6\) and 1.35-2.14\(\times\) speedup for \(\alpha=0.3\) for the MIT-ECG and HAM10000 datasets. Also, in a world where training jobs are rerun for 1-2 _percentage point_ improvements in accuracy (in absolute terms), Tables 2 and 4 for FedYogi illustrate that FLIPS improves peak model accuracy by \(>30\) percentage points for MIT-ECG and \(>20\) percentage points for HAM10000 when \(\alpha=0.3\) vs. random selection. Even for a "less non-IID" distribution corresponding to \(\alpha=0.6\), FLIPS improves accuracy by more than 12 and 15 percentage points for MIT-ECG and HAM10000, respectively. These benefits endure when compared to GradClus (\(\approx\) 8-30 percentage point improvements in accuracy) and TiFL (\(\approx\) 6-30 percentage point improvements in accuracy). They are lower when compared to OORT, but still significant - \(\approx\) 3-15 and 2-5 percentage points corresponding to \(\alpha\) of 0.3 and 0.6, respectively. A similar trend is seen for FedProx and FedAvg in Tables 10, 12, 18 and 20, where the accuracy of FLIPS increases by tens of percentage points vis-a-vis random selection, TiFL and GradClus. Unlike in FedYogi and FedProx, where the peak accuracy of FLIPS is higher than that of OORT, the peak accuracies of OORT are closer to FLIPS in the case of FedAvg, and are significant in many cases, as illustrated in Tables 17-20.

Figure 8: Convergence plots on HAM10000 (Skin Lesions) Dataset in the presence of Stragglers, FL Algorithm: FedYoGi.
Figure 9: Convergence plots on FEMNIST dataset, FL Algorithm: FedYoGi.

Next, we move on to the FEMNIST dataset. This dataset is more IID in its centralized version, and in Table 5 we observe that for all the party selection techniques, the target accuracy is reached within the threshold of 200 rounds. The highest accuracy obtained is 86.86% for FLIPS. In Table 5, we can see that FLIPS achieves the target accuracy 1.3x faster than the most comparable technique, OORT, for \(\alpha=0.3\), while for \(\alpha=0.6\), OORT reaches the target accuracy in almost the same number of rounds as FLIPS. FLIPS is 1.5x - 2.9x faster than Random selection, 1.3x - 2.7x faster than GradClus and 1.4x - 2x faster than TiFL. In Figure 11, we can see that for \(\alpha=0.3\), FLIPS performs better than all the other techniques, while for \(\alpha=0.6\), OORT performs similarly to FLIPS. For the Fashion MNIST dataset, FLIPS performs better than all the other techniques, as can be seen in Figure 7. All the techniques attain the target accuracy in the given rounds threshold.
FLIPS is 1.2x - 1.5x faster than Random selection, 1.45x - 2x faster than OORT, 1.13x - 1.66x faster than GradClus and 1.73x - 2.2x faster than TiFL. FLIPS attains the highest accuracy of 85.14%, which is higher than all the other techniques. This is consistent across all the other FL algorithms. We also observe that FLIPS achieves the highest model accuracy in all scenarios. This improvement in accuracy can be credited to the fact that FLIPS improves the accuracy of the underrepresented labels, which can be seen in Figure 13: FLIPS brings in the most accuracy improvement for the underrepresented labels.

Figure 10: Convergence plots on FEMNIST dataset in the presence of stragglers, FL Algorithm: FedYoGi.
Figure 11: Convergence plots on Fashion MNIST dataset, FL Algorithm: FedYoGi.

### Platform Heterogeneity

We take the best performing techniques, FLIPS and OORT, and evaluate them under different platform heterogeneity constraints. OORT selects 1.3x the parties in FL at each round to overprovision for straggler parties. In Tables 1 and 2, we can observe that under a 10% straggler rate OORT is unable to attain the target accuracy even after 400 rounds of training, while FLIPS attains the target accuracy 3 out of 4 times, missing the target accuracy by a mere 2.8% when \(\alpha=0.6\) and party % = 20. Under the 20% straggler rate, the results are similar: OORT cannot reach the target accuracy in any of the 4 settings, while FLIPS attains the target accuracy 3 out of 4 times. FLIPS attains an accuracy of 74.66%, which is the highest in the presence of 10% stragglers, and 74.20% with 20% stragglers. This is approximately 4 percentage points less than the 0% straggler rate, i.e., the ideal scenario. In Table 1 and Table 2, we can see that for a 10% straggler rate, FLIPS is faster than OORT by 1.3x - 2x, except for the \(\alpha=0.6\) and party % = 20 condition, and attains 11-38 percentage points higher accuracy than OORT. For the 20% straggler rate, FLIPS is faster than OORT by 1.2x - 2x, except for the \(\alpha=0.6\) and party % = 15 condition, and attains 2% - 30% higher accuracy than OORT. Figure 6 also shows that FLIPS is more robust to stragglers than OORT. For the HAM10000 (Skin Lesion) dataset, FLIPS outperforms OORT across all straggler rates. This can be seen in Tables 3 and 4. OORT cannot reach the target accuracy in any of the cases, while FLIPS does in all of them. FLIPS is 1.2x - 1.9x faster than OORT under a 10% straggler rate, and 1.1x - 2.1x faster than OORT under a 20% straggler rate. FLIPS attains 13 - 17% higher accuracy in the classification of skin lesions under a 10% straggler rate and 14 - 26% higher accuracy under a 20% straggler rate. Figure 8 depicts the better performance of FLIPS under stragglers via convergence curves.

Figure 12: Convergence plots on Fashion MNIST dataset in the presence of stragglers, FL Algorithm: FedYoGi.
Figure 13: Convergence curves on underrepresented labels for ECG and HAM10000 datasets.

In the case of the FEMNIST dataset in Table 5, FLIPS outperforms OORT and TiFL; when \(\alpha=0.6\), the performance is comparable to OORT. For \(\alpha=0.3\), FLIPS is 1.3x - 1.4x faster than OORT and 1.15x to 1.85x faster than TiFL, while for \(\alpha=0.6\), OORT's performance is similar to FLIPS. This is because the data distribution is more IID for the 10% straggler rate. For 20%, FLIPS is 1x - 1.3x faster than OORT and is 1.15 - 1.25x faster than TiFL.
Additionally, FLIPS achieves 1.5 - 3.5 % higher accuracy than OORT, for a 10 % straggler rate, while for the 20 % straggler rate FLIPS attains higher accuracy in all the cases by 0.7 - 2 % as seen in Figure 5. For Fashion MNIST we observe that FLIPS outperforms OORT and TiFL in all settings. It achieves 3 - 4 % points improvements in accuracy as seen in Table 8 and is 1.3 - 2.2 faster than OORT and TiFL. These improvements are consistent across FedAvg and FedProx too. FLIPS is robust against data heterogeneity and platform heterogeneity (presence of stragglers) where data distributions are non-IID. Using intelligent party selection, we improve the terminal accuracy and reduce the communication overheads in FL by using fewer rounds than other techniques. To summarize FLIPS provides a middleware support system for FL by: * Improving the terminal accuracy when the communication overheads (FL rounds) are fixed. * Reducing communication overheads required to attain the desired/target accuracy. * Reducing the time required for FL to attain a target accuracy by lowering the number of FL rounds required. ## 6 Related Work The Intelligent Participant Selection in FLIPS deals both with platform and data heterogeneity using a data-driven approach to cluster similar parties and outperforms existing techniques like OORT and GradCls. **Data Heterogeneity**: Non-IID datasets in FL introduce client drift in the global model, which hampers its convergence. Many solutions exist to reduce client drift and solve data heterogeneity. [47, 46] propose a client-drift correction update \((m-m_{p})\) between the server model (\(m\)) and each party's model \(m_{p}\) to mitigate client drift. This improves convergence. [30] introduces a penalty and gradient correction term in the local loss function to account for the local drift to bridge the gap between the \(m\) and \(m_{p}\) for faster convergence in non-IID settings. [7] performs more optimizations on the local/party level by introducing dynamic regularization terms to bring the global and local models closer, reducing client drift. Several studies have examined clustering techniques to personalize models in FL by grouping similar parties in FL based on local model similarity [78]. [70] discuss clustering local model updates using cosine similarity to identify parties with similar data distributions and address data heterogeneity. They create personalized models for each group of parties after a fixed interval, involving re-clustering periodically. IFCA [32] assigns each party a cluster identity based on their data and tailors model parameters for each cluster using gradient descent to improve model convergence for similar parties. FedLabCluster [93] performs label presence clustering, aggregating models within each cluster to enhance convergence for models specific to each cluster. Another study [54] investigates sharing encoded data among parties and clustering datasets using K-Means to achieve high clustering accuracy in FL. CMFL [86] dynamically identifies irrelevant local model updates by comparing them to the global model, and discards updates irrelevant to the global model, addressing data heterogeneity in non-IID cases. In a hierarchical federated learning (HFL) setting, [25] improves convergence by selecting parties with the lowest KL divergence between local and edge aggregator's data. FedCBS [94] computes the Quadratic Class-Imbalance Degree from label distributions to choose parties with more balanced grouped datasets, addressing data heterogeneity. 
However, none of these studies focus on intelligent participant selection or discuss confidentiality approaches in FL.

**Platform Heterogeneity**: Solutions like Aergia [21] deal with platform heterogeneity by offloading the training of stragglers to other parties with similar datasets. This reduces the time required for federated learning while maintaining accuracy similar to the baseline techniques. FLIPS leverages the knowledge of similar parties to perform participant selection across parties to make a 2-fold improvement across accuracy and training time, using fewer communication rounds in intermittent FL scenarios. [50] solves the straggler issue by using Locality Sensitive Hashing to cluster models/parties and drops duplicate and slow model updates. Further, it requires parties to train locally without knowing whether their models will be used, often wasting local computing resources, which may be undesirable in edge settings. FedLesScan [28], a clustering framework, groups parties into three clusters: rookies, participants and stragglers, based on device variations. Parties are selected from these clusters to mitigate the effect of stragglers and so reduce training time and cost. FLIPS, in contrast, uses a more data-driven approach to pick parties that compensate for stragglers. [74] uses a mechanism that selects faster parties in the beginning to attain a target accuracy and then incorporates the slower parties to improve the global model. [69] deals with platform heterogeneity by using model grouping and weighting based on arrival delay to identify stragglers, and entropy-based approaches to mitigate adversaries. [75] uses gradient coding to introduce redundancy in model training to mitigate the effect of stragglers. FedAT [17] uses a straggler-aware, weighted aggregation heuristic to solve the platform heterogeneity issue. This heuristic assigns higher weights to faster devices, which helps to compensate for the slower devices. FedCS [58] is a communication-efficient federated learning framework inspired by quantized compressed sensing. It compresses gradients at client devices using block sparsification, dimensional reduction, and quantization. Then, it reconstructs gradients at the parameter server and achieves performance almost identical to the uncompressed case, while significantly reducing communication overhead.

## 7 Towards FLIPS in real deployments

We implement FLIPS in smartspaces and assisted living for older adults, analyzing real-time data from multiple facilities and individuals. We aim to monitor residents, identify health and safety incidents, and detect changes in daily activities, falls, illnesses, and wandering events. By utilizing federated learning (FL) and FLIPS, we ensure robust and timely analysis of personal health records and device data while maintaining data privacy. A specific area of interest is using ECG data from portable devices to detect arrhythmias and heart irregularities. Machine learning models trained on ECG datasets have shown promise in early detection and treatment of cardiac issues. With FLIPS, we can train heart rhythm and fall models without compromising sensitive personal data, thus enhancing both privacy and model robustness. We have partnered with a senior-care community consisting of approximately 50 facilities, which serves as a trusted entity for storing confidential resident information. These communities act as aggregators and trusted parties, facilitating label distribution in a FLIPS deployment.
Additionally, we focus on the detection and localization of falls in assisted living facilities using data from cameras, acoustic sensors, and wearable tags with accelerometers, gyroscopes, and location-based sensors. Our efforts involve developing robust event detection models to handle device heterogeneity and variations in fall risk [8]. Privacy concerns are addressed by ensuring secure collection, communication, and storage of remotely obtained data to prevent unauthorized access and misuse. While FLIPS employs federated and decentralized learning, the clustering aspect is centralized and demonstrated using a centralized aggregator in Section 3. We chose this approach because it aligns with the most commonly used architecture in real-world federated learning systems, such as Google FL [13, 11], IBM FL [60, 38], FATE [57], and Facebook FL [37]. The centralized aggregator offers simplicity, statelessness, fault tolerance, and ease of storing FL job data and accounting information in fault-tolerant cloud object stores or key-value stores. In case of aggregator failure, data can be recovered, and aggregation can be resumed from the last round. Communication and aggregator failures are easily recovered by requesting retransmission of lost model updates from the parties, assuming each party securely stores its local model. ## 8 Conclusions and Future Work In this article, we present FLIPS: Federated Learning using Intelligent Participant Selection, which improves label and participant representation in FL to improve convergence, attain higher accuracy and reduce communication overheads. Our empirical evaluation indicates that FLIPS on an average speeds FL algorithms by 1.2\(\times\) - 2.9\(\times\) and improves terminal accuracy by 5-30 percentage points when compared to the existing techniques. We are currently exploring the following research directions: (1) personalization in Federated Learning (FL), (2) handling changing data distributions (3) decentralized clustering of label distributions and (4) FLIPS and adversarial robustness. Personalization in FL involves adapting to individual users' or devices' unique requirements and characteristics. Instead of a centralized dataset, we plan to train the model using data from similar parties or devices separately, allowing for personalized models that account for specific patterns and differences in each party's or device's data. Additionally, we plan to address the issue of changing data distributions, which is relevant in IoT settings with streaming data that can introduce shifts in data distributions. FLIPS, will capture these changes and train robust models while optimizing resource usage. We anticipate that as datasets and AI-based analytics techniques continue to expand, the ability to handle multiple requirements like privacy, performance, and latency will become crucial, and FLIPS represents a step in that direction. Decentralized FL architectures rely on secure multi-party computation (SMPC) with or without homomorphic encryption to ensure model update privacy. However, their higher computational requirements make them less popular. To implement FLIPS using SMPC, Algorithm 1, clustering must be computed using an SMPC protocol. Participant selection can be achieved through leader election, with the leader implementing the FLIPS selection protocol and other parties auditing the process. 
Finally, we are interested in investigating the interplay between diverse datasets in FL and techniques used for adversarial robustness - whether such techniques [61, 95] exclude valid underrepresented data classes. We also plan to investigate changes needed to make label distribution clustering work with adversarially robust FL.
2303.16918
Flat bands and multi-state memory devices from chiral domain wall superlattices in magnetic Weyl semimetals
We propose a novel analog memory device utilizing the gigantic magnetic Weyl semimetal (MWSM) domain wall (DW) magnetoresistance. We predict that the nucleation of domain walls between contacts will strongly modulate the conductance and allow for multiple memory states, which has been long sought-after for use in magnetic random access memories or memristive neuromorphic computing platforms. We motivate this conductance modulation by analyzing the electronic structure of the helically-magnetized MWSM Hamiltonian, and report tunable flat bands in the direction of transport in a helically-magnetized region of the sample for Bloch and Neel-type domain walls via the onset of a local axial Landau level spectrum within the bulk of the superlattice. We show that Bloch devices also provide means for the generation of chirality-polarized currents, which provides a path towards nanoelectronic utilization of chirality as a new degree of freedom in spintronics.
Vivian Rogers, Swati Chaudhary, Richard Nguyen, Jean Anne Incorvia
2023-03-29T18:00:00Z
http://arxiv.org/abs/2303.16918v3
Flat bands and multi-state memory devices from chiral domain wall superlattices in magnetic Weyl semimetals ###### Abstract In this work, we study the electronic structure and transport in magnetic Weyl semimetal (MWSM) chiral domain wall (DW) superlattices. We show that the device conductance is strongly modulated by the number of DWs between contacts, providing a means to encode multiple resistance states in a single racetrack device. Additionally, we show that periodic exchange fields can generate tunable flat bands in the direction of transport as well as a chirality filtering mechanism for Bloch DWs. This work elucidates both the physical behavior and nanoelectronic potential for DW-MWSM devices. ## I Introduction Topological materials such as topological insulators, Dirac semimetals, and Weyl semimetals have gained much interest for their litany of novel electron transport behaviors and potential utility in nanoelectronic devices [1, 2]. Weyl semimetals in particular must break either spatial inversion or time-reversal symmetry (TRS). In a magnetic Weyl semimetal (MWSM), a Dirac cone is split into two Weyl cones with opposite chiralities when TRS is broken. This opens up an additional chirality degree of freedom which can significantly influence the electronic transport [3, 4]. In intrinsically magnetic Weyl semimetals, the helicity of Weyl fermions can be controlled by changing the magnetization of the material, giving rise to large longitudinal magnetoresistances (MRs) via the helicity mismatch of carriers in differently-magnetized portions of a sample. Previous works [5, 6] have looked at this MR associated with transport across MWSM domain walls (DWs) and a magnetic tunnel junction (MTJ) constructed from MWSMs, both of which conclude that on/off ratios could be improved significantly (\(10^{5}\%\) vs \(10^{2}\%\)) over the MRs utilized in traditional CoFeB/MgO MTJs. This giant Weyl MR arising from spatially-varying magnetic textures is expected to be resilient to disorder and large when compared to the DW resistances attributed to anisotropic MR or spin-mistracking in a nontopological DW [7, 8]. Experimentally, an anomalous longitudinal MR of \(\approx 7\%\) was observed by nucleating DWs in a sample of Weyl ferromagnet Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\)[9]. Other works [10, 11, 12] conclude that the chirality-magnetization locking will cause carriers to localize at discontinuities in the magnetization profile, which along with the recently observed topological hall torque [13, 14], opens up novel and orders-of-magnitude more efficient means for electronic control of magnetic information in MWSMs. The carrier localization and chirality-magnetization locking in MWSMs motivates interest in transport and the electronic structure of the periodic exchange superlattices, with the starting intuition that transport should be strongly forbidden in the direction of alternating magnetic texture. Previous works [10, 11] have shown that Neel DWs will localize carriers in 1D coplanar to the DW, but so far no works have investigated the connection between periodic magnetic textures and electronic bandstructure in 3D. Experimentally, such a system could potentially be realized in a MWSM helimagnet with or via injection of chiral DWs into a magnetic racetrack [15] with mean-free-path larger than the superlattice length. In this work, we outline the working principles of a multi-state MWSM DW memory device and investigate the electronic structure of MWSMs in a helical spin texture. 
We show that the conductance is modulated with the number of DWs between contacts in a MWSM and show the emergence of flat bands in the periodically-magnetized MWSM superlattice. Not only is this of interest for the physics of topological magnetic superlattices, but this highly-tunable readout of magnetic texture could allow for a novel and efficient mechanism to convert magnetization to charge information--a necessity of spintronic circuits, with great utility in emerging neuromorphic computing platforms [16].

Figure 1: Diagram of DW lattice MWSM device. \(x_{0}\) represents the start position of a series of chiral DWs (red and blue domains depicted). In the bottom device, few MWSM DWs are present between nonmagnetic electrodes. In the top device, \(x_{0}\) is increased, injecting DWs into the active region, increasing resistance between the grey, nonmagnetic contacts.

## II Domain wall superlattices and plane-wave analysis

We consider two models to elucidate the electronic structure and transport in periodically-magnetized MWSM superlattices: one using the continuum plane-wave basis model and another using the standard tight-binding approximation for transport. For both models, we consider the cases of Neel and Bloch magnetic DW lattices with the exchange field defined as

\[\mathbf{\beta}_{N}(\mathbf{r})=J(\cos\theta_{r}\,\hat{y}+\sin\theta_{r}\,\hat{x}), \tag{1}\]
\[\mathbf{\beta}_{B}(\mathbf{r})=J(\cos\theta_{r}\,\hat{y}+\sin\theta_{r}\,\hat{z}) \tag{2}\]

where \(\theta_{r}\) is a spatially-dependent magnetization angle. For the infinite superlattice along \(\hat{x}\), \(\theta_{r}\) can be defined as \(\theta_{r}=2\pi x/a_{S}=\vec{b}_{x}\cdot\vec{R}\), where \(a_{S}=n_{x}\,a_{0}\) is the superlattice length. Later, we define \(\theta_{r}\) for a series of chiral DWs. For the continuum model, we consider the four-band model Hamiltonian of a TRS-broken MWSM [1]:

\[\hat{H}(k)=v_{f}\tau_{x}\otimes(\mathbf{k}\cdot\mathbf{\sigma})+m\tau_{z}\otimes\sigma_{0}+\mathbf{\beta}\cdot\mathbf{\sigma}, \tag{3}\]

defined in the spin (\(\sigma\)) and pseudospin (\(\tau\)) spaces. Imposing a 1D superlattice structure in \(\hat{x}\) and adding the Fourier-transformed exchange field \(\mathbf{\tilde{\beta}}\) [17] to the Hamiltonian off-diagonals results in

\[\hat{H}(k)=\sum_{G\in\mathbb{Z}\mathbb{Z}_{e}}c^{+}_{G}c_{G}\left(v_{f}\tau_{x}\otimes(\mathbf{k}-\mathbf{G})\cdot\mathbf{\sigma}+m\tau_{z}\otimes\sigma_{0}\right)+\sum_{G^{\prime},G\in\mathbb{Z}\mathbb{Z}_{e}}c^{+}_{G^{\prime}}c_{G}\,\tau_{0}\otimes\mathbf{\tilde{\beta}}_{(\mathbf{G^{\prime}}-\mathbf{G})}\cdot\mathbf{\sigma}. \tag{4}\]

We set the Fermi velocity \(v_{f}=1.5\times 10^{6}\frac{m}{s}\) [18] and focus on \(m=0\) in this paper. An exchange splitting magnitude of \(J=0.25\) eV is taken to split the Weyl nodes with a small \(\Delta k=0.04\frac{2\pi}{a_{0}}\) with \(a_{0}=1\) nm, and for comparison with previous work [5]. For a constant magnetization, the exchange field \(\mathbf{\beta}=J\hat{M}\) will split the \(\ket{L}\) or \(\ket{R}\) (i.e. \(\langle\gamma^{5}\rangle=-1,+1\)) Weyl points in momentum space as \(\mathbf{k}_{\pm}=\mp\mathbf{k}_{0}=\mp\Delta k\,\hat{M}=\mp\frac{1}{\hbar v_{f}}\sqrt{J^{2}-m^{2}}\,\hat{M}\).
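As a quick numerical sanity check of this splitting, the following short Python sketch (an illustration only, not code from this work; it assumes NumPy, works in eV and nm with \(\hbar\) written explicitly, and takes the exchange term to act purely in spin space, i.e. \(\tau_{0}\otimes(\mathbf{\beta}\cdot\mathbf{\sigma})\)) diagonalizes the 4\(\times\)4 Hamiltonian of Eq. (3) for a uniform magnetization along \(\hat{y}\) and locates the gap closings:

```python
import numpy as np

# Pauli matrices for the spin (sigma) and pseudospin (tau) spaces
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

hbar_vf = 6.582e-16 * 1.5e6 * 1e9    # hbar*v_f in eV*nm, for v_f = 1.5e6 m/s
J, m = 0.25, 0.0                     # exchange splitting and mass term (eV)

def H(k, Mhat=(0.0, 1.0, 0.0)):
    """Continuum MWSM Hamiltonian of Eq. (3) for a uniform magnetization Mhat; k in nm^-1."""
    k_dot_s = sum(k[i] * sigma[i] for i in range(3))
    b_dot_s = sum(J * Mhat[i] * sigma[i] for i in range(3))
    return (hbar_vf * np.kron(sx, k_dot_s)   # v_f tau_x (k . sigma); sx here plays the role of tau_x
            + m * np.kron(sz, s0)            # m tau_z
            + np.kron(s0, b_dot_s))          # exchange field, spin space only

# Scan k along the magnetization direction and locate where the gap closes (the Weyl node).
ks = np.linspace(0.0, 0.5, 5001)
gaps = np.array([np.min(np.abs(np.linalg.eigvalsh(H((0.0, k, 0.0))))) for k in ks])

print(ks[np.argmin(gaps)])   # ~0.253 nm^-1 from the numerics
print(J / hbar_vf)           # analytic Delta k = J / (hbar v_f) ~ 0.253 nm^-1
print(0.04 * 2 * np.pi)      # the quoted 0.04 * (2*pi/a_0) with a_0 = 1 nm ~ 0.251 nm^-1
```

The numerically located node, the analytic value \(J/\hbar v_{f}\), and the quoted \(0.04\,(2\pi/a_{0})\) all agree to within a few percent.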
Then a low-energy model for the Weyl cone dispersions can be taken as [10; 11; 1]

\[\hat{H}_{k,\pm}=\pm\hbar v_{f}[\mathbf{k}+\mathbf{k}_{\pm}]\cdot\mathbf{\sigma}=\pm\hbar v_{f}[\mathbf{k}\mp\frac{e}{\hbar}\mathbf{A}^{5}]\cdot\mathbf{\sigma}, \tag{5}\]

where \(\mathbf{A}^{5}\) leads to a chirality-dependent magnetic field determined by the exchange field texture:

\[\mathbf{B}^{5}=\nabla\times\mathbf{A}^{5}=\frac{J}{ev_{f}}\nabla\times\mathbf{M}(\mathbf{r}). \tag{6}\]

This generates \(\mathbf{B}^{5}_{N}(\mathbf{r})=\frac{-2\pi J}{a_{S}ev_{f}}(\sin\theta_{r}\hat{z})\) and \(\mathbf{B}^{5}_{B}(\mathbf{r})=\frac{-2\pi J}{a_{S}ev_{f}}(\cos\theta_{r}\hat{y}+\sin\theta_{r}\hat{z})\) for the Neel and Bloch wall cases, the inclusion of which significantly modifies the band structure. In Fig. 2, we show the chirality-projected bands for the uniformly magnetized, Neel, and Bloch helical-exchange superlattices, where the chirality operator \(\gamma^{5}=\tau_{x}\otimes\sigma_{0}\) and \(\langle\gamma^{5}\rangle=\frac{-1}{\pi}Tr(G^{R}_{k}(E)\gamma^{5})\) with the time-retarded Green's function \(G^{R}_{k}=(E+i\eta-H_{k})^{-1}\) [19]. While it is straightforward to project the Hamiltonian eigenstates as \(\langle u_{n}(k)|\gamma^{5}|u_{n}(k)\rangle\), it would be misleading in the Neel wall case as the \(\ket{L}\) and \(\ket{R}\) states are chirality-polarized but degenerate in \(k_{z}\), thus obscuring one or the other. In the Bloch wall case, the lowest-energy \(\ket{R}\) and \(\ket{L}\)-chiral bands are lowered and raised in energy with \(\Delta E_{\pm}=\mp(\sqrt{J^{2}+\hbar^{2}v_{f}^{2}\pi^{2}/a_{S}^{2}}-\hbar v_{f}\pi/a_{S})\approx\mp J\) at \(\Gamma\) (see Eq. 9) for a right-handed magnetization spatial helicity (as opposed to the spin helicity \(\hat{\mathbf{p}}\cdot\mathbf{\sigma}\)). It can also be seen that the axial magnetic fields, \(\mathbf{B}^{5}_{N}\) and \(\mathbf{B}^{5}_{B}\), will localize carriers in \(\hat{x}\). Notably, the Neel case will also localize carriers in \(\hat{y}\), while the Bloch wall case remains dispersive in \(\hat{y}\) and \(\hat{z}\), as mentioned in Ref. [10].

Figure 2: Chirality \(\langle\gamma^{5}\rangle\) projected plane-wave band structures for superlattice period \(a_{s}=100\,a_{0},J=0.25\) eV with differing exchange field textures. \(\mathbf{k}_{0}\) denotes the position of the \(\ket{L}\) Weyl point in the uniformly magnetized case. (a) Uniformly magnetized MWSM shows the \(\ket{L}\)-chiral Weyl node with band folding. (b) Néel DW superlattices show band flattening and axial Landau levels in \(\hat{x}\) and \(\hat{y}\), while (c) Bloch DW superlattices only show band flattening in \(\hat{x}\).

Focusing on the Neel wall Hamiltonian \(H_{N}\), in Fig. 3 we sample the spectrum of \(H_{N}(\Gamma)\) and, as also discussed in Ref. [11], generate axial Landau levels (LLs) corresponding to

\[\max(|\mathbf{B}^{5}|)=\frac{2\pi J}{a_{S}ev_{f}}; \tag{7}\]
\[E_{\nu}\approx\text{sgn}(\nu)\sqrt{2|\nu|\frac{J\hbar v_{f}}{a_{S}}}. \tag{8}\]

The good fitting of the low-energy spectrum to \(\max(\mathbf{B}^{5})=10.47\) T implies that the wavefunctions are strongly localized to regions of highest axial magnetic field, i.e. around \(x=a_{s}(2\mathbb{Z}+1)/4\), or where the Neel magnetization is collinear with the direction of the superlattice.
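As a numerical consistency check (our own arithmetic, not taken from this work, using the parameters quoted above and taking \(a_{S}=100\,a_{0}=100\) nm as in the Fig. 2 caption), Eq. (7) gives

\[\max(|\mathbf{B}^{5}|)=\frac{2\pi J}{a_{S}ev_{f}}=\frac{2\pi\times 0.25\ \mathrm{eV}}{e\times(100\ \mathrm{nm})\times(1.5\times 10^{6}\ \mathrm{m/s})}\approx 10.5\ \mathrm{T},\]

in agreement with the 10.47 T fit, and Eq. (8) then places the first axial Landau level at

\[E_{1}\approx\sqrt{\frac{2J\hbar v_{f}}{a_{S}}}=\sqrt{\frac{2\times(0.25\ \mathrm{eV})\times(0.99\ \mathrm{eV\,nm})}{100\ \mathrm{nm}}}\approx 70\ \mathrm{meV}.\]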
While an analogous axial LL physics should manifest for the Bloch wall case, the continuously changing direction of \(\mathbf{B}^{5}\) makes it somewhat obscure: \(\mathbf{B}^{5}\) rotates with constant magnitude in the \(y-z\) plane as a function of \(x\). As a result, the zeroth Landau level disperses in different directions at different positions which can be seen in Fig. 2 (c) for the \(k_{y}\) and \(k_{z}\) directions. Magnetization profile \(\mathbf{\beta}_{B}(\mathbf{r})\) (Eq. 2), which decides the separation of two Weyl nodes, leads to a nodal-ring structure in the spectrum in the \(k_{y}-k_{z}\) plane centered at \(\Gamma\). Interestingly, the dispersion for \(k_{||}=\Gamma\) can be obtained analytically [20] and is given by: \[E_{n}=\mp\sqrt{J^{2}+\hbar^{2}v_{f}^{2}\left(k_{x}+\frac{\pi}{a_{S}}(2n+1) \right)^{2}}+\hbar v_{f}\frac{\eta\pi}{a_{S}}. \tag{9}\] In both cases, we notice that the electronic bands are significantly flattened in the direction of superlattice. Knowing that periodic magnetic textures in MWSMs can modify dispersions in the direction of transport and provide dynamic control over the electronic structure, along with the intuition that DWs will reflect carriers via the helicity-mismatch mechanism, we move to a device picture to assess one application of MWSM DW lattices. ## III Quantum transport and tight-binding analysis For transport and spatial resolution of Fermi surfaces, we employ the canonical four-band tight-binding model of a MWSM [5; 21; 22] to approximate Eq. 4, again choosing the \(\tau_{x}\) and \(\tau_{z}\) matrices for the momentum and mass [23] terms, respectively: \[\hat{H}_{\text{site}}\mathrm{i}=\sum_{j\in\{x,y,z\}}[\frac{t}{2}( c_{\mathrm{i}}^{+}c_{\mathrm{i}}\hat{\tau}_{0}-c_{\mathrm{i+}j}^{+}c_{\mathrm{i}} \hat{\tau}_{z})\otimes\hat{\sigma}_{0}-\\ \frac{it}{2}(c_{\mathrm{i+}j}^{+}c_{\mathrm{i}}\hat{\tau}_{x} \otimes\hat{\sigma}_{j})+\mathrm{h.c}]+\mathbf{\beta}(\tau_{\mathrm{i}})\cdot\mathbf{ \sigma}, \tag{10}\] with hopping parameter \(t=1\) eV, lattice parameter \(a_{0}=1\) nm for simplicity, \(n_{x}=\)100 sites in \(\hat{x}\), and the Bloch phase prefactors applied to hoppings in \(\hat{y}\) and \(\hat{z}\) to construct a supercell \(H(k_{||})\). Here, \(\theta_{r}\) of a single \(180^{\circ}\) DW is defined in [24] and convolved with a semi-infinite Dirac comb to generate the periodic magnetization angle: \[\theta_{r}=(2\text{tan}^{-1}(\pi e^{-\frac{\pi i+d}{2}}))\star\sum_{\mathrm{ i=0}}^{\infty}\delta(\mathrm{i}n_{x}a_{0}-x_{0}-x), \tag{11}\] with \(\star\) being the convolution operator, \(x_{0}\) referring to the start position of the chiral DW lattice, the DW width \(d=8\) nm, and the superlattice periodicity \(a_{s}=30\) nm. While these parameters are not necessarily physical with relevance to the modelling of real magnetic racetrack systems, they are a minimal model that is computationally tractable and allows us to demonstrate underlying physics. In a real system, thicker domain walls or a longer superlattice would decrease \(\text{max}(\mathbf{B}^{5})\)[11] thus decreasing the axial Landau Level spacing and zone-folded subband spacing. In the tight-binding model, we consider trivial metallic electrodes [25] and semi-infinite MWSM electrodes to elucidate the underlying physics [26]. 
For the case with MWSM electrodes, the magnetizations \(\mathbf{\beta}(x=-a_{0})\) and \(\mathbf{\beta}(x=a_{s}+a_{0})\) are copied for the left and right contact and extend to infinity for the generation of the Sancho-Rubio [27] contact self-energies \(\Sigma_{L,R}\), retaining the continuity of the magnetization field texture.

Figure 3: (a) 3D surfaces of the chirality-projected but degenerate bands are shown for a slice of \(H_{N}(k)\), with \(k_{x}\) fixed to 0. (b) The eigenvalue spacing is sampled at \(\Gamma\), plotted with the observed correspondence to the analytical Landau level spacing for bands under \(E=J\).

Figure 4: Chirality-resolved mixed-space Fermi surfaces with \(\mu=0.1\) eV and \(\eta=10^{-2.5}\) eV are shown with corresponding exchange field textures for \(x_{0}\) fixed to 63 nm. (a) and (b) show the Neel DW lattice, while (c) and (d) show the helical Bloch DW lattice.

Of particular interest is the braiding of the Fermi surfaces in connection with the band structures of Fig. 2, which are useful to understand transport. Fig. 4 shows the mixed-space Fermi surface of the periodically magnetized racetrack devices for energies close to the Weyl node. Neel devices show a \(k_{y}\)-symmetric braiding of the L and R chiral states, corresponding to the degenerate states in Fig. 4 (b). This can be understood from the profile of \(\mathbf{\beta}_{N}\) for the Neel case, which decides the position of the two Weyl nodes in the \(k_{y}-k_{z}\) plane. Additionally, this magnetization pattern generates \(\mathbf{B}_{N}^{5}\) pointing along \(z\), which flattens the bands in \(k_{y}\). It should be noted here that these Fermi surfaces can also be interpreted as the zeroth LL [11]. On the other hand, for the Bloch DW case, \(\mathbf{B}_{B}^{5}\) rotates in \(y-z\), leading to the twisted Fermi surfaces, which can also be understood as the Fermi contour of the zeroth LL shown in Fig. 2 (c), where for an infinitesimal non-zero energy, \(|L\rangle\) and \(|R\rangle\) states are at a different radius in the \(k_{y}-k_{z}\) plane. As a result, the \(|L\rangle\) chiral states wrap around the expected position of the twisted Weyl cones, while the \(|R\rangle\) chiral states wrap closely to \(k_{||}=\Gamma\). This twisted Fermi surface motivates analysis of the chiral anomaly and Weyl orbits with periodic real and axial magnetic fields [28]. For device modeling, we neglect the orbital effects of the magnetic field and consider transport in the Landauer limit of the NEGF quantum transport formalism [29]. With a minimal carrier lifetime broadening of \(\eta=10^{-4}\) eV, in Fig. 5 (a) we show that device conductance can be modulated by orders of magnitude via the injection of DWs into the active region of the device, and that this effect persists for both magnetization patterns, with both MWSM and simple metal electrodes. In Fig. 5 (b), we sweep the broadening parameter \(\eta\), corresponding to a phenomenological inelastic momentum-preserving scattering self-energy [30], and show that it 1) smooths out conductivity vs. \(x_{0}\) and 2) forms stairsteps in conductance with the number of DWs in the active region of the device, as one might expect with an increasing number of helicity-mismatch carrier reflections. This form of nonvolatile, texture-dependent conductivity is desirable for use in emerging analog memory devices [16].
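For reference, the Landauer-limit NEGF transmission used in this kind of device calculation follows the standard Caroli form; a minimal sketch (an illustration only, assuming NumPy, with the device Hamiltonian block and the Sancho-Rubio contact self-energies supplied from elsewhere) is:

```python
import numpy as np

def transmission(E, H_dev, sigma_L, sigma_R, eta=1e-4):
    """Caroli/NEGF transmission T(E) = Tr[Gamma_L G^R Gamma_R (G^R)^dagger] at one
    transverse momentum k_par, with a phenomenological broadening eta (eV).
    H_dev is the device Hamiltonian block; sigma_L/R are the contact self-energies."""
    n = H_dev.shape[0]
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    G_R = np.linalg.inv((E + 1j * eta) * np.eye(n) - H_dev - sigma_L - sigma_R)
    return float(np.real(np.trace(gamma_L @ G_R @ gamma_R @ G_R.conj().T)))

# Landauer-limit conductance: average the transmission over the surface BZ,
#   G = (e^2 / h) * (1 / N_k) * sum_{k_par} T(E_F, k_par).
```

The conductance then follows by averaging \(T(E_{F},k_{||})\) over the sampled surface Brillouin zone, which is why a very fine \(k\)-grid is needed when the transmission is concentrated on thin arcs, as discussed next.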
Curiously, even a high-resolution 401 x 401 k-grid over the zoomed-in portion of the surface Brillouin zone (SBZ) is unable to converge the value of the Landauer conductance for small broadening parameters (\(\eta=10^{-3},10^{-4}\) eV), leading to unphysical spikes in conduction. To explain this, we consider the \(k_{||}\)-resolved transmission in Fig. 5 (c) and Fig. 5 (d) to resolve conductance behavior from the distorted Weyl cones in the periodic magnetization region. We see that the distorted Weyl cones form infinitesimally-thick curtains of conductance in the SBZ which are challenging to capture numerically: In Fig. 5 (c), the Neel DW lattice conduction is dominated by the distorted \(|L\rangle\) and \(|R\rangle\) Weyl cones in the periodic region of the device, especially where they overlap with the contact electrodes' bulk Weyl cones. In contrast, the Bloch DW device in Fig. 5 (d) has minimal conduction through the distorted Weyl cones which wrap around the bulk Weyl cones in either contact. Thus, the majority of the conductance tunnels through the periodic region of the device from the bulk states in both contacts, leading to oscillations in Fig. 5 (a). Interestingly, due to the broadening parameter providing finite-lifetime states from the contacts, conductance from the contact Weyl cone overlap decreases and the inner \(|R\rangle\)-chiral transmission ring will begin to plateau in the total conductance, giving rise to a texture-dependent chirality filtering mechanism to generate or reflect chiral currents with potential application in emerging devices [31].

Figure 5: Device conductance is modulated by injecting DWs between contacts via sweeping \(x_{0}\). (a) Conductance modulation by orders of magnitude is shown for all combinations of MWSM and simple metallic electrodes with Bloch and Néel DW lattices. (b) A phenomenological broadening term \(\eta\) is swept, showing discrete stairsteps in conductance, thus multi-weight behavior, as \(\eta\) increases (i.e. carrier lifetime decreases). (c) and (d) show k-resolved transmission maps for the Néel and Bloch DW cases, respectively, sweeping \(x_{0}\), over the center of the surface Brillouin zone. Hotspots in transmission \(T(k_{||})\) are visible where the bulk states from the left and right electrodes overlap, connected by twisted Fermi arcs of character determined by the magnetic texture. The \(x_{0}\) position is labeled in white for each sub-plot.

## IV Discussion

We have studied the electronic structure and transport in a MWSM superlattice device, have shown magnetization-tunable flat bands in the direction of transport, and have demonstrated that multiple conductance states could potentially be encoded in such a device. We have also shown that the multi-state behavior persists through all electrode and magnetic superlattice configurations we have considered, quite unlike the spin transport in traditional spin filtering systems. With regards to the superlattice picture, more analysis needs to be done to fully understand the connection between exchange textures and topological or electronic structure, the exploration of which is only nascent in systems such as semiconductor nanowires [32] or magnetic superlattice graphene [33, 34] under skyrmion exchange superlattices which show topological flat bands.
It should be noted that this effect only relies on the mean-field exchange splitting of a periodic ferromagnet, acting on spin space with contributions on the order of 100 to 1000 meV in bulk ferromagnets, as opposed to a real magnetic field or proximity-induced exchange, the effects of which would show with much weaker Zeeman perturbations or contribution to an effective gauge field. Thus, we expect the electronic structure in periodic MWSM lattices to be more robust closer to room temperatures, supposing a MWSM with Curie temperature above 300K as in Mn\({}_{3}\)Sn[35], Co\({}_{2}\)MnGa[36], or Fe\({}_{3}\)Sn\({}_{2}\)[37]. Nevertheless, other perspectives on scattering at MWSM DWs imply that skew scattering [38] or heavily-tilted Weyl cones [39] could significantly decrease the MR. This highly-tunable electronic structure and conductivity via control of magnetization in MWSMs would be of great use to nanoelectronics, which we will discuss briefly. While it is hard to make definitive claims from our model without incoherent momentum relaxation - we see longitudinal Weyl MRs over \(10^{14}\%\) - we estimate that, using the DW resistances observed in [9] an effective MR = \(\frac{R_{\text{sur}}-R_{\text{sur}}}{R_{\text{ext}}}=\frac{n_{DW}R_{DW}}{R_{0 }}=(\frac{L}{d}+1)\frac{R_{DW}}{R_{0}}=32.1\%\) could be achievable for the observed \(2R_{DW}\approx 4\,\Omega\) and \(R_{0}\approx 53\,\Omega\), for a domain width \(d=80\) nm and device length \(L=600\) nm. This is 14x the 3.93% read margin observed in CoFeB transverse-read DW memory [8]. In itself, this becomes competitive with other GMR devices (with regards to on/off ratio) or multi-state TMR devices [16], but without the need of a magnetic tunnel junction to measure the magnetization-induced resistance change. Supposing a deliberate engineering of the chemical potential or DW width could bring about a modest MR\({}_{DW}=25\%\) with \(L=1\mu\)m, and \(d=40\) nm, on/off ratios upwards of 650% become possible. Another potential advantage of this approach, compared to traditional DW-MTJ devices, is that one could avoid the need for precise MgO deposition to avoid pinholes, thus making high MRs accessible to industry and academic labs without expensive, lengthy fabrication processes. Here, one could construct memory devices using simple metallic thin films, so long as the chirality-magnetization locking in the MWSM is preserved and surface fermi arcs do not dominate the conductance. Predicted orders-of-magnitude-improved readout performance over existing MTJs [5] could further increase device viability, supposing one could amplify the Weyl MR or suppress scattering in a real system. ## V Acknowledgements The authors acknowledge funding from the UT CDCM MRSEC supported under NSF Award Number DMR-1720595, funding from Sandia National Laboratories, and computational resources from the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. The authors also thank Gregory A. Fiete for helpful discussions, as well as Kerem Camsari and Shuvro Chowdhury for their discussions on the theory and implementation of the NEGF transport formalism.
2306.14709
Self-supervised novel 2D view synthesis of large-scale scenes with efficient multi-scale voxel carving
The task of generating novel views of real scenes is increasingly important nowadays, when AI models are becoming able to create realistic new worlds. In many practical applications, it is important for novel view synthesis methods to stay grounded in the physical world as much as possible, while also being able to imagine it from previously unseen views. While most current methods are developed and tested in virtual environments with small scenes and no errors in pose and depth information, we push the boundaries to the real-world domain of large scales in the new context of UAVs. Our algorithmic contributions are twofold. First, we manage to stay anchored in the real 3D world, by introducing an efficient multi-scale voxel carving method, which is able to accommodate significant noise in pose and depth, as well as illumination variations, while being able to reconstruct the view of the world from drastically different poses at test time. Second, our final high-resolution output is efficiently self-trained on data automatically generated by the voxel carving module, which gives it the flexibility to adapt efficiently to any scene. We demonstrate the effectiveness of our method on highly complex and large-scale scenes in real environments while outperforming the current state-of-the-art. Our code is publicly available: https://github.com/onorabil/MSVC.
Alexandra Budisteanu, Dragos Costea, Alina Marcu, Marius Leordeanu
2023-06-26T13:57:05Z
http://arxiv.org/abs/2306.14709v1
Self-supervised novel 2D view synthesis of large-scale scenes with efficient multi-scale voxel carving ###### Abstract The task of generating novel views of real scenes is increasingly important nowadays when AI models become able to create realistic new worlds. In many practical applications, it is important for novel view synthesis methods to stay grounded in the physical world as much as possible, while also being able to imagine it from previously unseen views. While most current methods are developed and tested in virtual environments with small scenes and no errors in pose and depth information, we push the boundaries to the real-world domain of large scales in the new context of UAVs. Our algorithmic contributions are two folds. First, we manage to stay anchored in the real 3D world, by introducing an efficient multi-scale voxel carving method, which is able to accommodate significant noises in pose, depth, and illumination variations, while being able to reconstruct the view of the world from drastically different poses at test time. Second, our final high-resolution output is efficiently self-trained on data automatically generated by the voxel carving module, which gives it the flexibility to adapt efficiently to any scene. We demonstrated the effectiveness of our method on highly complex and large-scale scenes in real environments while outperforming the current state-of-the-art. Our code is publicly available: [https://github.com/onorabil/MSVC](https://github.com/onorabil/MSVC). ## 1 Introduction Current AI models are becoming increasingly capable to generate realistic fake worlds, which may be harmful in applications where we need to stay grounded in the real physical world. At the same time, in many tasks, it is also essential to be able to imagine how the world really looks from different completely novel views, with minimal use of resources and computation cost. For example, in many cases (e.g. creating artistic movies and other kinds of visual content) the ability to imagine virtual flights over a real scene could be of great practical value, but without modifying the true structure of the real scene. Prior work for novel view synthesis focused only on small synthetic scenes, with zero noise in input data (for pose or depth) and only with very small variations in novel poses versus the ones seen in training. This is a very limited scenario, with very little use in the real world and here, we come to address this limitation. Our goal is to address all these limitations, along several dimensions, which also define our main contributions. We address the problem of grounding in the 3D structure of the real world, with a novel and very efficient multi-scale voxel carving method, which considers voxels at different scales (sizes) and decides upon their 3D existence (part of the real surface in the world) and color using a robust confidence voting-based measure. Our multi-scale voxelization is not used to explicitly build the 3D structure of the world, but only to minimize 2D view reconstruction error and coverage from novel unseen camera poses. This is different from current 2D synthesis methods, which are not based on a real 3D structure, and also different from explicit 3D reconstruction methods, which are accurate but have very weak 2D reconstruction coverage, being limited only to poses seen during training. 
We address the second important aspect of generating accurate and high quality (resolution) 2D views of a real scene, by introducing a second neural net module, which learns automatically supervised by the first module, on the training views of the same scene, such that it can automatically adapt to any scene, for maximum efficiency. This module essentially consists of a small U-Net, which is rapidly fine-tuned during the multi-scale voxel carving stage and learns to reconstruct the original RGB from the output of the Multi-Scale Voxel-Carving (MSVC) based 2D view reconstruction, at minimal additional cost. Thus, the MSVC teacher module grounds the reconstruction in the real world, being robust to noises coming from depth and pose due to its multi-scale voxelization structure, while the second model learns to refine its output and improve coverage during the automatic self-supervised stage. Our main contributions are summarized below: 1. We introduce a novel self-supervised system of novel 2D view reconstruction, in which an analytical module, based on multi-scale voxelization, grounds the output into the real 3D world and supervises automatically a second neural net module for accurate high-resolution reconstruction of novel views of the scene. 2. We test and compare our work with state-of-the-art methods, on very difficult cases of real-world large-scale scenes, captured by UAVs, having a significant amount of noise in pose and depth, unlike all other works published so far which are limited to small-scale scenes, synthetic, having no measurement errors and only small variations in view between the train and test cases. Figure 1: We present an overview of our novel self-supervised system of novel 2D views reconstruction. We leverage video and telemetry data from UAV flights and state-of-the-art optical flow-based approaches to obtain accurate metric depth estimation for a given real-world scene. Using RGB, Depth and Pose see apply our novel multi-scale voxelization procedure for 3D reconstruction, which becomes the supervisory signal for a self-supervised neural network in order to obtain accurate high-resolution reconstruction of novel 2D views of the scene. Related work **Scene reconstruction** The traditional, feature-based 3D reconstruction structure-from-motion pipelines such as COLMAP [14] need significant processing time, even for a small number of high-resolution images [6]. As the number of images increases, the resources required increase at an exponential rate[18], making it unfeasible for large datasets. The implicit representation proposed by Neural Radiance Fields (NeRF) [11] promised an alternative to the compute-intensive structure-from-motion pipelines [4]. The advent of fast NeRF training frameworks such as Instant Neural Graphics Primitives [12] and NerfAcc [8] has reduced the training time from a couple of days for a single scene to a couple of hours or even several minutes. Transforming the input into rays is a shared starting point with our method, but ours has an explicit representation instead. Direct Voxel Grid Optimization [16] proposes a representation that replaces the MLP usually used in NeRFs with a voxel grid, resulting in speed and performance gains. This approach shares both the input and output representation with our method. Nevertheless, most of the time, the reconstructed depth is not dense - marching cubes and a density threshold are needed for actual depth reconstruction. The depth quality and resolved detail are not adapted for large datasets. 
Generally, the whole dataset is loaded into memory and empirically, the PSNR score from a higher resolution is similar (see our experiments in Section 4.1). Furthermore, another downside of these methods is the requirement for accurate pose - structure-from-motion must be run prior to the reconstruction in order to estimate camera poses. This is also a very expensive operation, mostly due to the bundle adjustment optimization that requires significant resources, reaching hundreds of millions of matches. A key contribution of this work is that our method achieves competitive results using only video and telemetry data (from consumer GPS and onboard sensors - no advanced positioning hardware needed). A bundle adjustment NeRF was also developed to address this issue [2]. **Large scale reconstruction** Given the errors that occur over a large set of images, some authors have proposed various workarounds, usually involving splitting the scene into a set of smaller scenes or grid [17; 19; 20], splitting the path into image sets [10], or splitting the scene into smaller NeRFs [21]. We argue that these methods are not globally consistent and aim to hide the pose and reconstruction errors. **Voxel carving** On the other hand, some of the traditional Novel View Synthesis methods relied on a prior scene reconstruction, such as [15]. The authors propose a reconstruction scheme based on a voxelization of the scene and multi-view consistency of the voxels. In their work, the scene is defined as a set of voxels, arranged in such a way that it can be represented such as consecutive layers of voxels. These layers are considered based on their distance from themselves to the camera, meaning that the missing depth information is circumvented by the arrangement of the voxels. Moreover, the method assumes multiple views from a set of cameras. However, the cameras are pre-set in specific positions, resembling the surface of a sphere, with the reconstructed scene in the center of the said sphere. Using the preset positions of the cameras and the layering of the voxels, the proposed algorithm projects voxels into the image space and compares the colors of the voxels with the colors in the input image. If the voxel's color is not consistent across multiple images, the voxel is then carved. A similar approach with fixed cameras is descibed in [1]. Even though modern approaches focus heavily on implicit representations using neural networks, our method aims to improve the classical representations of scene reconstruction using voxelization and novel view synthesis. We introduce the metrics of _depth consistency_ and _color consistency_, following mathematical 3D representations of the scene, and also define various optimizations in order to solve some of the issues state-of-the-art methods are facing, improving both the resolution and speed for large-scale datasets. ## 3 Self-supervised novel 2D view reconstruction Inspired by the Voxel Carving [7] method, we develop our algorithm starting from a scene represented by a volume of voxels. We then iteratively carve voxels based on the drone data until all the remaining voxels are consistent across all the viewpoints. We furthermore define two key concepts on which we construct our approach: _depth consistency_ and _color consistency_. These two concepts represent properties of the voxels which comprise the final 3D scene. The data used throughout our experiments consists of a set of _poses_ of the drone, a set of _RGB frames_, and their respective _depth maps_. 
We refer to the \(pose\) as the position of the drone, expressed in 3D World Coordinates (x, y, z), and its orientation, described as Euler angles. Therefore, we represent our dataset as a set of \(T\) quadruples \((P_{xyz},E_{xyz},RGB,Depth)\).

### Backprojection

We define a \(voxel\) as the smallest unit in the 3D scene and we represent it as a cube. One cube is defined by a 3D point in the world and the length of its side. We arrange multiple voxels in a grid, such that voxels are directly adjacent to each other, with no gaps in between them. We thus define \(V\) as the set of all initial voxels of the scene \(V=\{v_{xyz}=[v_{x},v_{y},v_{z}]\}\) and we refer to it as _Voxel Grid (VG)_. The size of the voxel is considered constant and it is shared across all voxels. The voxel grid is constructed in a way such that by placing a camera in a pose from the dataset, the captured image contains parts of the voxel grid, for every pose in the dataset. The proposed algorithm aims to carve the voxel grid in such a way that after the carving is complete, the remaining voxels satisfy both the depth consistency and color consistency criteria (detailed in Sections 3.2 and 3.3). To this end, we employ an iterative process in which we pass through all the quadruples in the dataset and check, for every voxel in the voxel grid, whether it satisfies the aforementioned properties using a _backprojection_ from the 3D world space to the 2D image space of the drone's camera. The backprojection of a voxel from the world space to the image space is done using the position of the voxel in the world space \(v_{xyz}\), the position of the drone \(P_{xyz}\), the orientation of the drone \(E_{xyz}\) and the intrinsic camera parameters of the drone, which we refer to as \(K\). The image space is a two-dimensional space, meaning that the backprojection of a voxel can be represented as a two-dimensional vector, which we call \(v_{proj}=[vp_{x},vp_{y}]\). Computing the \(v_{proj}\) of a voxel into the image space is done by a double projection. First, we project the voxel from the world space to the camera space, then we project from the camera space to the image space. These two projections can be done in a single step using Equation 1, \[v_{proj}=K*[(P_{xyz}-v_{xyz})*R]=[v_{x^{\prime}},v_{y^{\prime}},v_{z^{\prime}}]^{\prime} \tag{1}\] where \(K\) is the intrinsic matrix, \(P_{xyz}\) is the position of the drone in world coordinates, \(v_{xyz}\) is the position of the voxel in world coordinates, and \(R\) is the rotation matrix obtained from the orientation of the drone \(E_{xyz}\). However, the resulting vector requires a further scaling operation to normalize the homogeneous coordinates. The scaling factor \(v_{z^{\prime}}\) also acts as the numerical distance of the voxel from the camera. Thus, we compute \(v_{proj}=[\frac{v_{x^{\prime}}}{v_{z^{\prime}}},\frac{v_{y^{\prime}}}{v_{z^{\prime}}}]^{\prime}\).

### Depth consistency

The depth consistency property is determined using a voting mechanism employed throughout the reconstruction process. Let \(N=card(V)\) be the number of voxels in the voxel grid. For each voxel \(v\in V\) we define a \(seen\) measure, which counts how many times a voxel has been \(seen\) from all the viewpoints of the dataset. A voxel is considered to be \(seen\), from a camera at position \(P_{i}\) having the orientation \(E_{i}\), if by projecting the voxel from the world space into the image space, its position is within the bounds of the image \(RGB_{i}\).
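For concreteness, the projection of Equation 1 and the in-bounds test used by the \(seen\) measure can be sketched in NumPy as follows; the function names, the Euler-angle convention, and the rounding to integer pixel coordinates are our own illustrative assumptions, not details taken from the released MSVC code.

```python
import numpy as np

def euler_to_rotation(ex, ey, ez):
    """Rotation matrix from the drone's Euler angles E_xyz (Z-Y-X convention assumed)."""
    cx, sx = np.cos(ex), np.sin(ex)
    cy, sy = np.cos(ey), np.sin(ey)
    cz, sz = np.cos(ez), np.sin(ez)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def backproject(v_xyz, P_xyz, R, K):
    """Equation 1: project a voxel centre from world space into image space.
    Returns the pixel coordinates v_proj and the distance v_z' to the camera."""
    v = K @ ((P_xyz - v_xyz) @ R)          # world -> camera -> image (homogeneous)
    return np.array([v[0] / v[2], v[1] / v[2]]), v[2]

def s_in(v_proj, width, height):
    """1 if the projection falls inside the image bounds, 0 otherwise."""
    px, py = int(round(v_proj[0])), int(round(v_proj[1]))
    return int(0 <= px < width and 0 <= py < height)
```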
Moreover, as the depth map \(D_{i}\) is also available, we strengthen this condition by forcing the voxel to also be approximately on par with the depth information, as we can compute its real distance from the camera and the pixel coordinates in which the voxel projects. For the depth consistency part, we extract information from the depth map based on the pixel coordinates of a voxel. In Section 3.1 we described how to compute the pixel coordinates of the projection of a voxel in the image space. These pixel coordinates are used to extract information from both the RGB image and its respective depth map. Assuming the voxel projects inside the boundaries of the image, the depth map offers us the expected depth of the voxel, which should be the expected distance from the camera. To this end, we employ Equation 2, where \(W\) and \(H\) are the width and height, respectively, of the image: \[s_{in}(v)=\left\{\begin{array}{l}1,if\ 0\leq vp_{x}\leq W\ \&\ 0\leq vp_{y}\leq H\\ 0,otherwise\end{array}\right. \tag{2}\] Computing the distance from a voxel to the camera has been done during the backprojection step. In the non-homogeneous form of the projection of the voxel, we have the \(v_{z^{\prime}}\) component which essentially represents the distance between the voxel and the camera. The \(v_{proj}\) vector defines in which pixel the voxel projects, such that using the depth map we extract the expected distance of the voxel from the camera. Thus we denote \(e_{dist}(v)=Depth[v_{proj}]\) and \(c_{dist}(v)=v_{z^{\prime}}\) which describe the expected distance of a voxel from the camera and the real distance of a voxel from the camera, respectively. We denote another consistency measure for a voxel based on the depth information in Equation 3, and a parameter \(\varepsilon_{seen}\) which controls the threshold for the depth consistency. \[s_{depth}(v)=\left\{\begin{array}{l}1,\ if\ |e_{dist}(v)-c_{dist}(v)|< \varepsilon_{seen}\\ 0,otherwise\end{array}\right. \tag{3}\] We define our _depth consistency_ function as \(seen(v)=s_{in}(v)*s_{depth}(v)\). The _seen_ function describes whether a voxel is depth consistent from a given position and orientation of the camera, based on the corresponding depth map. However, as there are multiple camera poses across the reconstruction data, we define a voting scheme by accumulating the times a voxel is considered as _seen_. To this end, during initialization, we define an array that accumulates the values of the \(seen\) function for all voxels, across all the reconstruction points. We call this array \(seen\_array\) and formally define it as \(seen\_array(v)=\sum_{i=1}^{T}seen(v)\), where each \(i\) represents a pose of the drone. The voting accumulation allows us to further define a threshold for which a voxel is considered depth consistent across all frames if \(seen\_array(v)>seen\_threshold\), where \(seen\_threshold\) is a parameter. ### Color consistency If the voxel projects inside the image bounds, we can also get the expected color of the voxel, based on the pixel in which it projects, as we can extract it from the RGB image itself. A voxel is considered color consistent if it can be seen from multiple viewpoints with roughly the same color. The main objective is to assign colors to each voxel such that they are consistent with the RGB images captured by the drone. In Section 3.2 we have defined a \(seen\) measure to represent whether a voxel is actually consistent from a given point of view. 
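Reusing the helpers from the sketch above, the per-view \(seen\) test and its voting accumulation could be written as follows; the containers `voxels` and `dataset`, the intrinsics `K`, and the two thresholds are assumed to be provided by the caller, and the loop is a direct, unoptimized transcription of the description above.

```python
import numpy as np

def seen(v_xyz, P_xyz, E_xyz, depth_map, K, eps_seen):
    """seen(v) = s_in(v) * s_depth(v): the voxel projects inside the image (Eq. 2)
    and its computed distance agrees with the depth map (Eq. 3)."""
    R = euler_to_rotation(*E_xyz)
    v_proj, c_dist = backproject(v_xyz, P_xyz, R, K)
    H, W = depth_map.shape
    if not s_in(v_proj, W, H):
        return 0
    px, py = int(round(v_proj[0])), int(round(v_proj[1]))
    e_dist = depth_map[py, px]              # expected distance from the depth map
    return int(abs(e_dist - c_dist) < eps_seen)

def depth_consistency_votes(voxels, dataset, K, eps_seen):
    """Accumulate seen votes over all T poses; voxels whose count exceeds
    seen_threshold are kept as depth consistent."""
    seen_array = np.zeros(len(voxels), dtype=np.int32)
    for P_xyz, E_xyz, _rgb, depth_map in dataset:
        for i, v_xyz in enumerate(voxels):
            seen_array[i] += seen(v_xyz, P_xyz, E_xyz, depth_map, K, eps_seen)
    return seen_array
```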
With this in mind and knowing which pixel, a voxel projects, we can also extrapolate its expected color, by querying the RGB image in that specific position. However, the expected voxel color is bound to the current point of view of the drone, thus we need a way to aggregate all the possible colors from all the viewpoints of the drone. Whenever a voxel is projected to the image space, the _seen_ function says whether it is actually seen by the camera or not. Furthermore, the exact pixel in which it projects is computed, such that the RGB image is queried for the expected color of the voxel. To counter different illumination variations in the RGB color space, we instead use the HSV color space alongside a discretization scheme. The discretization of the HSV space is done on each channel individually, by splitting the whole interval on which the channel is defined into multiple equal smaller intervals, for which we refer to as _bins_. Whenever we convert a bin back to its HSV counterpart, we select the middle point of the interval as the representative point. The HSV color space has been discretized into 15 bins on the Hue channel, 10 bins on the Saturation channel, and 10 bins on the Value channel. In total, for each voxel, there can be 1500 combinations of HSV bins for all the colors in the HSV spectrum. Essentially, the discretization method maps any given color of the HSV color spectrum into one of the 1500 bins that we have defined, facilitating 'closer' colors to be mapped in the same bin. This means that a voxel, seen from multiple viewpoints, has slightly different colors due to changes in illumination or small shadows and its color will be mapped in the same bin, further enhancing that the voxel is color consistent. Formally, let \(v\) be a voxel for which \(seen(v)=1\), given a specific position and orientation of the drone. The image \(RGB_{i}\) is the image captured by the drone in this scenario. As the voxel projects inside the image bounds, \(vp_{x}\) and \(vp_{y}\) represent the exact pixel where the voxel projects, leading to \(RGB_{i}[vp_{y},vp_{x}]\) being the expected RGB color of the voxel. Converting the RGB image to HSV and applying the discretization method, we arrive at Equation 4. \[bin(v)=discretize(HSV_{i}[vp_{y},vp_{x}])=[v_{H},v_{S},v_{V}]^{\prime} \tag{4}\] \(v_{H}\), \(v_{S}\), and \(v_{V}\) represent the bin on the Hue, Saturation, and Value components for the original RGB color. In a similar way to how depth consistency is defined, we need a way to aggregate the colors with which the voxels are seen across all the data points. We define an array called \(bins\) which has \(card(V)\) lines and 1500 columns. The number of columns is computed from the number of combinations of the discretized HSV space, for which we use a basic linearization method for the bins themselves. This array is updated through each frame during the reconstruction algorithm for each voxel that is seen, according to the color resembled in the RGB image. For depth consistency, we added a single vote for the aggregation to decide whether a voxel is consistent across all the frames. In the case of color consistency, we instead propose a different method, which takes into account both the distance between the voxel and the camera and the difference between the expected depth of the voxel and its actual one. 
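The HSV discretization and bin bookkeeping described above can be sketched as follows, assuming OpenCV-style value ranges (H in [0, 180), S and V in [0, 256)) for the colour-space conversion; the distance-dependent weighting of the votes is introduced in the next paragraphs and is therefore omitted here.

```python
import cv2
import numpy as np

H_BINS, S_BINS, V_BINS = 15, 10, 10        # 15 * 10 * 10 = 1500 bins in total

def discretize(hsv_pixel):
    """Equation 4: map an HSV colour to its (v_H, v_S, v_V) bin triple."""
    h, s, v = hsv_pixel
    hb = min(int(h * H_BINS / 180.0), H_BINS - 1)
    sb = min(int(s * S_BINS / 256.0), S_BINS - 1)
    vb = min(int(v * V_BINS / 256.0), V_BINS - 1)
    return hb, sb, vb

def linearize(hb, sb, vb):
    """Flatten the bin triple into a single column index in [0, 1500)."""
    return (hb * S_BINS + sb) * V_BINS + vb

def bin_to_rgb(hb, sb, vb):
    """Pick the middle point of each interval and convert back to RGB."""
    hsv = np.uint8([[[(hb + 0.5) * 180.0 / H_BINS,
                      (sb + 0.5) * 256.0 / S_BINS,
                      (vb + 0.5) * 256.0 / V_BINS]]])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)[0, 0]

# The bins array has card(V) rows and 1500 columns, one row per voxel:
# bins = np.zeros((len(voxels), H_BINS * S_BINS * V_BINS), dtype=np.float64)
```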
Our intuition comes from the fact that voxels far away from the camera tend to have lower color components in the RGB image, so we prefer to give more importance to voxels closer to the camera, as they appear much bigger. On the other hand, if the expected depth of the voxel is further away than the actual depth of the voxel, it means that the voxel is occluded in the scene, meaning that the current color should not have too much importance on the real color of the voxel. According to this intuition, we define \(f_{1}(x)=e^{-\frac{\ln(\alpha)}{250}\cdot x}\) as a function of distance which is monotonically decreasing with the distance between the voxel and the camera. Voxels farther away will have small values when plugged into this function, while voxels closer to the camera yield higher values. We also define \(f_{2}(x)=e^{-\frac{x^{2}}{2\sigma^{2}}}\), which decreases with the magnitude of the difference between the expected depth and the computed depth of the voxel. For voxels with similar computed and expected depths, the function has high values. The \(\alpha\) parameter controls the value the function takes when voxels approach the maximum distance (250 meters in our experiments). We ignore voxels whose distance is greater than 250 meters as they tend to induce errors during the reconstruction. The \(\sigma\) parameter controls the width of the Gaussian, which translates into how large the difference between the expected depth and the computed depth can be. Given a voxel \(v\) and a specific pose of the drone, we define the color consistency as in Equation 5, \[color(v)=f_{1}(v_{z^{\prime}})*f_{2}(e_{dist}(v)-c_{dist}(v)) \tag{5}\] Finally, we accumulate these consistency values across all frames and for all depth-consistent voxels. We compute the expected color based on the projection of the voxel and add to the corresponding HSV bin the value of the _color_ function, as in Equation 6. \[bins[v,bin(v)]_{i+1}=bins[v,bin(v)]_{i}+color(v) \tag{6}\] After passing through all the frames, normalization is applied over the bins array, such that the difference in magnitudes is taken into account across all bins. \[bins[v,bin(v)]=\frac{bins[v,bin(v)]}{\sum_{bin}bins[v]} \tag{7}\] With the normalized bins array, the color consistency of a voxel is given by the maximum value across all bins, provided it is greater than a tunable threshold, which we call \(\varepsilon_{hsv}\). If a voxel is color consistent, its color is therefore given by the bin corresponding to that maximum value. In order to obtain the RGB color, we compute the corresponding bins on the H, S, and V components based on the bin, pick the middle points of each interval and convert them back to the RGB color spectrum.

### Multi-scale voxel grid

The multi-scale paradigm implies reconstructing the scene with voxels varying in size and combining the end results in a way that improves the image quality when compared to the individual ones. Following this idea, we propose to use the multi-scale approach by reconstructing the same scene using multiple increasing voxel sizes and blending the resulting RGB images. Given a set of increasing sizes \(S=\{S_{1},S_{2},...,S_{k}\}\), for each \(k\), the corresponding voxel grid is reconstructed, and the images are regenerated, as they are observed from the camera poses in the test set. The resulting set of images consists of multiple views of the reconstructed scene, at multiple scales, for each \(k\).
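Before detailing the blending, the weighted colour voting of Equations 5-7 above can be sketched as follows, reusing `discretize` and `linearize` from the previous sketch; `alpha`, `sigma` and `eps_hsv` stand for the tunable parameters described in the text, and the exact bookkeeping is an assumption on our part.

```python
import numpy as np

MAX_DIST = 250.0                                   # voxels farther than this are ignored

def f1(dist, alpha):
    """Distance weight: decreases with the voxel-to-camera distance."""
    return np.exp(-np.log(alpha) / MAX_DIST * dist)

def f2(depth_err, sigma):
    """Occlusion weight: decreases with |expected depth - computed depth|."""
    return np.exp(-depth_err ** 2 / (2.0 * sigma ** 2))

def add_color_vote(bins, v_idx, hsv_image, v_proj, c_dist, depth_map, alpha, sigma):
    """Equations 5 and 6: add one weighted vote for the colour of voxel v_idx."""
    if c_dist > MAX_DIST:
        return
    px, py = int(round(v_proj[0])), int(round(v_proj[1]))
    weight = f1(c_dist, alpha) * f2(depth_map[py, px] - c_dist, sigma)
    bins[v_idx, linearize(*discretize(hsv_image[py, px]))] += weight

def voxel_color_bin(bins_row, eps_hsv):
    """Equation 7 plus thresholding: normalise the row and return the dominant
    bin index, or None if the voxel is not colour consistent."""
    norm = bins_row / max(bins_row.sum(), 1e-12)
    best = int(np.argmax(norm))
    return best if norm[best] > eps_hsv else None
```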
Finally, the blending is done on an image level using a masking mechanism, starting from the smallest size \(S_{1}\). The mask denotes those pixels that are observed as empty, such that the larger-scale images can fill the empty pixels in the smaller-scale images. During the reconstruction process, the voxels within the scene are independent of each other, thus allowing for a better implementation of the initial voxel grid. The initial proposal described a single larger voxel grid that had a similarly large memory footprint. Due to the independence of the voxels, the large voxel grid can be split into disjoint smaller voxel grids, which we call _blocks_, for which the reconstruction algorithm is applied iteratively. Moreover, as the blocks are getting smaller, the voxels comprised in a single block can also get smaller, such that the scene is reconstructed at a higher resolution.

### Voxel grid parameters

In this subsection, we present the parameters used for the construction of the Voxel Grid. Throughout all of our experiments, we have set the voxel size as \(voxel\_size=0.5\), which roughly translates to a size of half a meter. The grid consists of multiple voxels "stitched" together with no space in between them. Thus we arrange the voxels in a parallelepiped defined by a position in the World Space and a width, height, and length, signifying its sides. The position is, essentially, the position of one voxel in the world, be it either the central voxel of the grid or one of its corners. The width, height, length, and position are scene dependent, as the voxel grid must cover most of the scene, while also taking into account the positions and orientations of the drone. Even though this can be done in a dynamic way based on the drone information, we have opted to set them empirically and keep the sides of the voxel grid constant, while moving its position based on the reconstructed scene. The height is set to 80 meters (160 voxels), the length is set to 550 meters (1100 voxels) and the width is set to 550 meters (1100 voxels).

### Reconstruction enhancement

Despite our efforts to maximize the density, image noise and low consistency scores cause our output to have a number of empty regions. In order to address this problem, we developed an algorithm to densify its prediction. Similar to depth completion algorithms, ours attempts an RGB completion based on the reconstructed input. The algorithm is trained and tested on the same sets used for voxel carving. It consists of a small U-Net [13] (1M parameters) trained to fill in the gaps. Two versions of the algorithm were tested, one with RGB input and another one with RGB and pose encoded as 6 channels, 3 for the normalized pose and 3 for the Euler angles encoded as \(\sin x\cdot\cos x\).

## 4 Experimental Analysis

We focus on aerial scene reconstruction - as opposed to a single-centered object generally targeted by fast reconstruction methods. We use the public real UFODepth dataset proposed in [9], which features telemetry data (commonly provided by UAV manufacturers) and a diverse set of landscapes, with both vegetation and man-made structures. The flight altitude is generally 50 m and the flight trajectory is manual. Although the original resolution is 4K (3180 \(\times\) 2160 px), we rescale the frames to 1920 \(\times\) 1080 for most experiments. We use 5-minute videos (9000 frames in total) which we sample every 20 frames.
We report results for all four scenes from the dataset - Slanic, Olanesti, Chilia, and Herculane. For the first stage of the algorithm, we extract metric depth using an optical flow-based method, termed OdoFlow from [9] and FlowFormer [5] as a pretrained optical flow algorithm. For validation, we also compute the depth from SfM using Meshroom [4]. We split each scene of the dataset into two parts: the reconstruction part and the reprojection part. The reconstruction part contains 80% of the data points, leaving the rest of 20% as the reprojection data. We reconstruct the scene based on only the reconstruction data and test the results using the reprojection data. In this way, we ensure that the results are based on data that was not used during the reconstruction process. During the reprojection phase, we use the poses of the drone to place a virtual camera with the same intrinsic parameters as the intrinsics of the drone and capture an RGB Image for each pose. ### Results and discussion We compare our method against both implicit and explicit depth methods with similar computing requirements. Instant-NGP [12] is a fast framework that benefits from a 2D location hashing scheme and a CUDA-optimized MLP architecture. We compare two versions - vanilla and depth input. The depth support was added after the initial code release and was not officially supported. We use the same flow-based depth for our method, in order to provide a fair comparison. We have chosen two complementary voxel-based methods. The first one, Direct Voxel Grid Optimization[16] is similar to a NeRF that has the neural network replaced by a dense voxel grid, making it the most similar to our approach. On the other hand, Plenoxels [3] use a sparse grid in order to build a more efficient scene representation. As Table 1 shows, we achieve competitive performance among the compared methods on the training set and superior numbers on the testing set. It is notable that implicit methods that use input depth make little use of it, often resulting in poorer performance (e.g., Instant-NGP with depth). We argue that the testing set should be the focus for the novel view reconstruction, as overfitting the training set generally results in artifacts on novel views - missing structures or low-density structures, as shown in Figure 2. We conduct an ablation study on the final self-supervised learning stage and report our results in Table 2. This validates the final stage - it filters out the noise from the carving algorithm. Even without this later stage, our algorithm still yields competitive performance on the test scenes. Figure 3: Multi-scale voxel carving reconstruction at different scales - from left to right, 0.5, 0.25, 0.125 m voxel, followed by the merged result. Multi-scaling allows us to better reconstruct the scene and reduces the void areas (white in the picture above). Figure 2: Qualitative testing results from all scenes. From left to right, Plenoxels[3], Instant-NGP[12], MSVC full, MSVC before learning, ground truth. While other methods perform poorly on unseen images, displaying artifacts and significant noise levels, ours preserves more fine details and manages to reproduce a higher fidelity image. Best seen in color and zoom. A complementary video with more qualitative results can be seen here [https://youtu.be/dqN_OUVzscE](https://youtu.be/dqN_OUVzscE). **Runtime**. 
In terms of technical performance, we have measured the running time of the algorithm on a single Tesla P100 GPU, on a machine equipped with an Intel Xeon E5-2640 v4 CPU. The projection was run on the GPU to leverage fast matrix multiplication, while the colorization is executed on the CPU due to the large memory footprint of the bins array. We describe the mean running times in Table 3, where the reported times are averaged across all scenes with voxels varying in size. The scene consists of a grid of voxel blocks, where each block has a height of 80 m, a width of 50 m, and a length of 50 m. As each block is reconstructed independently, the running time increases linearly with the number of blocks used in the initial grid.

**Impact.** Novel view synthesis with explicit depth representation is a helpful tool for safer navigation or environmental monitoring. With the ever-increasing resolution capabilities of UAV cameras, our multi-resolution grid could be adapted to smaller voxel sizes and result in more accurate maps.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline Method / Scene & \multicolumn{2}{c|}{Slanic} & \multicolumn{2}{c|}{Olanesti} & \multicolumn{2}{c|}{Chilia} & \multicolumn{2}{c|}{Herculane} & \multicolumn{2}{c|}{Mean} \\ \hline & Train & Test & Train & Test & Train & Test & Train & Test & Train & Test \\ \hline DVGO[16] & 18.90 & 15.34 & 17.29 & 14.34 & 18.01 & 16.43 & 18.12 & 15.43 & 18.08 & 15.38 \\ \hline Plenoxels[3] & **23.08** & 17.22 & **21.15** & 15.92 & 21.19 & 18.59 & 21.60 & 16.07 & 19.5 & 15.38 \\ \hline Instant-NGP[12] & 19.91 & 16.54 & 20.36 & 19.25 & **21.88** & 21.01 & 20.69 & 19.12 & **20.71** & 18.98 \\ \hline Instant-NGP, Depth (1) & 20.18 & 16.63 & 14.98 & 15.55 & 22.09 & **21.32** & **20.48** & 18.04 & 19.43 & 17.88 \\ \hline \hline MSVC, Depth (1) & 20.64 & 20.05 & 19.66 & 19.35 & 20.53 & 19.91 & 20.05 & 19.93 & 20.22 & 19.81 \\ \hline MSVC, Depth (2) & 21.46 & **20.49** & 19.55 & **19.43** & 21.41 & 21.29 & 20.18 & **20.25** & 20.65 & **20.36** \\ \hline \end{tabular} \end{table} Table 1: PSNR reconstruction results on the UFODepth dataset. While many methods achieve better results at learning the training set, they have poorer performance for poses that are not close neighbors of the training set. Our method consistently yields better reconstruction errors on the testing set, with much lower performance drops on the test set. Depth (1) denotes OdoFlow depth, while (2) refers to SfM depth. Higher is better.

## 5 Conclusion

We present a method for novel view synthesis of real scenes which is grounded in the real world and proved its effectiveness on large-scale scenes with noisy pose measurements. While most methods are tested on synthetic scenes and on the same images used for training, we evaluated and showed significant improvements over state-of-the-art from viewpoints that are drastically different from the ones seen in training. Moreover, we showed that our method suffers from very little degradation between the seen (training) and unseen (testing) views, unlike the recent competition, which shows significant
degradation between the two, indicating strong overfitting. On the algorithmic side, we demonstrated the effectiveness of both modules. Multi-scale voxelization ensures that we start from good coverage and low error from drastically different unseen views, while our reconstruction enhancement module is able to self-train on the seen views and effectively adapt to each scene in a relatively short amount of time. We believe that our dual geometric and deep learning approach covers new ground in this area and has the potential to push the boundaries further towards novel view synthesis that is grounded in reality.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline Method / Scene & \multicolumn{2}{c|}{Slanic} & \multicolumn{2}{c|}{Olanesti} & \multicolumn{2}{c|}{Chilia} & \multicolumn{2}{c|}{Herculane} & \multicolumn{2}{c|}{Mean} \\ \hline & Train & Test & Train & Test & Train & Test & Train & Test & Train & Test \\ \hline MSVC Steps 1-2 & 19.01 & 18.33 & 17.18 & 17.06 & 18.80 & 18.72 & 18.68 & 18.48 & 18.41 & 18.14 \\ \hline MSVC full & 21.46 & **20.49** & 19.55 & **19.43** & 21.41 & 21.29 & 20.18 & **20.25** & 20.65 & **20.36** \\ \hline \end{tabular} \end{table} Table 2: Ablation study for the self-supervised learning module. The result from the multi-scale voxel grid is used as a prior for the student, using only the training data available for the voxel carving algorithm (steps 1-2). This final step enables higher quality novel view synthesis on the testing set (full pipeline, steps 1-3). Higher is better.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Voxel Size [m] & Projection [s] & Colorization [s] & Total [s] \\ \hline 0.125 & 2.07 & 4.55 & 6.62 \\ \hline 0.25 & 0.35 & 1.11 & 1.46 \\ \hline 0.5 & 0.07 & 0.12 & 0.19 \\ \hline \end{tabular} \end{table} Table 3: Mean running times of the reconstruction algorithm per training image. Colorization tends to take most of the time when decreasing the voxel size due to the color consistency algorithm. Although a finer voxel size than 0.125 m is possible, it was deemed too compute-intensive and delivers small performance gains.

## 6 Acknowledgements

This work was funded by UEFISCDI, under Projects EEA-RO-2018-0496 and PN-III-P4-ID-PCE-2020-2819. We want to express our sincere gratitude towards Aurelian Marcu and The Center for Advanced Laser Technologies (CETAL) for their generosity and for providing us access to GPU computational resources.
2304.02712
Efficient and Accurate Automatic Python Bindings with cppyy & Cling
The simplicity of Python and the power of C++ force stark choices on a scientific software stack. There have been multiple developments to mitigate language boundaries by implementing language bindings, but the impedance mismatch between the static nature of C++ and the dynamic one of Python hinders their implementation; examples include the use of user-defined Python types with templated C++ and advanced memory management. The development of the C++ interpreter Cling has changed the way we can think of language bindings as it provides an incremental compilation infrastructure available at runtime. That is, Python can interrogate C++ on demand, and bindings can be lazily constructed at runtime. This automatic binding provision requires no direct support from library authors and offers better performance than alternative solutions, such as PyBind11. ROOT pioneered this approach with PyROOT, which was later enhanced with its successor, cppyy. However, until now, cppyy relied on the reflection layer of ROOT, which is limited in terms of provided features and performance. This paper presents the next step for language interoperability with cppyy, enabling research into uniform cross-language execution environments and boosting optimization opportunities across language boundaries. We illustrate the use of advanced C++ in Numba-accelerated Python through cppyy. We outline a path forward for re-engineering parts of cppyy to use upstream LLVM components to improve performance and sustainability. We demonstrate cppyy purely based on a C++ reflection library, InterOp, which offers interoperability primitives based on Cling and Clang-Repl.
Baidyanath Kundu, Vassil Vassilev, Wim Lavrijsen
2023-04-05T19:12:05Z
http://arxiv.org/abs/2304.02712v1
# Efficient and Accurate Automatic Python Bindings with cppyy & Cling ###### Abstract The simplicity of Python and the power of C++ force stark choices on a scientific software stack. There have been multiple developments to mitigate language boundaries by implementing language bindings, but the impedance mismatch between the static nature of C++ and the dynamic one of Python hinders their implementation; examples include the use of user-defined Python types with templated C++ and advanced memory management. The development of the C++ interpreter Cling has changed the way we can think of language bindings as it provides an incremental compilation infrastructure available at runtime. That is, Python can interrogate C++ on demand, and bindings can be lazily constructed at runtime. This automatic binding provision requires no direct support from library authors and offers better performance than alternative solutions, such as PyBind11. ROOT pioneered this approach with PyROOT, which was later enhanced with its successor, cppyy. However, until now, cppyy relied on the reflection layer of ROOT, which is limited in terms of provided features and performance. This paper presents the next step for language interoperability with cppyy, enabling research into uniform cross-language execution environments and boosting optimization opportunities across language boundaries. We illustrate the use of advanced C++ in Numba-accelerated Python through cppyy. We outline a path forward for re-engineering parts of cppyy to use upstream LLVM components to improve performance and sustainability. We demonstrate cppyy purely based on a C++ reflection library, _InterOp_, which offers interoperability primitives based on Cling and Cling-Repl. ## 1 Introduction The C++ programming language was adopted in the mid-'90s by the high energy physics (HEP) community to provide a more modern yet performant programming environment. At the time, HEP had some of the world's largest data sets (and processing rates). Python usage in HEP started in the mid-2000s, initially only for interactive access to experiments' frameworks and analysis. Meanwhile, the rest of the world caught up with and even surpassed HEP data processing needs, and a complete ecosystem has grown up around Python to support data science. Much of the modern software for HEP physics analysis, such as Awkward Array [1], Uproot [2], and Coffea [3], is grounded in this Python ecosystem. C++ remains favored because of its performance, access to accelerators in heterogeneous computing environments, and having foundational libraries in HEP, such as ROOT [4], GEANT4 [5], and most experiments' processing and production software frameworks. With both languages playing important roles, Python C++ integration has become instrumental. However, more advanced integration is necessary for high-performance codes and codes that run in heterogeneous environments. For example, a data processing task performed by a C++ framework running a user-provided Python function or kernel incurs significant overhead because of the large number of language crossings. Similarly, if the task is run as part of a GPU workflow, it will incur a performance penalty due to offloading since the Python code can only run on the CPU. Numba [6], a just-in-time compiler (JIT) for Python code, addresses this problem: it is capable of compiling Python code, targeting either the CPU or GPU, and provides callable interfaces to use the JITed closures from low-level libraries. 
Thus, Numba greatly improves the performance of Python code not only by lowering it to machine code but also by removing unnecessary crossings of language (or device) boundaries in inner loops. However, Numba has two shortcomings: preparing external code, such as functions written in C, to be usable for the Numba JIT is an involved, manual process; and Numba does not support C++ directly. Cppyy, an automatic runtime bindings generator [7] based on the C++ interpreter Cling [8], can address both these shortcomings. One goal of our work is to bring advanced C++, in particular highly optimized numerics libraries, into Numba-accelerated Python in a fully automatic way, with minimal language crossing overhead. In this paper, we present a fully generic prototype that provides, lazily and at runtime, Numba extensions for bound C++ through cppyy. Our prototype supports overloading, C++ templates, data member access, and instance method calls in Numba-accelerated Python. We outline a path forward for re-engineering cppyy's backend to use LLVM components more directly, thus improving performance and sustainability. We demonstrate cppyy on InterOp, a new library that implements interoperability primitives based on Cling and Clang-Repl. This paper is organized as follows: Section 2 motivates our technology and design choices; Section 3 provides a bird's eye overview of the implementation design; Section 4 shows timing results with the current implementation; Section 5 discusses the enhancements brought about by a new reflection library, InterOp, which will allow for even better performance and greater flexibility when retargeting codes; Section 6 discusses related work; finally, Section 7 summarizes our findings. ## 2 Motivation The typical approach to speed up a Python application has been to write the performance-critical portions of it in a lower-level language, usually C, and then access that code in the Python interpreter through an extension module. A much more elegant solution is to lower the original Python code to native through the use of a JIT so that the developer can stay within a single environment, Python, and only needs to write and debug the code once. In HEP, it is critical that such a JIT works well with bound C++ code and that JITed Python code is usable from C++, since any full HEP software stack relies heavily on both languages. For this paper, we use Numba as the Python JIT and integrate C++ into it through cppyy. Numba has an extension application programming interface (API) to add functions and types for the JIT to recognize, annotate, and lower to LLVM IR (intermediate representation; which in turn gets assembled to native code). Manually writing such extensions is tedious and time-consuming, so automation is critically important, especially to meet the goal of being able to use the JIT transparently. Integrating C++ with Numba would additionally make it easy to use Python kernels in C++ code without losing performance, thus reaping the benefits of continued use of a large installed base of C++ codes. To enable this, we use cppyy to generate first-class Python objects for bound C++, which enables Numba to query for reflection information through regular Python introspection and for cppyy to provide the necessary Numba extensions lazily at runtime. The result is a generic, compact, and efficient implementation. 
Modern C++ is especially amenable to automation: successive C++ standards have moved the language away from low-level code to more expressive abstractions for the purposes of code quality and optimizing compilers. In modern C++, many previously difficult to analyze corner cases are now more clear; compare, e.g., ownership rules for raw v.s. smart pointers. Furthermore, cppyy is based on Cling, the C++ interpreter, to have C++ match Python's dynamism, interactivity, and runtime behavior. Cling itself is based on Clang/LLVM, which allows it to stay up to date even as the C++ language continues to evolve rapidly. The use of Cling also means that unresolved corner cases can be handled at runtime in either C++ or Python as appropriate, without necessitating an intermediate bridging or mapping language. ## 3 Implementation We introduce C++ into Numba through a new reflection interface on top of cppyy, which resolves overloaded functions and instantiates C++ templates during Numba's type annotation step; and which provides the low-level description for lowering to LLVM IR for subsequent optimization and conversion of assembly to machine code. This process is completely automatic and transparent, including integration with other manually provided Numba extensions. Performance is on par, with only a moderate additional warm-up cost. Python programmers can thus continue to develop and debug their codes fully in Python while simply switching on the Numba JIT for selected performance-critical closures. Python is dynamically typed. It achieves this by wrapping objects into so-called PyObjects via a method known as _boxing_ and retrieves their original value via a method known as _unboxing_. Since Numba needs to eliminate these Python _boxing_ and _unboxing_ steps, it requires accurate typing information ahead of time. To provide detailed, exact and low-level C++ type information to Numba, which is needed in addition to the type information already available through Python's introspection, we introduce a new _Reflection API_ to cppyy. The reflection API follows the request/reply pattern and adds a new function to all cppyy-bound objects called __cpp_reflex__. The function has a constant signature for all cppyy object types, but its functionality depends on which one it is called on. It takes two arguments, the type of reflection information and its format, and returns the information requested. The request can range from asking if the object represents a namespace or class to requesting the object's C++ type. The format argument specifies whether we want the information in a string format, a C++ type, or in the manner that the API thinks is the most optimal. Fig. 1 shows the layered approach taken by Numba to separate concerns. Numba analyzes a Python closure as directed, and upon encountering cppyy types, it queries cppyy's pre-registered _numba_ext_ module for exact type information. This module, in turn, queries cppyy's new reflection API if it encounters a type that has not been seen before. The module then uses Numba's extension and core modules to generate the necessary typing classes, to be returned to Numba during this typing step; and lowering methods, which Numba will expect in its next step, to write the LLVM IR. Each core language construct (free functions, namespaces, classes, methods, data members, etc.) has its own implementation. For instance, to support C++ free functions, we extend the Numba _Callable_ type and register it as the type provider for cppyy function objects. 
When Numba encounters a C++ function bound by cppyy, it queries this registered type for the function signature (i.e., the return and argument types), given the actual argument types traced so far. Cppyy's _numba_ext_ uses the argument types to select an appropriate overload, aided by the new reflection API, and wraps this overload in a unique typing class. Finally, _numba_ext_ registers a lowering method uniquely generated for the address of the chosen C++ function overload, which will be called during the Numba lowering step. This process provides Numba with all the information it needs to convert the function call to LLVM IR.

Figure 1: The interaction diagram for Numba, numba_ext and cppyy.

## 4 Results

We benchmark the running time of Numba JITed functions with cppyy objects against their Python counterparts to obtain the time taken by Numba to JIT the function (Numba JIT time), the time taken by cppyy to create the typing info and possibly perform lookups and instantiate templates (cppyy JIT time), the time taken to run the function after it has been JITed (Hot run time), and the time taken to run the equivalent Python function. The results are obtained on a 3.1GHz Intel NUC Core i7-8809G CPU with 32G RAM. To evaluate the speedup obtained by Numba JITing of cppyy objects, we use the fixture in Fig. 2. For each benchmark case in Tab. 1, a Numpy array of size \(100\times 100\) was passed to the function; times indicated are averages of 3000 runs. Numba JITed functions achieve a minimum speedup of 2.3 times in the case of methods and a maximum speedup of nearly 21 times in the case of templated free functions. Tab. 1 summarizes timings obtained on our chosen benchmarks.

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Benchmark Case** & **Cppyy JIT** & **Numba JIT** & **Hot run** & **Python run** & **Speedup** \\ & **time (s)** & **time (s)** & **time (s)** & **time (s)** & \\ \hline Function w/o args & 1.72e-03 & 3.33e-01 & 3.58e-06 & 1.73e-05 & 4.84\(\times\) \\ Overloaded functions & 1.05e-03 & 1.35e-01 & 4.51e-06 & 3.47e-05 & 7.70\(\times\) \\ Templated free functions & 8.92e-04 & 1.45e-01 & 3.48e-06 & 7.18e-05 & 20.66\(\times\) \\ Class data members & 1.43e-06 & 1.33e-01 & 5.87e-06 & 1.82e-05 & 3.10\(\times\) \\ Class methods & 2.16e-03 & 1.39e-01 & 6.06e-06 & 1.43e-05 & 2.36\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Performance comparison.** The columns **cppyy JIT time** and **Numba JIT time** represent respectively the time taken by cppyy to create the Python objects for C++ entities and time taken for Numba to compile the Python function. Both do not execute the compiled functions. Column **Hot run time** shows the time taken to run the Numba JITed function, whereas **Python run time** shows the time taken to run the uncompiled Python function with cppyy code. Column **Speedup** compares the _Hot run time_ to _Python run time_.

Figure 2: **Benchmark fixture for 'Templated free functions' case.** On the left side, the C++ templated function is declared in cppyy. In the center, we use a Python kernel to run this C++ function. On the right, we use the same kernel but add the Numba JIT decorator to accelerate it and compare the timing of the two kernels. The other cases use the same setup, with only the highlighted code being replaced by the case being benchmarked.
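As the figure itself is not reproduced here, a minimal stand-in for the 'templated free functions' fixture might look like the sketch below; it assumes a cppyy release that ships the numba_ext extension module described above, and the template `square` is purely illustrative rather than the benchmark's actual kernel.

```python
import numpy as np
import numba
import cppyy
import cppyy.numba_ext      # registers cppyy-bound C++ with Numba's typing and lowering

# Declare a C++ function template through the Cling-backed interpreter.
cppyy.cppdef(r"""
template<typename T>
T square(T x) { return x * x; }
""")

@numba.njit
def kernel(a):
    total = 0.0
    for x in a:
        total += cppyy.gbl.square(x)   # template instantiated lazily for double
    return total

a = np.random.rand(100 * 100)
kernel(a)   # the first call pays the Numba JIT and cppyy typing cost
kernel(a)   # subsequent "hot" calls run at native speed
```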
## 5 Further Improvements

A promising area of research is incorporating the LLVM IR of the C++ code directly into the Numba-generated LLVM IR, in effect inlining the C++ functions into Python. This would open up more optimization opportunities for the compiler and simplify support for alternate targets, such as GPUs. In practice, this requires IR to be sent from Cling to Numba or vice versa. One of the challenges here is that Numba uses the llvmlite [9] package to synthesize LLVM IR, while Cling operates directly on the underlying IR. Since LLVM IR has no guarantees of backward or forward compatibility, this could make a rather fragile dependency. Furthermore, the current design of Cling does not provide the right abstractions and does not have the necessary low-level infrastructure to either extract or inject IR into the interactive C++ environment. We therefore created a new, specialized library called InterOp to mitigate these problems. This library serves as a high-performance interface between C++ and Python, abstracting out details from the underlying compiler API. The InterOp library is built on top of Clang-Repl, a new capability available in LLVM, and has a reflection-focused API surface. InterOp will follow LLVM's release cycles, enabling its clients to quickly adopt new features. The design of InterOp brings significant improvements to cppyy in the time taken and memory used for template instantiations. It shows promising preliminary results thanks to the new way of modeling them. In Fig. 3 (left), we show improvements in unpacking templates where we recursively instantiate a textbook example of a multitype array with an underlying implementation based on std::tuples. The InterOp version is about 40% faster and 4.5% more memory efficient. In Fig. 3 (right), we show that instantiating deeply nested templates scales better. The initial speedup of 6.2 times decreases to 3.8 times at about 4 levels of nesting but improves rapidly after that. In addition, we reduce the total memory used. Beyond the improved performance, the engineering merits of using Clang-Repl and InterOp in cppyy are: (a) adopting new features faster, including potential CUDA support; (b) better runtime and memory performance due to using opaque data structures directly from the underlying compiler rather than their string representations; and (c) a wider range of supported stock LLVM versions.

Figure 3: **Time taken and memory used during class template instantiation.** On the left, we compare template instantiations with std::tuple<double, double,...> where the number of template instantiations done by the C++ interpreter increases with the number of template arguments. On the right, we compare instantiating nested templates, for example, std::vector<...<std::vector<double> >, where cppyy has to instantiate each nesting individually from the innermost to the outermost class template. These are common features of high-performance, templated numerics libraries that utilize template expressions.

## 6 Related Work

Automatically improving the performance of Python code, i.e. without having to rewrite it, has been a research and/or engineering topic for many projects. Chosen methodologies include just-in-time compilation [10][11], transpilation followed by ahead-of-time compilation [12], or using Python as a domain-specific language [13][14].
Just-in-time compilation, however, provides the most Pythonic experience, since it can be used interactively, and it is highly relevant today as the approach best suited to heterogeneous computing environments. The PyPy project implements an alternative Python interpreter that uses a tracing JIT to improve performance. PyPy is fully compatible with the standard CPython implementation and is able to provide significant speedups, especially for numeric code. Cppyy is integrated at the interpreter level into PyPy, which allows C++ code to be called directly from the JITed traces without the need for function wrappers, and which enables the inlining of access to C++ data. PyPy's tracing JIT is, contrary to all other options, not based on LLVM. This reduces dependencies and overheads, but it does mean that the JIT can target fewer platforms and offers fewer optimizations. A weak point of PyPy is support for existing CPython extension modules, access to which, because of different memory models, can be rather slow. Furthermore, because the memory models need to be mapped, it is not as lenient as CPython when it comes to reference counting bugs, leading to incompatibilities. JAX is an alternative implementation of Numpy on top of XLA, a domain-specific compiler to optimize linear algebra for machine learning. It can JIT compile any Numpy-based code, targeting CPUs, GPUs, and TPUs. It takes advantage of the runtime knowledge of actual data types and sizes used to specialize the optimization. JAX can be extended, through the underlying XLA, with user-defined functions and types. However, there is, at the time of writing, no officially supported extension interface, and suggested implementations use undocumented APIs that are unsupported and subject to change. We find that both PyPy and JAX are good alternative candidates for our stated goals, but that Numba provides a better platform: where JAX supports GPUs well and where PyPy supports extension with user-defined functions and types, Numba does both. We will continue to track the developments of these alternative technologies, however, and may revisit our choice in the future if warranted. ## 7 Conclusion Seamless integration between programming languages is crucial in many scientific domains, including high-energy physics. Python and C++ are widely used in this field, and integrating the two languages is important for efficient and accessible workflows. The just-in-time compiler Numba is capable of compiling Python code to native machine code but does not support C++. To address this issue, we introduce C++ into Numba through a new reflection interface on top of cppyy, which is an automatic, runtime bindings generator based on Cling, the C++ interpreter. This approach provides a fully automatic and transparent process, including integration with other Numba extensions, with performance on par with traditional methods. This technology allows HEP programmers to develop and debug their codes in Python, utilizing C++ libraries, while switching on the Numba JIT for selected performance-critical closures. We demonstrate 2-20 times speedup when using Numba to accelerate cppyy through our extension. We outlined further gains using the Clang-Repl component of LLVM and the newly developed library InterOp. The preliminary results from using the new infrastructure showed 1.4 to 144 times faster handling of templated code in two different scenarios in cppyy, which will indirectly improve the Numba-accelerated Python. 
## 8 Acknowledgement This work is supported by the National Science Foundation under Grant OAC-1931408 and under Cooperative Agreement OAC-1836650. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR), SC Program 31.
2302.11386
Entropy Minimization for Optimization of Expensive, Unimodal Functions
Maximization of an expensive, unimodal function under random observations has been an important problem in hyperparameter tuning. It features expensive function evaluations (which means small budgets) and a high level of noise. We develop an algorithm based on entropy reduction of a probabilistic belief about the optimum. The algorithm provides an efficient way of estimating the computationally intractable surrogate objective in the general Entropy Search algorithm by leveraging a sampled belief model and designing a metric that measures the information value of any search point.
Xiaohe Luo, Warren B. Powell
2023-02-22T14:02:42Z
http://arxiv.org/abs/2302.11386v1
# Entropy Minimization for Optimization of Expensive, Unimodal Functions ###### Abstract Maximization of an expensive, unimodal function under random observations has been an important problem in hyperparameter tuning. It features expensive function evaluations (which means small budgets) and a high level of noise. We develop an algorithm based on entropy reduction of a probabilistic belief about the optimum. The algorithm provides an efficient way of estimating the computationally intractable surrogate objective in the general Entropy Search algorithm by leveraging a sampled belief model and designing a metric that measures the information value of any search point. ## 1 Introduction Optimization of noisy, unimodal functions which are expensive to compute has long been a hard problem to solve in the field of global optimization. The underlying problem can be formulated as follows: \[\max_{x\in\mathcal{X}\subseteq\mathbb{R}}\ \ f(x)=\mathbb{E}[F(x,W)] \tag{1.1}\] where \(\mathcal{X}\) is a bounded feasible domain and \(W\) is a random variable representing the noise involved when observing the true function values. The black-box objective function \(f\) is usually continuous, yet nonconvex, and its gradients are inaccessible. In this paper, we restrict our attention to finding the unique global optimum of this class of functions with a unimodal structure. In real-world applications, however, the measurement of the objective function can be both noisy and expensive: tuning hyperparameters of a machine-learning algorithm (Snoek et al., 2012), arranging drug trials, controlling robots, and finding stepsizes in stochastic gradient algorithms, to name just a few applications. When faced with expensive functions, we have to solve these problems with a very limited number of observations. Bayesian optimization (BO) has been a popular strategy for solving the proposed problem due to its ability to find the near optimal solution in fewer experiments (Wu and Frazier, 2016; Snoek et al., 2012). BO achieves this goal by assuming a statistical model on the unknown, complicated objective function, which encodes the prior belief about the underlying function. This statistical model is updated after new observation(s) are made and reflects the incoming knowledge acquired as the experiment proceeds. In addition, Bayesian optimization adopts a comparatively tractable acquisition function as the new objective to guide the sampling process. At each iteration \(n\), a BO algorithm seeks the point that maximizes the acquisition function as the next point for function evaluation, which means there is no attempt to maximize the value of information as is done with policies such as the knowledge gradient. The acquisition function can be identified as the objective function of the imbedded surrogate maximization problem used in three of the four classes of policies: cost function approximations (CFAs), value function approximations (VFAs) and direct lookahead policies (DLAs) (Powell, 2007, 2022). Since Mockus (1994) set up the theoretical foundation, Gaussian Process regression (GP) has become the typical statistical prior most BO algorithms adopt. The focus of the previous literature has been on designing an effective policy, equivalently the acquisition function, based on a GP prior (Williams and Rasmussen, 2006). 
Improvement-based policies proposed in previous works include probability of improvement (PI) (Kushner, 1964), expected improvement (GP-EI) (Mockus et al., 1978; Jones et al., 1998) and knowledge gradient (KG) (Scott et al., 2011; Frazier et al., 2009). These are direct lookahead policies (DLAs in Powell (2022)) as they use the one-step improvement in the estimation of the global optimum, measured by some metric, as the criterion to pick the next query point. Policies based on other criteria are, for example, upper confidence bound (GP-UCB) (Srinivas et al., 2009), which is a popular method in the CFA class and entropy search (ES) (Hennig and Schuler, 2012). Entropy search is particularly interesting because it hybridizes the methodology of one-step lookahead and the idea of finding a good substitute for the unknown value functions. Each of these policies incorporates a distinct philosophy of sampling a good sequence of queries that help one decide on the final optimum. While all of these methods aim at finding the next point for observation, the resulting surrogate objective functions are, though better than the original objective function \(f\), not always easy to maximize. For example, KG and ES involve maximizing the expectation of a complicated, nonconvex function. Let \(\mathcal{D}^{n}\coloneqq\{x^{i},\hat{f}(x^{i})\}_{i=0}^{n-1}\) be the set of data available at iteration \(n\). Specifically, the surrogate objective function adopted by ES takes the form: \[ES^{n}(x)=H(p(X^{*}|\mathcal{D}^{n}))-\mathbb{E}_{\hat{f}(x)|\mathcal{D}^{n}}[H (p(X^{*}|\mathcal{D}^{n}\cup\{x,\hat{f}(x)\}], \tag{1.2}\] where \(\hat{f}(x)\) is a noisy observation at \(x\), \(p(X^{*}|\mathcal{D}^{n})\) is the posterior distribution of the global optimizer \(x^{*}\) at iteration \(n\) and \(H(\cdot)\) is the Shannon differential entropy (Hennig and Schuler, 2012). Formula (1.2) does not have an analytical expression if the GP prior is selected, resulting in a computationally intractable objective (Frazier, 2018; Hernandez-Lobato et al., 2014). As a consequence, it becomes numerically expensive to estimate (1.2) not only because Monte Carlo sampling is required in optimizing this kind of surrogate objective functions (Brochu et al., 2010) but also because each observation is hard to compute. To address the computational difficulty of ES, there have been several attempts in the literature. By recognizing (1.2) to be mutual information, Hernandez-Lobato et al. (2014) proposes Predictive Entropy Search (PES), whose surrogate objective function is theoretically identical to (1.2) by the property of mutual information: \[PES^{n}(x)=H(p(\hat{f}(x)|\mathcal{D}^{n}))-\mathbb{E}_{X^{*}|\mathcal{D}^{n}}[ H(p(\hat{f}(x)|\mathcal{D}^{n},X^{*})]. \tag{1.3}\] A further step is taken by considering the maximum function value \(y^{*}\) instead of \(x^{*}\) in formula (1.3) (Hoffman and Ghahramani, 2015; Wang and Jegelka, 2017), which yields the Max-value Entropy Search (MES): \[MES^{n}(x)=H(p(\hat{f}(x)|\mathcal{D}^{n}))-\mathbb{E}_{Y^{*}|\mathcal{D}^{n}} [H(p(\hat{f}(x)|\mathcal{D}^{n},Y^{*})]. \tag{1.4}\] Even though both of the above methods simplify the procedure of approximating (1.2), equations (1.3) and (1.4) still require Monte Carlo methods to estimate the expectation in the formulas as well as additional techniques such as expectation propagation to sample from the distribution \(p(X^{*}|\mathcal{D}^{n})\) or \(p(Y^{*}|\mathcal{D}^{n})\). 
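To make the cost of estimating (1.2) concrete, the following sketch computes a brute-force Monte Carlo estimate of the entropy-search objective for a toy discrete belief over a handful of candidate curves instead of a GP prior. The candidate family, grid, and noise level are illustrative assumptions; the point is that every candidate \(x\) requires many simulated observations and posterior updates, which motivates the analytical surrogate developed in the remainder of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.01, 6.0, 61)                      # discretized search region
beliefs = [lambda x, s=s: x ** (s - 1.0) * np.exp(-x)  # toy unimodal candidate curves f_k
           for s in (2.0, 3.0, 4.0)]
weights = np.full(len(beliefs), 1.0 / len(beliefs))    # P[theta = theta_k | D^n]
sigma = 0.1

def p_xstar(w):
    # belief over the optimizer: each candidate puts its weight on its own argmax cell
    p = np.zeros(grid.size)
    for wk, fk in zip(w, beliefs):
        p[np.argmax(fk(grid))] += wk
    return p

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def es_objective(x, n_mc=500):
    # H(p(X*|D^n)) - E_{f_hat(x)|D^n}[ H(p(X*|D^n U {x, f_hat(x)})) ], by simulation
    h_after = 0.0
    for _ in range(n_mc):
        k = rng.choice(len(beliefs), p=weights)         # sample a plausible truth
        y = beliefs[k](x) + rng.normal(0.0, sigma)      # simulate the noisy observation
        lik = np.exp(-np.array([(y - fk(x)) ** 2 for fk in beliefs]) / (2 * sigma ** 2))
        post = lik * weights
        h_after += entropy(p_xstar(post / post.sum()))
    return entropy(p_xstar(weights)) - h_after / n_mc

print(es_objective(2.0))   # expected information gain from measuring at x = 2.0
```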
In this paper, we present a new BO algorithm based on the idea of entropy search: the sampled-belief entropy search (SBES). By assuming a sampled belief model on the underlying function \(f\), we convert the hard-to-evaluate ES surrogate objective function into a deterministic function that can be optimized via well-established deterministic, derivative-free optimization methods. This conversion is established on the elimination logic of Fibonacci search (Ferguson, 1960) and the probability of observing the correct gradient signs introduced for a gradient-based method in Powell and Ryzhov (2012); Waeber et al. (2013). In the derivative-free setting, we extend the probability of observing correct gradient signs to the probability of observing the correct location of the optimum, also called the probability of correct region assignment. We show that by carefully designing the probability of correct region assignment, SBES outperforms when the total budget is small. This paper makes the following contributions: 1)We design a new entropy-search based policy that specifically tackles the difficulty in finding the optimum of unimodal, noisy and expensive functions. To combat the challenge introduced by noisy observations and to fully leverage the proposed unimodal structure, a discrete parametric prior is adopted to model the truth function \(f\). 2)By introducing the probability of correct region assignment that can be calculated using the parametric prior, we derive an updating rule for any posterior \(p(X^{*}|\mathcal{D}^{n})\) and thus successfully turn expression (1.2) into an analytical formula. 3)We present an error bound on the one-step information gain of SBES in the stochastic setting. 4) We conduct empirical experiments to compare SBES and other BO algorithms, including both the non entropy-search based and the entropy-search based methods, under different truth functions across various levels of noise. We show that SBES is robust, competitive with other BO algorithms at high noise levels and outperforms at low and medium noise levels. The paper is organized as follows. Section 2 describes the process of solving problem (1.1) formally as a one-dimensional search problem and introduces our proposed models on this problem. The modeling part includes the definition of the probability of correct region assignment function, the sampled belief model and the probability distribution over the belief of \(x^{*}\). Section 3 starts with formulating the one-dimensional search problem as a stochastic sequential decision problem. It is then followed by a discussion of the surrogate objective function we use and ways of solving it when (i) \(\mathcal{X}\) is finite and discrete, or (ii) \(\mathcal{X}\) is compact. Section 4 is devoted to establishing the one-step error bound of our algorithm. This paper concludes with section 5 in which we compare SBES against popular bench-mark algorithms such as GP-UCB, GP-EI and the response surface method. Problem and Models We begin by defining the problem of finding the location of the optimum of a unimodal function mathematically. We formulate the problem of finding the best algorithm (policy) \(\pi\) for learning the value \(x^{\pi,N}\) that maximizes \(\mathbb{E}[F(x,W)]\) in a budget of \(N\) function evaluations. The second part of this section is devoted to the statistical model we use to approximate the underlying function \(f\) and the modeling of uncertainty based on this statistical model. ### Problem Definition. 
Suppose there is a unimodal function \(f:\mathbb{R}\rightarrow\mathbb{R}\) with a feasible domain \(\mathcal{X}\) that can be finite and discrete or compact. There are only noisy observations of this function available whenever we decide to evaluate the function at some points. Let \(\hat{f}(x)\) be the observation of the function \(f\) at a point \(x\in\mathcal{X}\). Mathematically, define \(F(x,W):=\hat{f}(x)\), where \(W\) is a random variable that inherits the randomness of the noise. We then make the following assumptions about \(F(x,W)\): 1. Unbiasedness: at any point \(x\in\mathcal{X}\), \(\mathbb{E}[\hat{f}(x)]=f(x)\). 2. Homoscedasticity: for any \(x\) and \(y\in\mathcal{X}\), \(Var(\hat{f}(x))=Var(\hat{f}(y))=\sigma^{2}\). 3. Gaussian noise: \(\hat{f}(x)=f(x)+\epsilon\), where \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\). 4. \(f(x)\) is continuous and might be differentiable but the gradient of \(f(x)\) is not available to us. Define \(x^{*}\coloneqq\operatorname*{argmax}_{x\in\mathcal{X}}\mathbb{E}[\hat{f}(x)]\). Our challenge is to find the optimum of this unimodal function \(x^{*}\) with a limited number of measurements. Assume there is an algorithm \(\pi\) that produces a solution \(x^{\pi,N}\) after \(N\) iterations. Then our goal is to find the best algorithm that solves: \[\max_{\pi}\mathbb{E}\left\{F(x^{\pi,N},\hat{W})|S^{0}\right\}. \tag{2.1}\] given an initial state \(S^{0}\). In order to determine \(x^{\pi,N}\) after \(N\) iterations, the policy \(\pi\) needs to do learning via exploration in the search region. Ideally, the optimal policy \(\pi\) already knows the location of \(x^{*}\) after \(N\) experiments so that it will map \(x^{\pi,N}\) to \(x^{*}\). With this being said, \(\pi\) also needs to determine the set of points for function evaluation, denoted as \(\{x^{n}\}_{n=1}^{N-1}\), up to iteration \(N\) so that it can give the best estimate of \(x^{*}\). Hence, taking the uncertainty in the initialization \(S^{0}\) into account, problem (2.1) is equivalent to: \[\max_{\pi}\mathbb{E}_{S^{0}}\mathbb{E}_{W^{1},\ldots,W^{N}|S^{0}}\left\{\mathbb{ E}_{\hat{W}}[F(x^{\pi,N},\hat{W})|S^{0}]\right\}. \tag{2.2}\] ### Models. In Fibonacci search, two function evaluations can provide an indication of where the optimum might be and thus eliminate a section of the region in \(\mathcal{X}\) that the optimum is not in. This is a property that comes from unimodality and the assumption of no noise. Starting with two initial points, Fibonacci search chooses one point at each iteration in the search region to evaluate. Then this new function evaluation is used along with the previous function evaluation to narrow the region where \(x^{*}\) might be located. In the stochastic setting, we still want to exploit the property of unimodality and extract the information about the location of \(x^{*}\) from two function evaluations. However, the presence of noise hinders the elimination of regions as is done with classical Fibonacci search, so we maintain a dynamic belief about \(x^{*}\) in the form of distribution instead. This belief is updated by choosing one point from the history of prior observations, and another point \(z^{n}\) at which we perform another function evaluation. The point \(z^{n}\) is chosen to minimize the expected entropy in our belief about the location of \(x^{*}\). For the purpose of this section, we defer the discussion of how exactly we choose those points at each iteration \(n\) to the section 3. 
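To fix ideas, the following minimal sketch spells out the observation model from Section 2.1 on which everything below builds: a unimodal truth observed through additive, homoscedastic Gaussian noise. The particular choice of \(f\) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                   # homoscedastic noise level (assumptions 2 and 3)

def f(x):
    # an illustrative unimodal truth; its gradient is never used (assumption 4)
    return x ** 2.0 * np.exp(-x)

def observe(x):
    # F(x, W) = f(x) + eps with eps ~ N(0, sigma^2), so E[f_hat(x)] = f(x) (assumption 1)
    return f(x) + rng.normal(0.0, sigma)

print(observe(2.0), observe(2.0))             # two independent noisy evaluations at x = 2
```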
For now, assume that at iteration \(n\), we pick a new point \(z^{n}\in\mathcal{X}\) and a point from history \(h^{n}\in H^{n}\), where \(H^{n}\coloneqq\{x^{0}\}\cup\{z^{i}\}_{i=1}^{n-1}\) is the set of historically chosen points up to time \(n\) and \(x^{0}\) is the set of initial points. So the set of points we pick at iteration \(n\) is: \(x^{n}=(h^{n},z^{n})\). We then perform an (expensive) function evaluation \(\hat{f}(z^{n})=F(z^{n},W^{n+1})\), and define the history of observations to be \(\hat{f}^{n}\coloneqq\{\hat{f}(x^{0})\}\cup\{\hat{f}(z^{i})\}_{i=1}^{n-1}\). A prior distribution of belief of the location of \(x^{*}\) can be constructed using \((H^{n},\hat{f}^{n})\), denoted as \(P^{n}\). The posterior distribution \(P^{n+1}\) can then be calculated given the comparison between \(\hat{f}(h^{n})\) and \(\hat{f}(z^{n})\) using Bayes theorem. In this section, the relative location of \(h^{n}\) and \(z^{n}\) is important; that is, whether \(h^{n}<z^{n}\) or \(h^{n}>z^{n}\) determines how we update \(P^{n}\). So we label the smaller of the two points \(x^{n}_{l}\) and the larger of the two points \(x^{n}_{r}\). Then \(x^{n}=(h^{n},z^{n})=(x^{n}_{l},x^{n}_{r})\). In the remaining sections, we first introduce the sampled belief model used to represent the ground truth \(f(x)\). Then, we discuss two probabilistic approaches that are used to model the uncertainty in function observations and the location of \(x^{*}\). #### 2.2.1 Sampled Belief Model. To address the question of how to estimate the true function appropriately, one approach is to use a finite set of parametric, unimodal functions (see figure 1 for a family of gamma distributions as an example). Suppose the ground truth can be parameterized as \(f(x|\theta)\). Let \(F_{\Theta}=\{f_{k}=f(x|\theta_{k}):\theta_{k}\in\Theta,\forall\ 1\leq k\leq K\}\) be a parametric family of unimodal functions that are used to approximate \(f(x|\theta)\). Also define \(p_{k}^{n}\) to be the probability that \(f_{k}\) is the best representation of \(f\) at iteration \(n\): \[p_{k}^{n}\coloneqq\mathbb{P}[\theta=\theta_{k}|\mathcal{D}^{n}]. \tag{2.3}\] We then approximate \(f\) by \(\bar{f}^{n}(x)=\sum_{k=1}^{K}p_{k}^{n}f_{k}(x)\) for all \(x\in\mathcal{X}\). During the experiment, \(\bar{f}\) will become more precise as more data points come in. Initialize \(p_{k}^{0}=\frac{1}{K}\), and implement Bayesian updating to \(\{p_{k}^{n}\}\) after each observation \(\hat{f}(z^{n})\). That is: \[p_{k}^{n+1}=\frac{p(\hat{f}(x)|x=z^{n},\theta_{k})p_{k}^{n}}{\sum_{k=1}^{K}p( \hat{f}(x)|x=z^{n},\theta_{k})p_{k}^{n}}\ \ \forall 1\leq k\leq K, \tag{2.4}\] where \(p(\hat{f}(x)|x=z^{n},\theta_{k})\) is the density function of a normally distributed random variable \(\hat{f}(z^{n})\sim\mathcal{N}(f_{k}(z^{n}),\sigma^{2})\). Figure 1: A family of gamma functions. #### 2.2.2 Probability of Correct Region Assignment. Since the underlying function is unimodal, two function evaluations can provide us information about the location of \(x^{*}\). Nevertheless, the noise in observations distorts this piece of information, so it is necessary to model the uncertainty in comparing two observations. Given any two points \(x,y\) and their corresponding observations of \(f\): \(\hat{f}(x),\hat{f}(y)\), there are two possible outcomes: either \(\hat{f}(x)>\hat{f}(y)\) or \(\hat{f}(x)\leq\hat{f}(y)\). Similarly, the true function values evaluated at these two points can also be split into two cases: \(f(x)>f(y)\) or \(f(x)\leq f(y)\). 
Let \(g(x,y)\) denote the probability that we are able to observe the true comparative relation between \(f(x)\) and \(f(y)\), i.e. if \(f(x)>f(y)\), the probability of observing \(\hat{f}(x)>\hat{f}(y)\) is \(g(x,y)\); if \(f(x)\leq f(y)\), then the probability of observing \(\hat{f}(x)\leq\hat{f}(y)\) is \(g(x,y)\). Mathematically for any \(x,y\in\mathcal{X}\), \[g(x,y)\coloneqq\begin{cases}\mathbb{P}(\hat{f}(x)>\hat{f}(y))&\quad\text{if }f(x)>f(y)\\ \mathbb{P}(\hat{f}(x)\leq\hat{f}(y))&\quad\text{if }f(x)\leq f(y).\end{cases} \tag{2.5}\] The following truth table summarizes the relationship between the observed ordering of \(\hat{f}(x),\hat{f}(y)\) and \(g(x,y)\): \[\begin{array}{c|c|c} & \hat{f}(x)>\hat{f}(y)\text{ observed} & \hat{f}(x)\leq\hat{f}(y)\text{ observed}\\ \hline f(x)>f(y) & g(x,y) & 1-g(x,y)\\ f(x)\leq f(y) & 1-g(x,y) & g(x,y)\end{array}\] 
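The sketch below illustrates how the sampled belief weights in (2.4) and the probability of correct region assignment in (2.5) can be computed for a finite family of candidate curves: for a single curve \(f_{k}\), \(\hat{f}(x)-\hat{f}(y)\sim\mathcal{N}(f_{k}(x)-f_{k}(y),2\sigma^{2})\), so the probability of observing the true ordering is \(\Phi(|f_{k}(x)-f_{k}(y)|/(\sqrt{2}\sigma))\), and the belief-averaged version anticipates the definition of \(g^{n}\) in Section 3.2. The candidate family and numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

sigma = 0.1
beliefs = [lambda x, s=s: x ** (s - 1.0) * np.exp(-x)   # candidate curves f_1..f_K
           for s in (2.0, 3.0, 4.0, 5.0)]
weights = np.full(len(beliefs), 1.0 / len(beliefs))     # p_k^0 = 1/K

def update_weights(w, z, f_hat):
    # (2.4): p_k^{n+1} is proportional to N(f_hat; f_k(z), sigma^2) * p_k^n
    lik = np.array([norm.pdf(f_hat, loc=fk(z), scale=sigma) for fk in beliefs])
    post = lik * w
    return post / post.sum()

def g_k(fk, x, y):
    # probability that the observed ordering of (x, y) matches the true ordering under f_k
    return norm.cdf(abs(fk(x) - fk(y)) / (np.sqrt(2.0) * sigma))

def g_mixture(w, x, y):
    # belief-averaged probability of correct region assignment (the role played by g^n later)
    return sum(wk * g_k(fk, x, y) for wk, fk in zip(w, beliefs))

weights = update_weights(weights, z=2.0, f_hat=0.55)
print(weights, g_mixture(weights, 0.5, 3.5))
```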
At each iteration \(n\), we are trying to approximate \(p_{X^{*}}\) by \(P^{n}\) given observations. Equation (2.8) indicates that at iteration \(n\), after we have made the decision \(x^{n}\) and have the two function evaluations we need \(\hat{f}(x^{n})=(\hat{f}(h^{n}),\hat{f}(z^{n}))\), \(P^{n}\) can be updated to \(P^{n+1}\) by Bayes' theorem. In the following, we present the updating rule in detail. First we define an auxiliary variable \(\hat{y}^{n+1}\) as: \[\hat{y}^{n+1}=\mathds{1}_{\left\{\hat{f}(x^{n}_{l})\leq\hat{f}(x^{n}_{r})\right\}}. \tag{2.9}\] Note that \(\hat{y}^{n+1}\) is a function of both the decision \(x^{n}\) and the randomness \(W^{n+1}\), indicating the comparative relation between the two observations that are used to update our belief. Given the definition of \(\hat{y}^{n+1}\), we can rewrite (2.8) as: \[P^{n+1}(x)dx=\mathbb{P}(X^{*}\in dx|S^{n},x^{n},\hat{y}^{n+1})=\frac{\mathbb{P}(\hat{y}^{n+1},X^{*}\in dx|S^{n},x^{n})}{\mathbb{P}(\hat{y}^{n+1}|S^{n},x^{n})}=\frac{\mathbb{P}(\hat{y}^{n+1}|X^{*}\in dx,S^{n},x^{n})P^{n}(x)}{\int_{\mathcal{X}}\mathbb{P}(\hat{y}^{n+1}|X^{*}\in dx,S^{n},x^{n})P^{n}(x)dx}dx. \tag{2.10}\] Equation (2.10) is a general formula for calculating the posterior \(P^{n+1}\) given a prior \(P^{n}\). With the help of the sampled belief model and the probability of correct region assignment, (2.10) has an analytic expression (please see appendix for detailed calculations). The exact updating formula can be found in section 3.2. ## 3 Algorithm and Policy With the models introduced in Section 2, it is possible to solve problem (2.2) with a well-defined algorithm. We first formalize the SBES algorithm in the beginning of this section. Then, we present all the necessary components, such as the updating rules for all state variables, of the algorithm with the following steps: begin with defining the five basic elements of the sequential decision problem (2.2); then design a one-step lookahead policy \(\pi\) for solving this sequential decision problem. We then show that the policy itself is generated by a surrogate optimization problem. ### 3.1 Algorithm. Section 2 gives a brief overview of the SBES algorithm and all the probabilistic models we use to quantify the uncertainty in our problem without specifying how we make the decision \(x^{n}\). We now describe the full algorithm along with the surrogate objective function. Inspired by Fibonacci search, our algorithm starts with a pair of initial points \(x^{0}=(x^{0}_{l},x^{0}_{r})\) and picks a new point in the search region, denoted as \(z^{n}\) at iteration \(n\geq 1\), to obtain a new function evaluation \(\hat{f}(z^{n})\). This new observation is compared with one of the previous observations to update the belief \(P^{n}\). This means that besides \(z^{n}\), we also pick a historical point \(h^{n}\in H^{n}\) and use the corresponding function evaluation \(\hat{f}(h^{n})\in\hat{f}^{n}\) directly. 
These two points are chosen to minimize the expected entropy of our belief \(P^{n+1}\) in the next step. See algorithm 1 for details. ``` 0: draw an initial set of samples \(x_{0}=(x^{0}_{l},x^{0}_{r})\) in \(\mathcal{X}\); pick an initial state \(S^{0}=\left(P^{0},H^{0},\hat{f}^{0},g^{0},\hat{g}^{0},\left\{p^{0}_{k}\right\} _{k=1}^{K}\right)\). 1:for\(n=1\)to\(N\)do 2:ifn=1then 3: Evaluate \(x^{0}\) and obtain \(\hat{f}(x^{0})=(\hat{f}(x^{0}_{l}),\hat{f}(x^{0}_{r}))\). 4:else 5: Evaluate \(z^{n-1}\) and observe \(\hat{f}(z^{n-1})\); obtain \(\hat{f}(h^{n-1})\) from \(\hat{f}^{n}\). 6:endif 7: Update \(S^{n-1}\) to \(S^{n}\) according to equations (18)-(26). 8: Sample \(m\) points from \(P^{n}\) to form the set \(\mathcal{A}^{n}\). Pick \(x^{n}=(h^{n},z^{n})=argmin_{h\in H^{n},z\in\mathcal{A}^{n}}\,\nu^{n}(h,z)\) defined in (3.9). 9:endfor Return\(\bar{x}^{N}=argmax_{x\in\mathcal{X}}\,P^{N}(x)\). ``` **Algorithm 1** Sampled Belief Entropy Search ### The Sequential Learning Process. Our algorithm is a form of sequential decision problem, which can be described by five core elements (see (Powell, 2022)): state variables, decision variables, exogenous information, transition function and objective function. We describe these below: **State Variables**: \(S^{n}=\left(P^{n},H^{n},\hat{f}^{n},g^{n},\bar{g}^{n},\{p^{n}_{k}\}_{k=1}^{K}\right)\). At iteration \(n\), the state variable \(S^{n}\) consists of two parts. The first part is our belief about some important quantities in our sequential decision problem. This includes: the belief \(P^{n}\) of the location of the optimum \(x^{*}\) given the first n observations, the belief about the probabilities of correct region assignment \((g^{n},\bar{g}^{n})\), and the belief about how "close" each sample curve is to the underlying function \(\{p^{n}_{k}\}_{k=1}^{K}\). The second part of the state variables involves the history of our experiments: \(H^{n}\) denotes the set of points we have chosen up to iteration \(n\) and \(\hat{f}^{n}\) is the corresponding observations of the function value at those points. This is all the information we need to make a decision at iteration \(n\). **Decision Variables**: \(x^{n}=(h^{n},z^{n})\). Due to the logic that we only choose one new point for function evaluation and the other function evaluation comes from the history, the decision variable has two components. It is straightforward that the first decision variable is \(z^{n}\in\mathcal{X}\), the point we pick for function evaluation. The second decision is the observation \(h^{n}\in H^{n}\) we use from history. After \(n\) observations, we make the decision of the next point to observe using our policy \(x^{n}=X^{\pi}(S^{n})\) which depends on the information in \(S^{n}\). **Exogenous Information**: \(W^{n+1}\). The variable \(W^{n+1}\) is the new information that is observed from the \(n+1^{\text{st}}\) function evaluation which means we can write our sequential decision process as: \((S^{0},x^{0},W^{1},S^{1},x^{1},...,\)\(S^{N-1},x^{N-1},W^{N},S^{N},x^{N})\). After being in a state \(S^{n}\) and choosing an action \(x^{n}\), we will observe a realization of the random variable \(W^{n+1}\) coming outside of the system. Since the noise in a single observation follows \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\) and when \(n=0\) we have two new observations, \(W^{1}\) is a two-dimensional Gaussian variable such that \(W^{1}\sim\mathcal{N}(0,\sigma^{2}I)\). When \(n\geq 1\), \(W^{n+1}\) is the randomness in evaluating \(\hat{f}(z^{n})\). 
So \(W^{n+1}\sim\mathcal{N}(0,\sigma^{2})\). **Transition Function**: \(S^{n+1}=S^{M}(S^{n},x^{n},W^{n+1})\). The transition function describes how a state \(S^{n}\) evolves to \(S^{n+1}\) given the decision \(x^{n}\) and the exogenous information \(W^{n+1}\). At iteration \(n\), the state variables in this problem is \(S^{n}=(P^{n},H^{n},h^{n},g^{n},\tilde{g}^{n},\{p_{k}^{n}\}_{k=1}^{K})\). This means that the transition function consists of a series of equations for updating each state variable, as follows: 1. \(P^{n}\) **to**\(P^{n+1}\): following the arguments in section 2.2.3 and letting \(F^{n}\) be the CDF of \(P^{n}\), (2.10) takes the forms: **case 1:** if \(\hat{y}^{n+1}=1\), \[P^{n+1}(x)=\begin{cases}\frac{1-g^{n}(x_{l}^{n},x_{r}^{n})}{U_{1}^{n}(x_{l}^{n},x_ {r}^{n})}P^{n}(x)&\quad\text{if $x\leq x_{l}^{n}$},\\ \frac{1-\bar{g}^{n}(x_{l}^{n},x_{r}^{n})}{U_{1}^{n}(x_{l}^{n},x_{r}^{n})}P^{n} (x)&\quad\text{if $x_{l}^{n}<x<x_{r}^{n}$},\\ \frac{g^{n}(x_{l}^{n},x_{r}^{n})}{U_{1}^{n}(x_{l}^{n},x_{r}^{n})}P^{n}(x)& \quad\text{if $x_{r}^{n}\leq x$}.\end{cases} \tag{3.1}\] \[U_{1}^{n}(x_{l}^{n},x_{r}^{n})\coloneqq (1-g(x_{l}^{n},x_{r}^{n}))F^{n}(x_{l}^{n})+(1-\bar{g}(x_{l}^{n},x _{r}^{n}))(F^{n}(x_{r}^{n})-F^{n}(x_{l}^{n}))\] \[+g(x_{l}^{n},x_{r}^{n})(1-F^{n}(x_{r}^{n})). \tag{3.2}\] **case 2:** if \(\hat{y}^{n+1}=0\), \[P^{n+1}(x)=\begin{cases}\frac{g^{n}(x_{l}^{n},x_{r}^{n})}{U_{0}^{n}(x_{l}^{n},x_{r}^{n})}P^{n}(x)&\quad\text{if $x\leq x_{l}^{n}$},\\ \frac{\bar{g}^{n}(x_{l}^{n},x_{r}^{n})}{U_{0}^{n}(x_{l}^{n},x_{r}^{n})}\ P^{n} (x)&\quad\text{if $x_{l}^{n}<x<x_{r}^{n}$},\\ \frac{1-g^{n}(x_{l}^{n},x_{r}^{n})}{U_{0}^{n}(x_{l}^{n},x_{r}^{n})}\ P^{n}(x)& \quad\text{if $x_{r}^{n}\leq x$}.\end{cases} \tag{3.3}\] \[U_{0}^{n}(x_{l}^{n},x_{r}^{n})\coloneqq g(x_{l}^{n},x_{r}^{n})F^{n}(x_{l}^{n})+\bar{g}(x_{l}^{n},x_{r}^{n})(F^{n} (x_{r}^{n})-F^{n}(x_{l}^{n}))\] \[+(1-g(x_{l}^{n},x_{r}^{n}))(1-F^{n}(x_{r}^{n})). \tag{3.4}\] 2. \(H^{n},h^{n}\) **to**\(H^{n+1},h^{n+1}\): \[H^{n+1}=H^{n}\cup z^{n}.\] \[h^{n+1}=h^{n}\cup\hat{f}(z^{n}).\] 3. \(g^{n},\bar{g}^{n}\) **to**\(g^{n+1},\bar{g}^{n+1}\): \(\forall x,y\in\mathcal{X}\), \[g^{n+1}(x,y) =\sum_{k}^{K}p_{k}^{n}\cdot g_{k}(x,y).\] \[\bar{g}^{n+1}(x,y) =\sum_{k=1}^{K}\mathbb{P}(\theta_{k}|x<x^{*}<y)\left(1-\Phi(-\frac {f_{k}(x)-f_{k}(y)}{\sqrt{2}\sigma})\right).\] where, \[g_{k}(x,y)=\mathds{1}_{\left\{f_{k}(x)\geq f_{k}(y)\right\}}\left(1-\Phi(-\frac{f _{k}(x)-f_{k}(y)}{\sqrt{2}\sigma})\right)+\mathds{1}_{\left\{f_{k}(x)<f_{k}(y) \right\}}\Phi(-\frac{f_{k}(x)-f_{k}(y)}{\sqrt{2}\sigma}).\] \[\bar{\Theta}\coloneqq\left\{\theta_{k}:x_{k}^{*}\in(x,y),\ \forall 1\leq k \leq K\right\}.\] \[\mathbb{P}(\theta_{k}|x<x^{*}<y)=\left\{\begin{array}{cl}0&\text{ if }\theta_{k}\notin\bar{\Theta},\\ \frac{p_{k}^{n}}{\sum_{\theta_{k}\in\bar{\Theta}}p_{k}^{n}}&\text{if }\theta_{k} \in\bar{\Theta}.\end{array}\right.\] 4. \(\left\{p_{k}^{n}\right\}_{k=1}^{K}\) **to \(\left\{p_{k}^{n+1}\right\}_{k=1}^{K}\)**: please refer to (2.4). **Objective Function.** The objective of this problem is to find a policy \(\pi^{*}\) that solves: \[\max_{\pi}\mathbb{E}\left\{F(x^{\pi,N},\hat{W})|S^{0}\right\}.\] ### Policy in the SBES Algorithm. We now present the policy for choosing the point \(x^{n}\). We first describe the logic behind Entropy Search (ES) and the surrogate optimization problem that optimizes the expected entropy reduction, followed by a discussion of how to solve this surrogate optimization problem when \(\mathcal{X}\) is a finite set or a compact interval. 
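A discretized sketch of the posterior update in (3.1)-(3.4) is given below: the belief over \(X^{*}\) is reweighted on the three segments cut by \((x^{n}_{l},x^{n}_{r})\), with the direction of the shift determined by \(\hat{y}^{n+1}\). The probabilities g and g_bar are assumed to be precomputed scalars for the chosen pair, and renormalization plays the role of \(U^{n}_{1}\) and \(U^{n}_{0}\).

```python
import numpy as np

def update_posterior(grid, P, x_l, x_r, y_hat, g, g_bar):
    # (3.1)-(3.4): reweight the belief over X* on the three segments cut by (x_l, x_r)
    left, right = grid <= x_l, grid >= x_r
    mid = ~left & ~right
    w = np.empty_like(P)
    if y_hat == 1:     # observed f_hat(x_l) <= f_hat(x_r): mass shifts to the right
        w[left], w[mid], w[right] = 1.0 - g, 1.0 - g_bar, g
    else:              # observed f_hat(x_l) > f_hat(x_r): mass shifts to the left
        w[left], w[mid], w[right] = g, g_bar, 1.0 - g
    P_new = w * P
    return P_new / P_new.sum()   # normalization corresponds to dividing by U_1^n or U_0^n

grid = np.linspace(0.0, 6.0, 601)
P = np.full(grid.size, 1.0 / grid.size)                     # uniform initial belief
P = update_posterior(grid, P, x_l=1.0, x_r=3.0, y_hat=1, g=0.8, g_bar=0.6)
print(grid[np.argmax(P)])                                   # the belief now favors x >= 3
```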
#### 3.3.1 One-step Lookahead Entropy Reduction. Recall that in the deterministic case, the metric we use for determining the distance between the true optimum \(x^{*}\) and our estimate of the optimum \(x^{N}\) is the \(\mathbb{L}^{1}\) norm: \(|x^{N}-x^{*}|\). In the stochastic setting, we want a similar "distance" metric on how close our estimated density function \(P^{n}\) is to the true density of \(X^{*}\): \(p_{X^{*}}\). However, without knowing the exact location of \(x^{*}\), there is no way of comparing \(P^{n}\) and \(p_{X^{*}}\). To address this problem, Hennig and Schuler (2012) suggest using the Kullback-Leibler(KL) divergence, also called _relative entropy_, from the uniform measure \(U_{\mathcal{X}}\) to the posterior \(P^{n}\) as a mean for assessing the current information about \(x^{*}\). Denote the differential entropy of any density function \(p\) on \(X^{*}\) by \[H(p)=-\int_{\mathcal{X}}p(x)log_{2}(p(x))dx. \tag{3.5}\] The relative entropy (KL-divergence) from \(q\) to \(p\) is defined as: \[D_{KL}(p\parallel q) =\int_{\mathcal{X}}p(x)log_{2}(\frac{p(x)}{q(x)})dx\] \[=-\int_{\mathcal{X}}p(x)log_{2}(q(x))dx-H(p).\] Relative entropy is a measure of how "close" two distributions are. \(D_{KL}(p\parallel q)=0\) if and only if \(p(x)=q(x)\) almost everywhere. On the other hand, the relative entropy from the uniform distribution \(U_{\mathcal{X}}\) to the dirac delta function \(\delta(x-x^{*})\) is defined to be \(\infty\). Hence, a larger relative entropy from the uniform measure to \(P^{n}\) implies greater dissimilarity between them, meaning we have more information about the location of \(x^{*}\). This inspires us to use the expected relative-entropy maximization as the value function of the surrogate problem. With relative entropy as the metric, we can now define the contribution functions \(\tilde{C}_{i}\) for our surrogate problem. Recall that for any \(1\leq n\leq N\), the value function of a state \(s\) is \(\tilde{V}_{n}^{\pi}(s)=\mathbb{E}[\sum_{i=n}^{N}\tilde{C}_{i}(S^{i},X^{\pi}(S^ {i}),W^{i+1})|S^{n}=s,x^{n}=X^{\pi}(S^{n})]\). For any \(n\leq i\leq N\) and the state variable \(S^{i}\), define \[\tilde{C}_{i}(S^{i},X^{\pi}(S^{i}),W^{i+1})=\left\{\begin{aligned} D _{KL}(P^{i+1}\parallel U_{\mathcal{X}})-D_{KL}(P^{i}\parallel U_{\mathcal{X}} )&\quad\text{if }n\leq i\leq N-1,\\ 0&\quad\text{if }i=N.\end{aligned}\right. \tag{3.6}\] which is the increment in relative entropy between iteration \(i+1\) and \(i\). Note that the data stream of this sequential decision process is \((S^{0},x^{0},W^{1},S^{1},x^{1},W^{2},...,S^{N},x^{N})\). So with the transition function defined in section 3.2, we are able to calculate \(P^{n+1}\) given \((S^{n},X^{\pi}(S^{n}),W^{n+1})\). Then the surrogate value of a state \(S^{n}\) at iteration \(n\leq N\) is: \[\tilde{V}_{n}(S^{n}) =\mathbb{E}[\sum_{i=n}^{N}\tilde{C}_{i}(S^{i},X^{\pi}(S^{i}),W^{i +1})|S^{n},x^{n}=(x_{h},z)]\] \[=\mathbb{E}[D_{KL}(P^{N}\parallel U_{\mathcal{X}})-D_{KL}(P^{n} \parallel U_{\mathcal{X}})|S^{n},x^{n}=(x_{h},z)]. \tag{3.7}\] At each iteration \(n\), we choose to optimize the one-step lookahead value. 
Equivalently, set \(\tilde{V}_{n+1}(S^{n+1})=0\) and obtain the one-step lookahead SBES policy: \[X^{SBES}(S^{n}) =\operatorname*{argmax}_{h\in\bar{H}^{n},z\in\mathcal{X}}\tilde{V}_ {n}(S^{n})\] \[=\operatorname*{argmax}_{h\in H^{n},z\in\mathcal{X}}\mathbb{E}[ \tilde{C}_{n}(S^{n},x^{n},W^{n+1})|S^{n},x^{n}=(h,z)]\] \[=\operatorname*{argmax}_{h\in H^{n},z\in\mathcal{X}}\mathbb{E}[D_{ KL}(P^{n+1}\parallel U_{\mathcal{X}})-D_{KL}(P^{n}\parallel U_{\mathcal{X}}))|S^{n},x^{n }=(h,z)]\] \[=\operatorname*{argmin}_{h\in H^{n},z\in\mathcal{X}}\mathbb{E}[H (P^{n+1})-H(P^{n})|S^{n},x^{n}=(h,z)]\coloneqq\nu^{n}(h,z). \tag{3.8}\] The objective function \(\nu^{n}\) in equation (3.8) of the SBES algorithm is the same as the ES acquisition function despite the difference in sign. Another important merit of SBES is that with the probabilistic models stated in section 2.2, this objective function now has an analytical expression. Letting \((x_{l},x_{r})\) be the standard notation for indicating relative location of \((h,z)\), \(\nu^{n}\) can be expressed as follows: \[\nu^{n}(h,z) =\mathbb{E}[H(P^{n+1})|S^{n},x^{n}=(h,z)]-H(P^{n})\] \[=[g^{n}(x_{l},x_{r})log_{2}(g^{n}(x_{l},x_{r}))+(1-g^{n}(x_{l},x_ {r}))log_{2}(1-g^{n}(x_{l},x_{r}))](F^{n}(x_{r})-F^{n}(x_{l})-1)\] \[\quad-[\bar{g}^{n}(x_{l},x_{r})log_{2}(\bar{g}^{n}(x_{l},x_{r}))+ (1-\bar{g}^{n}(x_{l},x_{r}))log_{2}(1-\bar{g}^{n}(x_{l},x_{r}))](F^{n}(x_{r})- F^{n}(x_{l}))\] \[\quad+U_{1}^{n}(x_{l},x_{r})\log_{2}(U_{1}^{n}(x_{l},x_{r}))+U_{ 0}^{n}(x_{l},x_{r})\log_{2}(U_{0}^{n}(x_{l},x_{r})). \tag{3.9}\] Equation (3.9) is the formula for a single-step lookahead. Yet, equation (3.7) indicates that we are not limited to looking ahead just one-step; we can also perform multi-step lookaheads by maximizing the value function in (3.7). #### 3.3.2 Optimizing the Surrogate Objective. We have derived the formula for the one-step lookahead entropy reduction objective above in equation (3.9). Even though this objective is more straightforward and easier to evaluate than the truth function \(f\), it is still nonconvex in most scenarios. We now discuss how to optimize \(\nu^{n}(h,z)\) when \(\mathcal{X}\) is discrete and continuous respectively. Consider the following two possible structures of \(\mathcal{X}\): 1. \(\mathcal{X}\) **has finitely many elements.** Note that the SBES objective function \(\nu^{n}:H^{n}\bigtimes\mathcal{X}\to\mathbb{R}\) is a two-dimensional real-valued function. Both \(H^{n}\) and \(\mathcal{X}\) are finite in this case, so the complexity of finding the optimum is at most \(\mathcal{O}(|\mathcal{X}|(N+1))\) (remember that \(N\) is small). In other words, it is not hard to optimize \(\nu^{n}\) over the domain even though it is nonconvex. 2. \(\mathcal{X}\) **is a compact interval in \(\mathbb{R}\).** When \(\mathcal{X}\) is a continuous and bounded set, optimizing \(\nu^{n}(h,z)\) is equivalent to solving a deterministic one-dimensional nonconvex optimization problem over the feasible region \(\mathcal{X}\) for \(|H^{n}|\) times. So one solution is to use some well-studied deterministic, derivative-free global optimization methods, such as _DIRECT_ suggested in (Jones et al., 1993), to optimize \(\nu^{n}(h,z)\). In spite of the availability of global optimization methods, we choose to discretize the search region \(\mathcal{X}\) in order to approximate \(\nu^{n}(h,z)\). 
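Before discussing the discretization, the following sketch evaluates the closed-form acquisition (3.9) for one candidate pair, given the posterior CDF values \(F^{n}(x_{l}),F^{n}(x_{r})\) and the probabilities \(g^{n},\bar{g}^{n}\); the numerical inputs are illustrative. A more negative value corresponds to a larger expected entropy reduction, which is why Algorithm 1 takes the argmin.

```python
import numpy as np

def binary_entropy_term(p):
    # p*log2(p) + (1-p)*log2(1-p), with the 0*log(0) = 0 convention
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)

def nu(F_l, F_r, g, g_bar):
    # (3.9): expected change in entropy of the belief about X* for the pair (x_l, x_r)
    seg = F_r - F_l
    U1 = (1.0 - g) * F_l + (1.0 - g_bar) * seg + g * (1.0 - F_r)
    U0 = g * F_l + g_bar * seg + (1.0 - g) * (1.0 - F_r)
    return (binary_entropy_term(g) * (seg - 1.0)
            - binary_entropy_term(g_bar) * seg
            + U1 * np.log2(U1) + U0 * np.log2(U0))

print(nu(F_l=0.3, F_r=0.7, g=0.8, g_bar=0.6))   # negative: measuring this pair is informative
```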
To better facilitate the approximation, one recommended way of discretizing \(\mathcal{X}\) is to draw \(m\) samples from the posterior distribution \(P^{n}\), as adopted by Hernandez-Lobato et al. (2014), Shah and Ghahramani (2015), and Wu and Frazier (2016). In general, the benefit of implementing this discretization method is it encourages exploration at the early stages of experimentation since we have less knowledge about \(x^{*}\) and thus \(P^{n}\) is more uniform over the entire search region. It also features exploitation later in the search as a more concentrated density \(P^{n}\) will produce a collection of samples close to the estimated optimum. As a result, there are more chances at later iterations to check the local information around the estimated optimum. With the sampled belief model, another advantage of discretizing \(\mathcal{X}\) based on \(P^{n}\) is that \(P^{n}\) is analytical and ready-for-use at iteration \(n\). This avoids the relatively complicated sampling procedure involving a linear approximation of the GP prior using a feature map that Hernandez-Lobato et al. (2014) introduces for optimizing the acquisition function (1.3) of PES. In the design of numerical experiments, the SBES algorithm generates \(m\) points, denoted as \(\mathcal{A}^{n}\), according to the up-to-date posterior distribution \(P^{n}\) at every iteration and solve the following version of (3.8): \[X^{SBES}(S^{n})=\underset{h\in H^{n},z\in\mathcal{A}^{n}}{\operatorname{ argmin}}\nu^{n}(h,z).\] (3.10) Based on the discussion above, we argue that the SBES algorithm provides substantial simplification to implementation compared against other entropy-search based algorithms, in attribution to the fact that there is no extra step needed to evaluate the surrogate objective function 3.10. In the case of discretization, it is also easier and more computationally tractable for SBES to sample the posterior distribution \(P^{n}\) because the posterior distribution \(P^{n}\) has numerical expressions. Consequentially, SBES is expected to improve on the computational time spent on optimizing the surrogate objective function. ## 4 Theory In this section, we provide an error bound for the final estimate \(\bar{x}^{N}\), produced by the SBES algorithm, given that the truth function is in the set of sampled belief functions. When noise is present (i.e. \(\sigma>0\)), under the assumption that the sampled belief curves contains the underlying truth \(f\), it is shown that the SBES algorithm is able to reduce the expected entropy of the posterior \(P^{n}\), i.e. \(\mathbb{E}[H(P^{n+1})|x^{n}=(x,y),P^{n}]-H(P^{n})\leq 0\). A single-period lower bound on the expected entropy reduction that is dependent on the function we are trying to learn is presented as well. For the following discussion, we use \(x,y\in\mathcal{X}\) to refer to any two points in the search region \(\mathcal{X}\). When we want to emphasize the comparative location of \(x,y\), we use \(x_{l}\) and \(x_{r}\) where \(x_{l}=\min\{x,y\}\), \(x_{r}=\max\{x,y\}\). Before the formal proofs, we want to introduce two important assumptions, and then introduce the notations we use throughout the section. **Assumption 1**.: Suppose the underlying function \(f\) is parameterized by \(\theta\) and the true parameter is \(\theta^{*}\), i.e. \(f(x)=f(x|\theta^{*})\). Then we assume that the set of belief parameters contains the true one : \(\theta^{*}\in\Theta\coloneqq\{\theta_{k}\}_{k=1}^{K}\) (\(|\Theta|=K\)). 
Throughout the theory section, we suppose \(\theta\) follows a distribution \(p^{n}_{\theta}\) at iteration \(n\) where \(\theta_{k}\in\Theta\). **Assumption 2**.: Let \(p^{n}_{\theta}=\{p^{n}_{k}\}_{k=1}^{K}\), where \(p^{n}_{k}\) is defined in equation (2.3), denote the posterior distribution of \(\theta\) at iteration \(n\) after observing the data \(\mathcal{D}^{n}=(H^{n},\hat{f}^{n})\). Also denote the true distribution of \(\theta\) to be \(p^{*}_{\theta}\coloneqq\{\mathds{1}_{\{\theta_{k}=\theta^{*}\}}\}_{k=1}^{K}\).We assume that the prior distribution on \(\theta\) satisfies \(p^{0}_{k}>0\). Recall that \(\hat{y}^{n+1}\coloneqq\mathds{1}_{\{\hat{f}(x^{n}_{l})\leq\hat{f}(x^{n}_{l})\}}\). At any state \(S^{n}\), the tuple \((x^{n}_{l},x^{n}_{r})\) is determined by a policy \(\pi\) that picks two points \(X^{\pi}(S^{n})=(x^{n}_{h},z^{n})\). The binary variable \(\hat{y}^{n+1}\) is then a function of \((\hat{f}(x^{n}_{h}),f_{\theta}(z^{n}),W^{n+1})\). Note that \(f_{\theta}(z^{n})\) is the noiseless function value evaluated at \(z^{n}\) given that the true parameter is \(\theta\). In other words, given a decision \(X^{\pi}(S^{n})\) the distribution of is determined jointly by the distribution of \(\theta\) and \(W^{n+1}\). In the following sections, we use \(\hat{y}^{n+1}(X^{\pi}(S^{n}))\) to indicate the binary random variable \(\hat{y}^{n+1}\) under a policy \(\pi\) at the state \(S^{n}\) whenever we want to emphasize the relation between \(\hat{y}^{n+1}\) and \(\pi\). When the policy \(\pi\) is not clearly defined, we use \(\hat{y}^{n+1}(x^{n})\), in which \(x^{n}\) is some decision made at iteration \(n\). To produce a good estimate of the location of the maximizer \(x^{*}\), the learning algorithm should collect as much information as possible throughout \(N\) experiments. The information gain about \(x^{*}\) is quantified in the mutual information between the random variable \(X^{*}\) and the \(N\) random observations: \(I(X^{*};\{\hat{y}^{n}_{\pi}\}_{n=1}^{N}|S^{n})\). In the following, we investigate the difference between the information gain of the SBES decision and the information gain of the conceptually optimal policy \(\pi^{*}\), given the underlying true parameter \(\theta^{*}\). This section starts with specifying the difference between predictive mutual information and the perfect mutual information, as well as the general assumptions we make. Then, we proceed to prove an error bound on the one-step perfect mutual information generated by the SBES policy. It is shown that this error bound is tied to the quality of our estimation of the true parameter \(\theta^{*}\). **Assumption 3** (Distinct Belief Optimizers).: All of the sampled belief functions have distinct optimizers. Define \(x^{*}_{k}\coloneqq\text{argmax }f(x|\theta_{k})\). Then \(x^{*}_{i}\neq x^{*}_{j},\ \forall i\neq j\in\{1,...,K\}\). **Assumption 4** (Existence of One-to-one Mapping).: Let a general probability space of \(\theta\) be \((\Theta,\mathcal{F},\mathbb{P}_{\theta})\). It is assumed that there exists a quantizer \(q:\mathcal{X}\rightarrow\{1,...,K\}\) and a partition \(\mathcal{P}_{q}\coloneqq\{\mathcal{X}_{1},...,\mathcal{X}_{K}\}\) associated with \(q\) such that \(x^{*}_{k}\in\mathcal{X}_{k},\ \forall k=1,...,K\). Let \(\sigma(\mathcal{P}_{q})\) be the sigma algebra generated by \(\mathcal{P}_{q}\). Then the random variable \(X^{*}:\Theta\rightarrow\mathcal{P}_{q}\) is well-defined with the induced probability space \((\mathcal{P}_{q},\sigma(\mathcal{P}_{q}),\mathbb{P}_{X^{*}})\). 
**Remark**.: _Assumption 4 is saying that there exists a one-to-one mapping between the event \(\{\theta=\theta_{k}\}\) and \(\{X^{*}\in\mathcal{X}_{k}\},\ \forall k=1,...,K\). In other words, knowing any of the events directly implies the other event._ **Definition 4.1** (Predictive Mutual Information).: Let \(p(\hat{y}^{n+1}|\theta)\) denote the distribution of \(\hat{y}^{n+1}\) parameterized by \(\theta\). The predictive mutual information \(\hat{I}\) at a state \(S^{n}\) is defined as: \[\hat{I}(X^{*};\hat{y}^{n+1}|S^{n}) \coloneqq H(p(\hat{y}^{n+1}|S^{n}))-\mathbb{E}_{X^{*}|S^{n}}[H(p( \hat{y}^{n+1}|S^{n},X^{*}))]\] \[\coloneqq H(p(X^{*}|S^{n}))-\mathbb{E}_{\hat{y}^{n+1}|S^{n}\sim p( \hat{y}^{n+1}|\theta)}[H(p(X^{*}|S^{n},\hat{y}^{n+1}))]\] \[=H(p(X^{*}|S^{n}))-\mathbb{E}_{\theta}[\mathbb{E}_{\hat{y}^{n+1}| S^{n},\theta}[H(p(X^{*}|S^{n},\hat{y}^{n+1}))]]. \tag{4.1}\] Note that \(S^{n}\) encodes the posterior distribution of \(\theta\) given all the information up to iteration \(n\), i.e. \(p_{\theta}^{n}\). This implies that as we update the posterior distribution of \(\theta\), the distribution of \(\hat{y}\): \(p(\hat{y}^{n+1}|\theta)\) also changes. **Definition 4.2** (Perfect Mutual Information).: If we were given perfect information at the state \(S^{n}\), i.e. \(\theta=\theta^{*}\) with probability 1, then we know the true distribution of \(\hat{y}^{n+1}\): \(p(\tilde{y}^{n+1}|\theta^{*})\) as well as \(p(X^{*}|\theta^{*})\). Define the perfect mutual information as: \[I^{*}(X^{*};\hat{y}^{n+1}|S^{n}) =H(p(X^{*}|S^{n}))-\mathbb{E}_{\hat{y}^{n+1}|S^{n}\sim p(\hat{y}^ {n+1}|\theta^{*})}[H(p(X^{*}|S^{n},\hat{y}^{n+1}))]\] \[=H(p(\hat{y}^{n+1}|S^{n}))-\mathbb{E}_{X^{*}|S^{n},\theta^{*}}[H( p(\hat{y}^{n+1}|S^{n},X^{*}))]. \tag{4.2}\] **Lemma 4.1**.: _For \(\hat{y}^{n+1}(X^{\pi}(S^{n}))\) produced by any policy \(\pi\), its predictive mutual information is no greater than the predictive mutual information of \(\hat{y}^{n+1}(X^{SBES}(S^{n}))\):_ \[\hat{I}(X^{*};\hat{y}^{n+1}(X^{\pi}(S^{n}))|S^{n})\leq\hat{I}(X^{*};\hat{y}^{n +1}(X^{SBES}(S^{n}))|S^{n}). \tag{4.3}\] Proof.: Proof. Recall that the decision of SBES \(X^{SBES}(S^{n})\) is determined by maximizing the predictive entropy reduction of the posterior distribution of \(X^{*}\) with the pair \((x_{h},z)\in H^{n}\times\mathcal{X}\) under the parameter \(\theta\): \[X^{SBES}(S^{n}) =\underset{x_{h}\in H^{n},z\in\mathcal{X}}{\operatorname{argmin}} \mathbb{E}_{\theta,W^{n+1}}[H(P^{n+1})-H(P^{n})|S^{n},x^{n}=(x_{h},z)]\] \[=\underset{x_{h}\in H^{n},z\in\mathcal{X}}{\operatorname{argmax} }\ H(p(X^{*}|S^{n}))-\mathbb{E}_{\hat{y}^{n+1}}[H(p(X^{*}|S^{n},\hat{y}^{n+1 }(x^{n}))]\] \[=\underset{x_{h}\in\tilde{H}^{n},z\in\mathcal{X}}{\operatorname{ argmax}}\ H(p(X^{*}|S^{n}))-\mathbb{E}_{\theta}[\mathbb{E}_{\hat{y}^{n+1}|\theta}[H(p(X^{*}|S^{n },\hat{y}^{n+1}(x^{n})]]\] \[=\underset{\pi}{\operatorname{argmax}}\ \hat{I}(X^{*};\hat{y}^{n+1}(X^{\pi}(S^{n}))|S^{n}). \tag{4.4}\] In the next theorem, we bound the error between the predictive mutual information produced by the SBES algorithm and the maximum perfect mutual information. The idea behind the perfect mutual information is that the quality of the next query point we choose is measured under the true parameter \(\theta^{*}\). Ideally, we should pick a point that maximizes the mutual information between the true optimum \(x^{*}\) and the next observation. 
However, the reality is that we do not know this true parameter \(\theta^{*}\), so we use the predictive mutual information as an estimate of the perfect mutual information, which produces an error. The following theorem provide a bound on this error under the measure of perfect mutual information. **Theorem 4.2**.: _Let \(\pi^{*}\) be the optimal policy that maximizes the perfect mutual information. The error of a single-step information gain between \(\pi^{*}\) and the SBES algorithm is bounded by the KL-divergence from the posterior distribution of \(\theta\) to the point mass measure \(p^{*}_{\theta}\coloneqq\{\mathds{1}_{\{\theta_{k}=\theta^{*}\}}\}_{k=1}^{K}\). That is:_ \[\left|I^{*}(X^{*};\hat{y}^{n+1}(X^{\pi^{*}}(S^{n}))|S^{n})-I^{*}(X^{*};\hat{y}^ {n+1}(X^{SBES}(S^{n}))|S^{n})\right|\] \[\leq 4\mathbb{P}(\theta^{n}\neq\theta^{*})\] \[\leq \mathcal{O}\left(\sqrt{D_{KL}(p^{*}_{\theta}|p^{n}_{\theta})} \right).\] _For the sake of simplicity, we use \(\hat{y}_{\pi^{*}}\) and \(\hat{y}_{SBES}\) to denote \(\hat{y}(X^{\pi^{*}}(S^{n}))\) and \(\hat{y}(X^{SBES}(S^{n}))\) respectively._ Proof.: Proof. \[I^{*}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-I^{*}(X^{*};\hat{y}_{ SBES}^{n+1}|S^{n})\] \[= I^{*}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-\hat{I}(X^{*};\hat{y}_ {\pi^{*}}^{n+1}|S^{n})+\hat{I}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-\hat{I}(X^ {*};\hat{y}_{SBES}^{n+1}|S^{n})\] \[+\hat{I}(X^{*};\hat{y}_{SBES}^{n+1}|S^{n})-I^{*}(X^{*};\hat{y}_{ SBES}^{n+1}|S^{n})\] \[\leq I^{*}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-\hat{I}(X^{*};\hat{y}_ {\pi^{*}}^{n+1}|S^{n})+\hat{I}(X^{*};\hat{y}_{SBES}^{n+1}|S^{n})-I^{*}(X^{*}; \hat{y}_{SBES}^{n+1}|S^{n}). \tag{4.5}\] where the last inequality follows from lemma 4.1. For any policy \(\pi\), we have: \[\left|I^{*}(X^{*};\hat{y}_{\pi}^{n+1}|S^{n})-\hat{I}(X^{*};\hat{y}_{ \pi}^{n+1}|S^{n})\right|\] \[= \left|\mathbb{E}_{X^{*}|S^{n}}[H(p(\hat{y}_{\pi}^{n+1}|S^{n},X^{*} ))]-\mathbb{E}_{X^{*}|S^{n},\theta^{*}}[H(p(\hat{y}_{\pi}^{n+1}|S^{n},X^{*}))]\right|\] \[= \left|\int_{\mathcal{X}}p(X^{*}\in dx|S^{n})H(p(\hat{y}_{\pi}^{n+ 1}|S^{n},X^{*}\in dx))-\int_{\mathcal{X}}p(X^{*}\in dx|S^{n},\theta^{*})H(p( \hat{y}_{\pi}^{n+1}|S^{n},X^{*}\in dx))\right| \tag{4.6}\] \[= \left|\int_{\Theta}p(\theta^{n}\in d\theta|S^{n})H(p(\hat{y}_{ \pi}^{n+1}|S^{n},X^{*}(d\theta)))-H(p(\hat{y}_{\pi}^{n+1}|S^{n},X^{*}(\theta^{ *}))\right|\] (4.7) \[= \left|\sum_{k=1}^{K}p_{k}^{n}H(p(\hat{y}_{\pi}^{n+1}|S^{n},\theta _{k}))-H(p(\hat{y}_{\pi}^{n+1}|S^{n},\theta^{*})\right|\] (4.8) \[= \left|\sum_{k=1}^{K}(p_{k}^{n}-\mathds{1}_{\{\theta_{k}=\theta^{ *}\}})H(p(\hat{y}_{\pi}^{n+1}|S^{n},\theta_{k}))\right|\] \[\leq \left\|p_{\theta}^{n}-p_{\theta}^{*}\right\|_{1}\left\|H(p(\hat{ y}_{\pi}^{n+1}|S^{n},\theta))\right\|_{\infty}\] \[\leq \left\|p_{\theta}^{n}-p_{\theta}^{*}\right\|_{1}=2\mathbb{P}( \theta\neq\theta^{*}). \tag{4.9}\] The equality between (4.6) and (4.7) follows from assumption 4, whereas the replacement of \(X^{*}\) with \(\theta\) in (4.8) occurs because when \(X^{*}\in\mathcal{X}_{k}\), where \(\mathcal{X}_{k}\) is the partition subset which corresponds to \(\theta^{k}\), the conditional distribution of \(\hat{y}^{n+1}\) is the same as the one conditioned on \(\theta^{n}=\theta^{k}\). In other words, \(X^{*}\) influences the distribution of \(\hat{y}^{n+1}\) through its association with the underlying function, which is a piece of information that \(\theta^{n}\) also encodes. 
Inequality (4.9) is immediate from the fact that \(\hat{y}^{n+1}\) is a binary variable whose entropy is at most 1. The proof is finished by Pinsker's inequality: \[\left|I^{*}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-I^{*}(X^{*};\hat{y}_{SBES}^{n+1}|S^{n})\right|\] \[\leq \left|I^{*}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-\hat{I}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})\right|+\left|\hat{I}(X^{*};\hat{y}_{SBES}^{n+1}|S^{n})-I^{*}(X^{*};\hat{y}_{SBES}^{n+1}|S^{n})\right|\] \[\leq 2\left\|p_{\theta}^{n}-p_{\theta}^{*}\right\|_{1}=4\left\|p_{\theta}^{n}-p_{\theta}^{*}\right\|_{TV}\] \[\leq 4\sqrt{2D_{KL}(p_{\theta}^{*}||p_{\theta}^{n})}.\] The take-away from theorem 4.2 is that at every state, the error between the information gain of the SBES algorithm and the information gain of the optimal policy is bounded by the estimation error of \(\theta^{n}\), which is a model-dependent quantity. **Corollary 4.2.1**.: _(Lower Bound on One-step Mutual Information)_ \[\max\left\{I^{*}(X^{*};\hat{y}_{\pi^{*}}^{n+1}|S^{n})-4\sqrt{2D_{KL}(p_{\theta}^{*}||p_{\theta}^{n})},\ 0\ \right\}\leq I^{*}(X^{*};\hat{y}_{SBES}^{n+1}|S^{n}). \tag{4.10}\] Proof.: Given that \(I(X;Y)>0\) if \(X,Y\) are not independent (Duchi, 2019), the statement holds directly from theorem 4.2. ## 5 Numerical Experiments We present the results of two types of experiments. First, the SBES algorithm is tested on maximizing synthetic functions, both parameterized and black-box, in comparison to other well-studied algorithms. In all experiments, we run the SBES algorithm under different noise levels as well as multiple initializations and show that it outperforms the other algorithms, especially at a medium noise level. To further demonstrate the power of SBES, it is employed as the stepsize rule of the Gradient Descent (GD) algorithm, which is applied to finding the optimum of various multidimensional functions. We observe that SBES is more robust to the choice of initialization than prevalent stepsize rules such as RMSProp and AdaGrad. Moreover, GD converges much faster with SBES as a parametric stepsize rule. ### Synthetic Functions Two types of synthetic functions are used to test the performance of the SBES algorithm: unimodal parametric functions (Gaussian, Gamma and Beta) and well-known benchmark functions (McCormick and Ackley) listed in Bingham, Derek and Surjanovic, Sonja (2013). For the first category, the SBES algorithm is provided with the exact parametric family and a set of parameters that includes the true one. In addition, there are two versions of the SBES algorithm. One of them is provided with the true scale of the test function, while the other version, called SCALE-SBES, has to learn the true scale as the experiment proceeds. For the second category of test functions, since we do not know which family the test function belongs to, we pick several parametric families as an approximation of the test function and run the SCALE-SBES algorithm based on the parametric beliefs of our choice. The selected parametric families are specified in table 1. In terms of the benchmark algorithms against which the SBES algorithm competes, we choose the response surface method (RSM), GP-UCB, GP-EI, PES and MES, where the GP kernel's parameters are optimized along the way. Each experimental result is computed with 15 randomly chosen initializations from a Latin hypercube design and 20 realizations of each initialization.
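At each iteration of these experiments, SBES evaluates the acquisition in (4.4) over the discrete set of sampled beliefs. The sketch below illustrates this computation for a single candidate query, assuming that \(\hat{y}\) is a binary outcome whose likelihood under each sampled parameter is supplied by the user; the function and variable names are illustrative and are not taken from our implementation.
```
import numpy as np

def binary_entropy(q):
    """Entropy (in bits) of a Bernoulli(q) variable; at most 1 for a binary outcome."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def expected_entropy_reduction(posterior, y_lik):
    """Predictive mutual information of one candidate query, as in (4.1).

    posterior : shape (K,), current belief p_k^n over the sampled parameters theta_k.
    y_lik     : shape (K,), P(y_hat = 1 | theta_k, candidate); this likelihood model
                is problem-specific and supplied by the user.
    """
    y_lik = np.asarray(y_lik, dtype=float)
    p1 = float(np.dot(posterior, y_lik))            # predictive P(y_hat = 1) under the mixture
    prior_H = binary_entropy(p1)                    # H(p(y_hat | S^n))
    cond_H = float(np.dot(posterior, binary_entropy(y_lik)))  # E_theta[H(p(y_hat | S^n, theta))]
    return prior_H - cond_H

def select_query(posterior, candidate_liks):
    """Pick the candidate pair maximizing the expected entropy reduction, as in (4.4)."""
    gains = [expected_entropy_reduction(posterior, lik) for lik in candidate_liks]
    return int(np.argmax(gains))
```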
Different noise levels are also considered when comparing the performance of the different algorithms. Since the scales of the test functions differ, we make noise levels comparable by using the noise ratio (\(\gamma\)), defined as the ratio between the standard deviation of the observation noise and the maximum difference in underlying function values: \[\gamma\coloneqq\frac{\sigma}{|f_{\max}-f_{\min}|}. \tag{5.1}\] For example, the 2-dimensional Rosenbrock function has \(|f_{\max}-f_{\min}|=1000\), so a noise ratio equal to 0.5 means \(\sigma=500\). Table 1 shows the immediate regret of all algorithms in log scale under different noise levels, where a smaller regret means a more accurate estimate. The results show that SBES is the clear winner of the in-model experiments, with the largest performance gap at a medium noise level. Despite also having to learn the true scale, SCALE-SBES performs similarly to SBES, suggesting that SBES's performance is robust to variation in scale as long as the true parameter is in its sampled belief set. This is significant because, in reality, people usually have limited information about the magnitude of the underlying function aside from which family of parametric functions it resembles. When testing on the black-box benchmark functions (McCormick and Ackley), SBES still beats all of the competing algorithms in the presence of medium noise, as shown in table 1. An interesting observation is that SBES performs better on Ackley than on McCormick, while all the GP-based methods are more successful on McCormick. The difference is that McCormick features a smooth and convex curvature, whereas Ackley is a nonconvex, triangle-shaped function with a sharp and non-differentiable optimum. This implies that SBES remains competent even when the assumptions about the underlying function are relaxed. It is worth mentioning that MES is the best among all the competing algorithms in terms of robustness, especially when the underlying function is smooth. Yet SBES either outperforms or is competitive with MES across all test functions. ### _Stepsize Experiments_ An important application of SBES is to conduct the one-dimensional line search for (stochastic) Gradient Descent, which is an essential algorithm in modern machine learning. Finding the stepsize is typically handled using a parametric stepsize rule such as AdaGrad or RMSProp, which has to be tuned to capture not only the characteristics of the problem, but also the choice of starting point. However, practitioners often find that popular stepsize formulas suffer from bad choices of the starting point of the GD algorithm. An algorithm that is less sensitive to the choice of starting point would represent a major contribution to stochastic gradient descent.
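To make this use concrete, the sketch below shows gradient-based maximization in which each stepsize comes from a one-dimensional search along the gradient direction; the routine `line_search_1d` is a stand-in for SBES applied to the one-dimensional slice, and all names and default values are illustrative assumptions rather than a description of our implementation.
```
import numpy as np

def fd_gradient(f, x, h=1e-2):
    """Central finite-difference estimate of the gradient of a noisy function f."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def ascent_with_line_search(f, x0, line_search_1d, n_iter=10, alpha_max=1.0):
    """Maximize f starting from x0, choosing each stepsize by a 1-D search.

    line_search_1d(phi, a, b) should return an (approximate) maximizer of the
    noisy one-dimensional function phi on [a, b]; SBES would play this role.
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        g = fd_gradient(f, x)
        d = g / (np.linalg.norm(g) + 1e-12)        # unit ascent direction
        phi = lambda a, x=x, d=d: f(x + a * d)     # 1-D slice of f along d
        alpha = line_search_1d(phi, 0.0, alpha_max)
        x = x + alpha * d
    return x
```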
\begin{table} \begin{tabular}{c c|c c c c c c c} \hline \multirow{2}{*}{Test Function} & Noise Level & GP-EI & GP-UCB & RSM & PES & MES & SBES & SCALE-SBES \\ \hline \multirow{4}{*}{Gamma} & low & -2.88 & -2.79 & 1.17 & -0.06 & -3.63 & **-6.33** & -6.28 \\ & mid & -0.31 & -0.25 & 1.16 & 0.02 & -1.72 & **-3.98** & -3.52 \\ & high & -0.07 & 0.01 & -0.25 & 0.15 & **-0.58** & -0.54 & -0.43 \\ \hline \multirow{4}{*}{Beta} & low & 0.25 & 0.18 & 1.07 & 0.15 & -2.88 & **-4.31** & -4.22 \\ & mid & 0.36 & 0.64 & 0.91 & 0.18 & -1.27 & **-4.26** & -3.58 \\ \(\alpha=3\),\(\beta=18\) & high & 0.64 & 0.32 & 1.08 & 0.14 & -0.02 & **-1.06** & -0.34 \\ \hline \multirow{4}{*}{Gaussian} & low & -3.57 & -3.65 & 1.02 & 0.08 & -3.78 & **-5.88** & -5.84 \\ & mid & -2.02 & -2.21 & 1.10 & 0.10 & -2.08 & **-5.61** & -5.06 \\ \(\mu=7.5\),\(\sigma=1\) & high & -0.08 & -0.30 & 0.92 & 0.13 & -0.68 & **-1.06** & -0.75 \\ \hline \multicolumn{2}{c}{overall average} & -0.85 & -0.89 & 0.91 & 0.099 & -1.85 & **-3.67** & -3.34 \\ \hline \end{tabular} \end{table} Table 1: Performance on standard test functions. We use this section to demonstrate that SBES can exactly achieve the described objective. In the following experiments, we use stochastic Gradient Descent(SGD) to find the global optimum of multidimensional unimodal functions defined on compact hypercubes. The noise present in the evaluation of the truth functions has standard deviation of 0.1. All stochastic gradients are estimated via finite-difference stochastic approximation (FDSA).The SBES algorithm is applied as a stepsize rule of SGD, which is compared against a selected subset of ad-hoc benchmark stepsize formulas including harmonic, AdaGrad and RMSProp. Our assumption is still that the underlying function is noisy and expensive to evaluate, resulting in hard-to-obtain gradients. For this reason, we restrict our attention to the performance of each stepsize rule after 10 iterations of SGD. We use the distance between the furthest vertex and the known optimum \(x^{*}\) as a reference for picking the starting point of SGD. Denote this distance to be \(d\). Then an initial point is randomly chosen from a region in-between two spheres defined by \(\mathcal{B}(x^{*},r_{2})\backslash\mathcal{B}(x^{*},r_{1})\), in which \(r_{1},r_{2}\) are a proportion of \(d\). See table 2 for the specific choice of \(r_{1},r_{2}\) for each category of initial points. There are two categories of unimodal test functions we have used: convex and non-convex. For the convex test functions, we picked the Bohachevsky, Rotated Hyper-Ellipsoid and Sum of Different Powers functions. The Bohachevsky function is defined to be 2-dimensional while the dimension of the other two functions can be varied. We run the simulations with the choices of dimension for the other two test functions to be 5, 10 and 20. The non-convex test functions Figure 2: Average reduction in the initial distance to the global maximum within 10 iterations on far starting points. consist of multivariate Gaussian density functions in the hypercube \(x_{i}\in[0,5]\) for \(i=2,10\) with different means and covariance matrices. We provide two versions of the SBES stepsize rule: SBES-single and SBES-mix. The sampled beliefs in SBES-single all have the same shape and only vary in the horizontal localities. For example, we propose a family of gaussian density curves with the same variance but different means in the non-convex experiments. 
The second version, SBES-mix, employs sampled beliefs that have different localities as well as shapes, which can result from varying parameters within the same parametric family or from different choices of parametric family. We can see from table 2 that both SBES-single and SBES-mix achieve the highest reduction in the distance between a Gradient Descent iterate and the global optimum in all categories of starting position, especially for far initializations (highlighted in bold). Figure 2 further stresses that the L2-distance from the starting point to the optimum is substantially reduced after only the first iteration. One key note is that all of harmonic, RMSProp and AdaGrad need to be tuned for different underlying functions in order to perform reasonably. However, tuning is not a remedy for the problem of initialization either. It is common that pre-tuned stepsize rules converge on some initial points but fail on many others. In practice, retuning a stepsize formula for different starting points on the same problem is simply unrealistic. By contrast, SBES uses the exact same set of parametric beliefs within a category (e.g., convex) of functions and yields consistent performance across various types of initialization. If we refer to fig. 3, it is easy to see that RMSProp and AdaGrad zig-zag around the optimum and harmonic is too conservative in its steps. Meanwhile, SBES chooses a stepsize based on how close the current iterate is to the optimum. \begin{table} \begin{tabular}{l c|c c c c c} \hline \hline & initialization & Harmonic & RMSProp & AdaGrad & SBES-single & SBES-mix \\ \hline nonconvex & close & -2.81 & -2.06 & -1.50 & **0.36** & 0.31 \\ unimodal & medium & 0.38 & 0.30 & 0.26 & 1.48 & **1.53** \\ functions & far & 2.29 & 1.78 & 1.26 & 2.34 & **2.76** \\ \hline convex & close & -21.87 & 20.07 & 21.74 & **22.54** & 22.52 \\ unimodal & medium & 23.48 & 56.14 & 45.28 & 66.97 & **67.12** \\ functions & far & 46.49 & 69.89 & 54.81 & 89.67 & **89.70** \\ \hline overall average & 7.99 & 24.35 & 20.31 & 30.56 & **30.66** \\ \hline \hline \end{tabular} _Notes._ Each test function is tested on 20 random starting points from each category. The radii of spheres chosen for each category of initial points are: close (\(r_{1}=0,r_{2}=\frac{1}{4}d\)), medium (\(r_{1}=\frac{1}{4}d,r_{2}=\frac{3}{4}d\)), far (\(r_{1}=\frac{3}{4}d,r_{2}=d\)). The largest reduction in \(\left\|x^{0}-x^{*}\right\|\) of each row is highlighted in bold. \end{table} Table 2: Average reduction in L2-Distance from the global optimum (\(\left\|x^{0}-x^{*}\right\|\)) after 10 iterations of stochastic Gradient Descent with different starting points. Figure 3: Comparison of Steps for Gradient Descent on Maximizing a 2-dimensional Gaussian Density Function. ## 6 Conclusion We have proposed a novel Entropy Search (ES) based algorithm, Sampled Belief Entropy Search (SBES), built upon a discrete parametric belief model instead of Gaussian Process (GP) regression. While SBES inherits the logic of maximizing the one-step expected reduction in the entropy of the distribution of \(x^{*}\), it successfully transforms the computationally intractable objective function of ES into an analytic formula under the assumption that the truth function is one-dimensional and unimodal. This is a significant step in the effort to simplify the computation of expected entropy reduction, as is attempted in Predictive Entropy Search (PES) and Max-Value Entropy Search (MES).
While both PES and MES still require numerical approximation as well as sampling techniques, the surrogate objective function of SBES is ready to use. In addition to the computational benefits, SBES achieves the following: 1) We have proved a lower bound on the one-step information gain produced by SBES, which turns out to be a problem-dependent quantity. 2) Experiments with synthetic functions also show that SBES often outperforms both ES-based and non-ES-based algorithms within a limited experimental budget. 3) Among all the test functions we use, an important observation is that SBES is able to handle functions with a non-smooth structure much better than algorithms modeled on GP, such as PES, MES, GP-UCB and GP-EI. The reason could be that the quality of the estimation produced by the GP-based methods depends on how well the Gaussian Process regression captures the curvature of the truth function. Hence, when the truth function is non-smooth or noisy, GP falls short of reproducing the underlying function precisely, leading to inaccurate predictions. In contrast, SBES focuses on learning the right comparative relations between any two points through the probability of correct region assignment. 4) On top of this, SBES makes estimates according to the posterior distribution of \(x^{*}\), which in combination with the previous feature enables SBES to cope with noisy and expensive experiments. 5) Lastly, in the experiments where SBES is applied as a stepsize rule for Gradient Descent (GD), it is evident that SBES is less sensitive to the choice of the starting point of GD than ad hoc stepsize formulas such as harmonic, RMSProp and AdaGrad. SBES is also more robust in the sense that it does not need much tuning across different types of functions to produce good results, unlike harmonic, RMSProp and AdaGrad.
2309.02936
EdgeFL: A Lightweight Decentralized Federated Learning Framework
Federated Learning (FL) has emerged as a promising approach for collaborative machine learning, addressing data privacy concerns. However, existing FL platforms and frameworks often present challenges for software engineers in terms of complexity, limited customization options, and scalability limitations. In this paper, we introduce EdgeFL, an edge-only lightweight decentralized FL framework, designed to overcome the limitations of centralized aggregation and scalability in FL deployments. By adopting an edge-only model training and aggregation approach, EdgeFL eliminates the need for a central server, enabling seamless scalability across diverse use cases. With a straightforward integration process requiring just four lines of code (LOC), software engineers can easily incorporate FL functionalities into their AI products. Furthermore, EdgeFL offers the flexibility to customize aggregation functions, empowering engineers to adapt them to specific needs. Based on the results, we demonstrate that EdgeFL achieves superior performance compared to existing FL platforms/frameworks. Our results show that EdgeFL reduces weights update latency and enables faster model evolution, enhancing the efficiency of edge devices. Moreover, EdgeFL exhibits improved classification accuracy compared to traditional centralized FL approaches. By leveraging EdgeFL, software engineers can harness the benefits of federated learning while overcoming the challenges associated with existing FL platforms/frameworks.
Hongyi Zhang, Jan Bosch, Helena Holmström Olsson
2023-09-06T11:55:41Z
http://arxiv.org/abs/2309.02936v1
# EdgeFL: A Lightweight Decentralized Federated Learning Framework ###### Abstract Federated Learning (FL) has emerged as a promising approach for collaborative machine learning, addressing data privacy concerns. However, existing FL platforms and frameworks often present challenges for software engineers in terms of complexity, limited customization options, and scalability limitations. In this paper, we introduce EdgeFL, an edge-only lightweight decentralized FL framework, designed to overcome the limitations of centralized aggregation and scalability in FL deployments. By adopting an edge-only model training and aggregation approach, EdgeFL eliminates the need for a central server, enabling seamless scalability across diverse use cases. With a straightforward integration process requiring just four lines of code (LOC), software engineers can easily incorporate FL functionalities into their AI products. Furthermore, EdgeFL offers the flexibility to customize aggregation functions, empowering engineers to adapt them to specific needs. Based on the results, we demonstrate that EdgeFL achieves superior performance compared to existing FL platforms/frameworks. Our results show that EdgeFL reduces weights update latency and enables faster model evolution, enhancing the efficiency of edge devices. Moreover, EdgeFL exhibits improved classification accuracy compared to traditional centralized FL approaches. By leveraging EdgeFL, software engineers can harness the benefits of federated learning while overcoming the challenges associated with existing FL platforms/frameworks. Federated Learning, Machine Learning, Software Engineering, Decentralized Architecture ## I Introduction Federated learning is a machine learning approach that enables training models on decentralized data sources while preserving data privacy. Traditional machine learning models typically require centralizing all the data in one location for training, which can be challenging due to data privacy concerns, legal restrictions, and computational constraints. Federated learning addresses these limitations by allowing models to be trained directly on the devices or servers where the data is generated or stored, without the need to transfer it to a central location [1]. The key advantage of federated learning is its ability to preserve data privacy. Since the data remains on the client devices or servers, it alleviates concerns related to data exposure or sharing sensitive information. Instead of sharing raw data, only the updates to the model parameters, often encrypted or anonymized, are communicated during the training process. This decentralized approach enables organizations and individuals to collaborate on model training without compromising the privacy and security of their data. Federated learning finds applications in various domains, such as healthcare, finance, Internet of Things (IoT), and edge computing [2][3][4]. It allows organizations to leverage the collective knowledge of distributed datasets without violating privacy regulations or exposing sensitive information. Additionally, federated learning can reduce the reliance on costly data transfers and enable training on resource-constrained devices. Despite the benefits of Federated Learning, Federated learning presents unique challenges for software engineers involved in AI engineering [5]. 
They need to design and develop distributed systems that can effectively handle the communication, coordination, and synchronization between multiple clients and a central server. This requires scalable and fault-tolerant systems to accommodate large-scale deployments while optimizing network utilization and ensuring data consistency across distributed nodes. In addition, algorithmic development and optimization are essential for efficient federated learning. Software engineers need to understand and implement federated learning algorithms and optimization techniques, including Federated Averaging and adaptive learning rate strategies [6]. Developing algorithms that converge to high-quality models while minimizing resource consumption is a challenging task for software engineers. Integration with edge devices and IoT environments adds another layer of complexity. Software engineers must consider the resource constraints of these devices, such as limited computational power and energy consumption [7]. They need to optimize models and algorithms to fit within these constraints and devise efficient mechanisms for model deployment, updates, and synchronization on edge devices. Moreover, software engineers may face challenges related to the availability of comprehensive tooling and frameworks for federated learning. They may need to adapt existing tools, develop custom solutions, or contribute to open-source projects to address specific needs. Building efficient workflows, debugging mechanisms, and monitoring tools tailored to federated learning systems is a demanding task. The existing federated learning (FL) platforms and frameworks from both academia and industry are usually complex to use. FL involves distributed systems, optimization algorithms, privacy techniques, and machine learning models. Integrating these components into a cohesive platform or framework can result in intricate systems with numerous dependencies and configurations. Understanding and navigating these technical intricacies can be challenging for users, especially those without a strong background in distributed systems or machine learning. In addition, FL is applied across a wide range of domains and use cases, each with its specific requirements and constraints. Designing a platform or framework that accommodates this heterogeneity can lead to increased complexity. It becomes a challenge to strike a balance between providing flexibility for customization and maintaining simplicity for ease of use. Last but not least, most of the existing FL frameworks use centralized aggregation in federated learning application development. Relying on a central server creates problems when deploying into production, such as single point of failure, large communication overhead, lack of scalability, etc. In this paper, we present EdgeFL, a lightweight federated learning (FL) framework designed specifically for edge computing environments, aiming to address the challenges associated with centralized aggregation. By adopting an edge-only model training and aggregation approach, EdgeFL eliminates the need for a central server, thereby enabling seamless scalability across diverse use cases. The framework offers a straightforward integration process, requiring only four lines of code (LOC) for software engineers to incorporate FL functionalities into their AI products. Moreover, EdgeFL facilitates the customization of aggregation functions, empowering engineers to customize according to their needs. 
Thus, the contributions of the paper are the following: 1). We introduce EdgeFL1, the first scalable edge-only FL framework. To accomplish easy-implementation and scalable model training capacity, simple API design and learning flow abstraction are built. 2). To expedite industrial FL training and support large-scale node communication among edges, we suggested a decentralized FL architecture and learning algorithm which enables asynchronous model training. The architecture could serve as a model for future edge-only FL development and study. 3). Based on flexible user definitions, engineers and researchers can quickly construct a customizable model and aggregation approach for diverse needs. 4). EdgeFL offers scalable and seamless deployment capabilities to facilitate rapid prototyping and production-level training for both industrial and research purposes. The remainder of this paper is structured as follows. In Section II, we introduce the background and related work of this study. Section III details our research method, including the implementation, data distribution, machine learning methods applied and evaluation metrics. Section IV presents the system design of our proposed EdgeFL. Section V evaluates our proposed framework and compared it with existing Federated Learning frameworks/platforms. Section VI outlines the discussion on our observed results. Finally, Section VII presents conclusions and future work. ## II Background and Related Work ### _Federated Learning_ Federated learning (FL) is a machine learning paradigm that allows multiple decentralized devices or entities to collaboratively train a shared model without sharing their raw data [8][9][10]. In traditional machine learning approaches, a central server or data aggregator collects and stores all the training data from different sources. The central server then trains a global model using the combined data. However, this centralized approach raises concerns regarding data privacy, security, and the practicality of transferring large volumes of data to a central location [6]. With the rise of edge computing and distributed data sources, there is a growing need to leverage data that resides on various devices or entities while respecting data ownership and privacy. Federated learning addresses this challenge by allowing local devices, such as smartphones, IoT devices, or edge servers, to train a shared model using their locally stored data. As shown in Figure 1, the FL process typically involves the following steps: 1). Initialization: The central server initializes a global model and distributes it to the participating devices or entities. 2). Local Model Training: Each device or entity independently trains the global model using its local data, without sharing the raw data. The local model is trained using gradient descent or other optimization algorithms, updating the model parameters based on its local data. 3). Model Aggregation: After local training, the devices or entities send their locally computed model updates (such as gradients) to the central server. The central server aggregates these updates using techniques like Federated Averaging or Secure Aggregation to obtain a refined global model. 4). Iterative Process: The model training and aggregation process iterates over multiple rounds, allowing the global model to improve over time. Each round typically consists of local model training, model aggregation, and communication between devices and the central server. 
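As a concrete illustration of the local-training and aggregation steps above, a minimal weighted Federated Averaging step over PyTorch state dicts can be sketched as follows; this is a generic sketch rather than the implementation of any particular platform, and all names are illustrative.
```
import copy
import torch

def federated_average(client_states, client_sizes):
    """Weighted average of client model parameters (Federated Averaging).

    client_states : list of state_dicts returned by each client's local training.
    client_sizes  : number of local training samples per client (aggregation weights).
    """
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# One round, server side: aggregate the locally trained weights and
# redistribute the refined global model to the participating clients, e.g.
# global_model.load_state_dict(federated_average(states, sizes))
```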
FL enables collaborative learning from a diverse range of devices or entities, each having its unique data distribution and characteristics. This diversity helps improve the generalization and robustness of the global model by capturing a more comprehensive representation of the data. ### _Existing Federated Learning Platform/Framework_ There are several existing federated learning (FL) platforms and frameworks available that facilitate the development and deployment of FL systems. TensorFlow Federated is an open-source research-oriented framework developed by Google. It extends TensorFlow with FL capabilities, enabling the implementation of FL algorithms and protocols. TFF provides a programming model and APIs for expressing federated computations, along with tools for simulation and evaluation of FL algorithms [11]. PySyft is an open-source library built on top of PyTorch that focuses on privacy-preserving FL and secure multi-party computation [12]. It provides a high-level API for expressing FL computations and offers privacy techniques like differential privacy and secure aggregation. It allows users to work with remote data and models, enabling collaborative learning while protecting privacy. FATE is an industrial-grade FL framework developed by Webank Research [13]. It provides a secure and flexible infrastructure for collaborative model training across distributed entities while preserving data privacy. FATE supports a wide range of machine learning algorithms and provides functionalities for data preprocessing, model evaluation, and privacy protection. It has been widely adopted in industries such as finance, healthcare, and telecommunications. LEAF is an open-source FL framework developed by NVIDIA [14]. It offers a comprehensive set of tools and functionalities for researchers and practitioners to experiment with and evaluate FL algorithms. PaddleFL is a FL framework developed by PaddlePaddle, an open-source deep-learning platform [15]. It provides a distributed infrastructure for training large-scale FL models. PaddleFL supports various optimization algorithms, model architectures, and communication protocols. However, the current FL platforms and frameworks available in academia and industry are complex, demanding a thorough understanding of FL concepts. As shown in Table I, the majority of existing FL systems require the implementation of more than 100 lines of code (LOC) to deploy a FL application, providing limited flexibility for aggregation function customization and lacking support for asynchronous communication schemes. These constraints present significant challenges to software engineers attempting to integrate FL into production environments. Furthermore, because all of these platforms/frameworks rely on a centralized aggregation server, issues such as single-point failure and scalability constraints arise. ## III Research Method In this research, we adopted the empirical methodology and learning procedure outlined by Zhang [16] to conduct a comprehensive quantitative measurement and evaluation of our proposed EdgeFL framework in comparison to existing Federated Learning platforms/frameworks. We aim to present a thorough and robust assessment of the performance and effectiveness of the EdgeFL framework. 
In the subsequent sections, we provide detailed insights into our implementation approach, present the method employed for dataset partitioning and distribution for heterogeneous simulation, discuss the evaluation metrics employed, and elaborate on the machine learning methods utilized during the experimental analysis. ### _Implementation_ In order to thoroughly evaluate the performance and capabilities of the EdgeFL framework, we conducted experiments using two widely recognized machine learning applications: digit recognition and object recognition. For these experiments, we leveraged the MNIST and CIFAR-10 datasets, which are extensively used in the research field. To facilitate deep learning training and testing, we developed the applications using the PyTorch backend. With the integration of EdgeFL, the FL functionality is seamlessly integrated into these machine learning applications with the addition of just four lines of code (LOC). Furthermore, we containerized the applications, enabling easy deployment on edge devices while maintaining their functionality and performance. The flexibility of the EdgeFL framework allows it to be constructed on various container orchestration clusters, such as Kubernetes, Docker Swarm, etc. In this paper, we utilized Docker Swarm [17] as the cluster of choice. Docker Swarm offers an efficient and scalable environment for managing containerized applications. The services within Docker Swarm facilitate seamless communication among containers, while an internal DNS resolver ensures peer node service communication. By utilizing the capabilities of Docker Swarm, we were able to create a robust and scalable deployment environment for the EdgeFL framework, ensuring its suitability for edge devices and distributed computing scenarios.
Fig. 1: Diagram of Federated Learning training process
### _Dataset Distribution_ For the purpose of this study, we used two kinds of edge data distribution to analyze system performance for heterogeneous simulation. #### Iii-B1 Uniform Distribution Within this experimental setup, the training data samples were distributed among the edge nodes following a uniform distribution. This distribution ensured an equal likelihood of data samples from each target class. #### Iii-B2 Normal Distribution Within this configuration, the number of samples in each class within each edge node follows a normal density function. Mathematically, this can be expressed as: \[X\sim\mathcal{N}(\mu,\sigma^{2})\] where \(\mu\) and \(\sigma\) are defined as: \[\mu=\frac{k\times N}{K}\text{, }\sigma=0.2\times N\] In the above equations, \(k\) represents the ID of each edge node, \(K\) denotes the total number of edge nodes, and \(N\) corresponds to the total number of target classes in the training data. This configuration aims to provide varied distributions and different numbers of samples among different edge nodes, allowing each class to have a probability of having the majority of samples in a specific node. ### _Machine Learning Method_ The implementation of the models in this study utilized Python and relied on the following libraries: torch 1.6.0 [18], torchvision 0.7.0 [19], and scikit-learn [20], which were applied in model construction and evaluation. To achieve satisfactory classification results, two distinct convolutional neural networks (CNN) [21] were trained for the MNIST and CIFAR-10 datasets.
For the MNIST application, the CNN architecture comprised two 5x5 convolutional layers (with 10 output channels in the first layer and 20 in the second), each followed by 2x2 max pooling. Additionally, a fully connected layer with 50 units employing the ReLU activation function and a linear output layer were included. For the CIFAR-10 application, the CNN architecture featured four 5x5 convolutional layers (with 66 output channels in the first layer, 128 in the second with a stride of 2, 192 in the third, and 256 in the fourth with a stride of 2). Furthermore, two fully connected layers utilizing the ReLU activation function, with 3000 and 1500 units respectively, were incorporated along with a linear output layer. ### _Evaluation Metrics_ To assess the effectiveness of EdgeFL, three key metrics were selected: weights update latency, model evolution time, and model classification performance. #### Iii-D1 Weights update latency Weights update latency measures the time it takes for the model to be transmitted. In centralized architectures which applied by existing FL platforms/frameworks, the central aggregation server collects the models. However, in the decentralized architecture of EdgeFL, where the aggregation function is moved to the edge, a peer node is ready to receive the updated model. The average weights update latency across all edge nodes during one training round is calculated. This metric provides insights into the network situation and communication overhead of each architecture option. Measurement of this metric involves checking the sending and receiving timestamps in all model receivers. #### Iii-D2 Model Evolution time Model evolution time represents the time difference between two different versions of the deployed model at the edge nodes. Similar to weights update latency, the average model evolution time across all edge nodes during one training round is determined. This metric highlights the speed at which local edge devices update their knowledge, which is crucial for systems requiring quick adaptation to rapidly changing environments. Model evolution time is measured in all edge nodes by examining the model deployment timestamp. #### Iii-D3 Model Classification Performance Model classification performance is a vital metric that indicates the quality of the trained model. It measures the percentage of correctly recognized images among the total number of testing images. The classification performance is evaluated on each edge device using their updated models. The test sample distribution should align with the training samples (local test set). The average classification performance across all edge nodes is reported. ## IV System Design of EdgeFL In this section, we present a comprehensive overview of the system design of EdgeFL. In addition, the APIs, functions and EdgeFL learning life-cycle are also presented in this section. ### _System Design_ The EdgeFL allows for easy scalability, fault tolerance, and customization of the FL process. The framework consists of two main components: FL edge nodes and registration nodes. Edge nodes serve as independent participants, facilitating distributed and privacy-preserving model training without the need for a centralized server. The registration nodes act as coordination points to connect the FL edge nodes, enabling them to discover and communicate with each other in a decentralized manner. * FL Edge Nodes: The FL edge nodes are deployed on edge devices and play a crucial role in the FL process. 
Each FL edge node serves as a participant in the federated learning system. The FL edge nodes execute the FL training algorithm, exchange models with other nodes, and perform local model updates. The FL edge node code provided in the implementation utilizes the Flask framework to serve machine-learning model files requested by other peers. * Registration Nodes: The registration nodes act as coordinators for the FL edge nodes. They maintain a list of active peers and provide services for registration, unregistration, and retrieval of peer information. The registration nodes enable FL client nodes to discover and communicate with each other. The implementation of the tracker server also utilizes the Flask framework, which exposes several APIs to facilitate the FL process. ### _APIs and Services_ Table II summarises the most important APIs and services of EdgeFL, including edge node registration and unregistration, peer information retrieval, and model file serving. * Registration API: FL edge nodes send a registration request to the registration nodes through this API. The request includes the hostname of the FL edge node. Upon successful registration, the registration nodes add the peer information to their list of active peers. * Unregistration API: FL edge nodes use this API to send an unregistration request to the registration nodes when they no longer participate in the FL process. The request includes the hostname of the FL edge node. The registration nodes remove the corresponding peer information from their list of active peers. * Peer Information Retrieval API: FL edge nodes can query the registration nodes for a list of active peers using this API. The registration nodes respond with the list of active peer information, allowing FL edge nodes to discover and communicate with other peers. * Model File Serving API: FL edge nodes expose this API to serve the requested model file. When a peer requests the latest model, the FL edge node responds by sending the machine-learning model file in the HTTP response. ### _Function Details and Example Usage_ The proposed EdgeFL framework allows software engineers to easily incorporate federated learning (FL) functionalities into their AI products. In contrast to complex FL platforms and frameworks, EdgeFL provides a streamlined implementation that requires only four lines of code (LOC). Because of this simplicity, software engineers can quickly integrate FL capabilities into their existing AI applications without requiring significant code changes or extensive re-engineering efforts. Listing 1 illustrates an example usage of EdgeFL.
```
# --- Continue from node training part ---
# --- Initialize peer instance ---
peer = Peer(configs)
# --- Start peer instance ---
peer.start()
# --- Pull model from active peers and start aggregation ---
w_latest = peer.aggregation_func()
model.load_state_dict(w_latest)
peer.unregister_peer()
```
Listing 1: Usage example of EdgeFL. _peer = Peer(configs)_: Initializing and creating an instance of the Peer class, representing a participant in the EdgeFL framework. The configs include the addresses of the registration nodes and the configuration of the customized aggregation function. This initialization step ensures that the FL edge node is properly configured to connect to the registration nodes and participate in the FL training and aggregation tasks.
The peer object serves as a handle through which the FL edge node can interact with other peers, fetch models, register with the registration nodes, and perform aggregation operations. _peer.start()_: The function initiates the execution of the FL edge node within the EdgeFL framework. When invoked, this function triggers a series of actions that enable the FL edge node to participate in the FL process. It includes registering the FL edge node with the registration nodes, establishing connections with other peers and starting a background instance to serve asynchronous file requests from peers. By calling "peer.start()", the FL edge node becomes an active participant in the EdgeFL framework, contributing to the collaborative model learning while leveraging edge devices' capabilities. _peer.aggregation_func()_: The function performs the aggregation process. When called, this function retrieves models from other FL edge nodes, as identified through the registration nodes, and applies the aggregation algorithm to combine these models into a single updated model. The aggregation function facilitates the collaborative nature of FL by leveraging the contributions of multiple peers to improve the overall model's accuracy and performance. By executing "peer.aggregation_func()", the FL edge node actively contributes to the iterative model aggregation process, promoting the collective intelligence of the EdgeFL framework and enhancing the final model's quality. _peer.unregister_peer()_: The function enables the FL edge node to gracefully exit from the EdgeFL framework. When invoked, this function notifies the registration nodes about the intention to unregister, providing the necessary information such as the hostname of the FL edge node. By calling "peer.unregister_peer()", the FL client node initiates the process of removing itself from the active participant list maintained by the registration nodes. This action ensures the proper management of participants within the EdgeFL framework and allows for efficient resource allocation and coordination among the remaining active peers. ### _EdgeFL Learning Life-Cycle_ The life cycle of the EdgeFL framework involves several key steps for an individual edge node to join, train, share models, aggregate, and eventually leave the FL process. Algorithm 1 provides a detailed FL learning process of an individual edge node. The following is the description of each step: 1. Edge Node Joining: The edge node initializes by creating an instance of the Peer class and background instance for model requests. The node then connects to the registration nodes, registers itself as an active participant, and obtains information about other peers. 2. Model Training: The edge node starts the FL training process, performing local model training using its own dataset. It iteratively updates its local model to improve its performance. 3. Sharing Models: The FL client node retrieves models from other peers in the FL framework identified through the registration nodes. It fetches the latest models from other peers and incorporates them into its local model updates, benefiting from the knowledge and insights of other participants. It is worth noting that the model retrieval process occurs without disrupting the ongoing model training of the edge nodes, thanks to the background serving instance. This asynchronous model aggregation mechanism ensures uninterrupted model training while enabling the FL client node to actively contribute to the collaborative learning process. 4. 
Model Aggregation: The FL client node executes an aggregation function, combining the locally updated models with the models obtained from other peers. The aggregation function integrates the diverse models to generate a new aggregated model that captures the collective knowledge of all participating nodes. It is important to highlight that the aggregation function in the EdgeFL framework can be customized to specific analysis and case requirements, providing software engineers with the flexibility to define and implement alternative aggregation functions that align with their specific needs. This paper utilizes a default averaging function for general performance analysis. 5. Edge Node Leaving: When an edge node intends to leave the EdgeFL system, the FL client node notifies the registration server of its hostname for identification. The registration server updates the active participant list, removing the leaving edge node. However, it is worth noting that edge nodes have the option to remain in the system even after completing their learning process. By choosing to stay, these nodes contribute by providing their completed learning models to accommodate newly joined nodes. This approach ensures that the system benefits from the availability of finished-learning models, facilitating a seamless on-boarding experience for new participants in the EdgeFL framework. This life cycle repeats as new edge nodes join the FL framework, contribute to the training and aggregation processes, share their models, and eventually leave when they decide to end their participation. The EdgeFL allows for continuous collaborative learning and model improvement while maintaining the privacy and autonomy of individual edge nodes. ### _Containerization and Scalable Deployment_ To ensure easy deployment on edge devices, the EdgeFL framework was containerized using containerization technologies, namely, Docker. The containerization process involved encapsulating all the necessary components, dependencies, and configurations of EdgeFL into a lightweight and portable container image. This approach allows for seamless deployment across a variety of edge devices, regardless of the underlying operating system or hardware architecture. The architectural diagram illustrated in Figure 3 showscases the seamless and scalable deployment of the EdgeFL framework. Within this architecture, each edge node container includes services such as local model training, model aggregation, and model serving. Simultaneously, the registration node container contains services for registration and peer discovery. Through inter-connectivity, seamless communication is facilitated among all nodes within the framework. It is important to note that the number of registration nodes can be expanded in alignment with the number of participating edge nodes. This expansion enables efficient coordination and management within the EdgeFL framework, ensuring smooth coordination. Fig. 2: The Learning Life-Cycle of EdgeFL, including joining the FL process, model training, sharing, aggregation, and eventual node leaving. These stages collectively define the operational flow of EdgeFL within an edge node. By containerizing EdgeFL, software engineers can easily distribute and deploy the framework on edge devices without worrying about intricate installation procedures or compatibility issues. 
The containerized EdgeFL image contains all the required software libraries, frameworks, and configurations, providing a self-contained environment for running the FL client nodes. Additionally, containerization ensures that the EdgeFL framework remains isolated and independent, preventing conflicts with other software components on the edge device.
Fig. 3: Containerized architecture for seamless and scalable deployment of the EdgeFL framework
## V Evaluation Results This section presents the experimental results of EdgeFL, focusing on three key aspects as defined in Section III-D: (1) Weights update latency, which measures the time required to transmit model weights; (2) Model evolution time, which quantifies the duration for obtaining a new version of the model; and (3) Classification accuracy, evaluated on the edge dataset. To ensure an adequate number of samples on each edge node, our simulations involve a total of 10 nodes, with all nodes actively participating in the training procedure in both MNIST and CIFAR10 applications. This configuration enables comprehensive analysis and evaluation of the EdgeFL framework, providing valuable insights into its performance and effectiveness in real-world scenarios. First, we examine the weights update latency and model evolution time. Our experimental results (Table III) show that our proposed EdgeFL framework outperforms existing federated learning platforms/frameworks in terms of weights update delay and model evolution time in both MNIST and CIFAR10 applications. EdgeFL has smaller weights update delays across the board. The reduced model delay indicates improved efficiency in transmitting models among edge nodes, demonstrating the effectiveness of our decentralized architecture. EdgeFL also excels at achieving rapid model evolution in scenarios with unequally distributed datasets, by leveraging its decentralized architecture and effectively capitalizing on the pull-based model-sharing mechanism. This mechanism allows edges to promptly update their local models, enhancing their knowledge based on the available data. Consequently, EdgeFL outperforms other frameworks by reducing the time required for model evolution in situations where dataset distribution is imbalanced. These findings highlight the advantages of EdgeFL over traditional federated learning approaches. The superior performance in terms of model delay and evolution time can be attributed to the streamlined communication and aggregation processes within EdgeFL. By utilizing the power of edge computing, EdgeFL minimizes the network overhead and facilitates efficient model updates. The observed improvements in model delay and evolution time have significant implications for real-world applications. The reduced delays enable faster transmission of updated models, ensuring timely access to the most recent knowledge across the edge network. Additionally, the shortened evolution time empowers edge devices to promptly adapt to evolving data features, making EdgeFL highly suitable for use cases that require quick model evolution and responsiveness to changing environments. In addition to evaluating weights update delay and model evolution time, we conducted extensive accuracy comparisons between EdgeFL's decentralized averaging and the widely used FedAvg algorithm [8] found in existing federated learning platforms/frameworks.
Figure 4 demonstrates the results, which reveal that decentralized averaging outperforms the centralized FedAvg approach when testing the models on edge devices. Notably, the average accuracy achieved by EdgeFL's decentralized averaging is approximately 2% higher for MNIST and 5% for CIFAR-10 datasets. The observed increase in accuracy showcases the efficacy of EdgeFL's decentralized averaging mechanism in improving model performance. With the collective knowledge and insights from distributed edge devices, EdgeFL facilitates enhanced model convergence. As a result, EdgeFL enables more accurate and refined models, which are better equipped to handle the challenges of edge computing environments. Furthermore, our study demonstrates the effectiveness of the asynchronous join feature in EdgeFL, which enables new nodes to seamlessly participate in the existing system and quickly acquire the latest knowledge without requiring retraining from scratch. As shown in Figure 5, we observe that when new nodes (node id: 11, 12) join the system midway through the training process, they promptly attain the same accuracy level as the system accuracy. This outcome showcases the capability of EdgeFL to facilitate efficient knowledge transfer and rapid model convergence for newly joined nodes.
Fig. 4: The comparison of classification accuracy by utilizing FedAvg (commonly used by existing FL platforms/frameworks) and the EdgeFL framework
Fig. 5: Midway joined node classification performance in both MNIST and CIFAR-10 applications
The asynchronous join functionality in EdgeFL offers significant advantages in terms of scalability and time-to-adaptability. By allowing new nodes to directly benefit from the collective intelligence of the system without the need for extensive training, EdgeFL significantly reduces the computational burden and time required for onboarding new participants. This feature is particularly valuable in dynamic environments where nodes frequently join and leave the system. The demonstration of the asynchronous join capability in EdgeFL emphasizes its potential for real-world deployments, especially in scenarios where rapid knowledge transfer and quick integration of new nodes are crucial. By leveraging the existing knowledge base and facilitating seamless incorporation of new nodes, EdgeFL empowers federated learning systems to efficiently adapt and evolve over time. These findings highlight the practical benefits of EdgeFL's asynchronous join mechanism and its ability to enhance the scalability and flexibility of federated learning in dynamic edge computing environments.
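For reference, the decentralized averaging compared against FedAvg above can be sketched as a plain average of the local weights with the peer weights pulled at aggregation time. This is an illustrative simplification (the aggregation function in EdgeFL is user-customizable), and the names below are assumptions rather than the framework's actual API.
```
import copy

def decentralized_average(local_state, peer_states):
    """Average the local model parameters with state dicts pulled from active peers.

    local_state : state_dict of the node's own locally trained model.
    peer_states : list of state_dicts fetched from the currently active peers.
    """
    states = [local_state] + list(peer_states)
    n = float(len(states))
    averaged = copy.deepcopy(local_state)
    for key in averaged:
        averaged[key] = sum(state[key].float() for state in states) / n
    return averaged
```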
This capability is crucial for rapidly adapting to dynamically changing environments, making EdgeFL well-suited for real-world deployments. The evaluation of classification accuracy revealed that EdgeFL's decentralized averaging mechanism consistently outperformed the centralized FedAvg algorithm when testing the model on edge devices. The observed increase of approximately 2% and 5% in average accuracy for MNIST and CIFAR-10 datasets demonstrates the effectiveness of EdgeFL in achieving better classification performance. Moreover, the framework enabled faster model convergence, contributing to improved overall efficiency and effectiveness. Additionally, our experiments demonstrated the asynchronous join feature of EdgeFL, whereby a new node joining the system can promptly access the latest knowledge without requiring retraining from scratch. The experimental results showed that a newly joined node quickly achieved a high accuracy level, even when joining the system halfway through the training process. This feature highlights the scalability and efficiency of EdgeFL, as it enables seamless integration of new nodes into the existing network, without compromising overall performance. Overall, the evaluation and analysis of the EdgeFL framework demonstrated its efficacy in addressing the challenges of scalable and efficient edge deployment in Federated Learning. The framework exhibited better performance in terms of weight update latency, model evolution time, and classification accuracy when compared to existing solutions. The decentralized averaging mechanism, along with its pull-based model-sharing approach, proved to be particularly advantageous for achieving faster model updates and improved convergence. These findings highlight the potential of EdgeFL for various real-world applications, particularly in industrial scenarios where timely and accurate model updates are critical. ## VII Conclusion In this paper, we presented EdgeFL, a novel edge-only decentralized federated learning framework that addresses the challenges of scalability, integration, and efficiency in edge deployments. By leveraging the edge-only model training and aggregation approach, EdgeFL eliminates the need for a central server, allowing for seamless scalability across diverse use cases. The framework offers a straightforward integration process, requiring only four lines of code (LOC) for software engineers to incorporate FL functionalities into their AI products. Additionally, EdgeFL provides engineers with the flexibility to customize aggregation functions according to their specific needs and requirements, enhancing the adaptability and versatility of the framework. Our experimental results and evaluation have highlighted the key strengths and advantages of EdgeFL. The framework outperforms existing FL platforms/frameworks in various aspects. It exhibits the capabilities of EdgeFL in reducing weight update latency and model evolution time by 50% and improving classification accuracy by 2% for the MNIST dataset and 5% for the CIFAR dataset compared to existing FL platforms/frameworks. These findings emphasize the potential of EdgeFL in real-world applications, particularly in industrial scenarios where timely and accurate model updates are critical. In future work, we will further validate and expand the capabilities of the proposed EdgeFL framework with more cases. 
We also intend to investigate resource optimization techniques such as model compression and quantization to enhance the communication efficiency of edge devices in EdgeFL. Furthermore, adaptive aggregation strategies that dynamically adjust the aggregation process based on network conditions, device capabilities, and data heterogeneity will also be explored.
2310.01384
Exact Ground States and Phase Diagram of the Quantum Compass Model under an in-plane Field
We consider the square lattice $S$=1/2 quantum compass model (QCM) parameterized by $J_x, J_z$, under a field, $\mathbf{h}$, in the $x$-$z$ plane. At the special field value, $(h_x^\star,h_z^\star)$=$2S(J_x,J_z)$, we show that the QCM Hamiltonian may be written in a form such that two simple product states can be identified as exact ground-states, below a gap. Exact excited states can also be found. The exact product states are characterized by a staggered vector chirality, attaining a non-zero value in the surrounding phase. The resulting gapped phase, which we denote by $SVC$ occupies most of the in-plane field phase diagram. For some values of $h_x>h_z$ and $h_z>h_x$ at the edges of the phase diagram, we have found transitions between the $SVC$ phase and phases of weakly-coupled Ising-chain states, $Z$ and $X$. In zero field, the QCM is known to have an emergent sub-extensive ground-state degeneracy. As the field is increased from zero, we find that this degeneracy is partially lifted, resulting in bond-oriented spin-stripe states, $L$ and $R$, which are each separated from one another and the $SVC$ phase by first-order transitions. Our findings are important for understanding the field dependent phase diagram of materials with predominantly directionally-dependent Ising interactions.
A. D. S. Richards, Erik S. Sørensen
2023-10-02T17:43:52Z
http://arxiv.org/abs/2310.01384v2
# Exact Ground States and Phase Diagram of the Quantum Compass Model under an in-plane Field ###### Abstract We consider the square lattice \(S{=}1/2\) quantum compass model (QCM) parameterized by \(J_{x},J_{z}\), under a field, \({\bf h}\), in the \(x\)-\(z\) plane. At the special field value, \((h_{x}^{*},h_{z}^{*}){=}2S(J_{x},J_{z})\), we show that the QCM Hamiltonian may be written in a form such that two simple product states can be identified as exact ground-states, below a gap. Exact excited states can also be found. The exact product states are characterized by a staggered vector chirality, attaining a non-zero value in the surrounding phase. The resulting gapped phase, which we denote by \(SVC\) occupies most of the in-plane field phase diagram. For some values of \(h_{x}{>}h_{z}\) and \(h_{z}{>}h_{x}\) at the edges of the phase diagram, we have found transitions between the \(SVC\) phase and phases of weakly-coupled Ising-chain states, \(Z\) and \(X\). In zero field, the QCM is known to have an emergent sub-extensive ground-state degeneracy. As the field is increased from zero, we find that this degeneracy is partially lifted, resulting in bond-oriented spin-stripe states, \(L\) and \(R\), which are each separated from one another and the \(SVC\) phase by first-order transitions. Our findings are important for understanding the field dependent phase diagram of materials with predominantly directionally-dependent Ising interactions. Quantum compass models were first introduced as a model of orbital-orbital interactions arising from a Jahn-Teller distortion [1; 2; 3; 4], and both classical and quantum versions have been extensively studied [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] with the main focus on ground-state properties of two-dimensional models. Interest in compass models intensified with the realization that Kitaev's honeycomb model [20] with bond-directional interactions, a special case of a compass model, potentially can be realized in materials through a super exchange mechanism [21]. In particular, iridium- and ruthenium-based systems in which ligands form edge-sharing octahedra surrounding the transition metal atoms have been proposed as materials which may realize a pseudospin Kitaev model [21], with \(\alpha\)-RuCl\({}_{3}\)[22; 23; 24], a layered two-dimensional honeycomb material, as one of the most promising materials. This has given rise to the class of Kitaev materials [25; 26; 27; 28; 29; 30] that one may view as particular realizations of the broader class of quantum compass models. For Kitaev materials, field-induced spin liquid phases are of special interest due to the potential presence of anyonic excitations, and intriguing results been observed in theoretical studies [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45] and in recent experiments on \(\alpha\)-RuCl\({}_{3}\) when an out-of-plane field [46] is applied in the \([111]\) direction, as well as for an in-plane field [47; 48; 49; 50; 51]. The latter case is of special interest here since we show that for the closely related square lattice quantum compass model (QCM) a twice degenerate exact ground-state below a gap can be found under an in-plane field, inducing an extended phase with other non-trivial phases in proximity. Here, we determine the complete in-plane field phase diagram. The bulk of our results are focused on the QCM, and we first note a number of interesting properties of this model. 
The QCM, in the absence of a magnetic field, has a sub-extensive ground-state degeneracy of \(2{\times}2^{L}\)[9] and topological soliton excitations which are deconfined in one dimension [52]. Through a duality transformation [8], it has been shown that the QCM is equivalent to the Xu-Moore model, originally proposed to model interactions between \(p{+}ip\) superconductor arrays [53]. Furthermore, a duality mapping has also been established between the Xu-Moore model and the transverse-field toric code model [54; 55]. Consequently, a duality mapping exists between the zero field QCM and the transverse field toric code model, and the latter model has been studied under an in-plane field [56] as well as a transverse field [55]. Both classical and quantum QCM models have been studied at finite temperature [11; 7], in both cases finding a transition in the 2D Ising universality class to a low temperature ordered phase. One may also note that it has been shown in Ref. [57] that two decoupled copies of the QCM can be mapped to the model of interacting Majorana fermions of Ref. [57], relevant to 3D topological insulators with proximity-induced superconductivity. Dualities between each of these models demonstrate how properties of the QCM may be understood in several different contexts. The antiferromagnetic quantum compass model is \[\mathcal{H}=J\sum_{\bf r}(\hat{S}_{\bf r}^{x}\hat{S}_{{\bf r}+e_{x}}^{x}+\hat{S}_{\bf r}^{z}\hat{S}_{{\bf r}+e_{z}}^{z})-\sum_{\bf r}{\bf h}\cdot\hat{\bf S}_{\bf r}. \tag{1}\] Here, we set \(g{=}\hbar{=}\mu_{B}{=}1\). Furthermore, we parameterize the field as \({\bf h}{=}h(\cos\phi_{xz}\cos\theta_{y},\sin\phi_{xz}\cos\theta_{y},\sin\theta_{y})\) and define \(|{\bf h}|=h\) as the field strength. We use \(N=L_{x}\times L_{z}\) to denote the number of sites in the model, and we shall refer to the \(J\hat{S}^{x}\hat{S}^{x}\) coupling as an \(x\)-bond and the \(J\hat{S}^{z}\hat{S}^{z}\) coupling as a \(z\)-bond. In zero field, a unitary transformation around the \(y\)-axis on every second site relates \(J\) to \(-J\). However, since our focus is on ground-states in the presence of a field, the sign of \(J\) matters, and we exclusively focus on the antiferromagnetic (AF) model with \(J{>}0\). We set \(J{=}1\).

Figure 1: Exact ground state of the QCM under in-plane field \(h_{xz}^{*}=2JS\sqrt{2}\). Coloured bonds represent Ising interactions.

_Exact Ground and Excited States:_ The exact ground-states for the QCM can be found by the following simple argument. If we consider the Hamiltonian, Eq. (1), for general \(S\), we can write the field term in the form \(-\sum_{\bf r}(h_{x}\hat{S}_{\bf r}^{x}+h_{z}\hat{S}_{\bf r}^{z})\). Following Ref. [58; 59], we then see that with \(\phi_{xz}{=}\pi/4\), where \(h_{x}{=}h_{z}\), we can absorb the field term into the interaction term at the special field value \(h_{x}^{\star}{=}h_{z}^{\star}{=}2JS\), with \(|h_{xz}^{\star}|{=}2JS\sqrt{2}\).
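Spelling out this absorption explicitly (the completion of squares used in the next paragraph; the algebra only uses the periodic boundary conditions and is included here for clarity):
\[J\sum_{\bf r}\big[(S-\hat{S}_{\bf r}^{x})(S-\hat{S}_{{\bf r}+e_{x}}^{x})+(S-\hat{S}_{\bf r}^{z})(S-\hat{S}_{{\bf r}+e_{z}}^{z})\big]=J\sum_{\bf r}(\hat{S}_{\bf r}^{x}\hat{S}_{{\bf r}+e_{x}}^{x}+\hat{S}_{\bf r}^{z}\hat{S}_{{\bf r}+e_{z}}^{z})-2JS\sum_{\bf r}(\hat{S}_{\bf r}^{x}+\hat{S}_{\bf r}^{z})+2NJS^{2},\]
since \(\sum_{\bf r}\hat{S}_{{\bf r}+e_{\alpha}}^{\alpha}=\sum_{\bf r}\hat{S}_{\bf r}^{\alpha}\) on a torus. The right-hand side is \(\mathcal{H}\) evaluated at \(h_{x}{=}h_{z}{=}2JS\), shifted by the constant \(2NJS^{2}\).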
For an \(L_{x}{\times}L_{z}\) lattice with periodic boundary conditions in both directions and both \(L_{x}\) and \(L_{z}\)_even_, we can then write at \(h_{xz}^{\star}\): \[\mathcal{H}=\mathcal{H}_{p}-2NJS^{2},\qquad\mathcal{H}_{p}=J\sum_{\bf r}\big[(S-\hat{S}_{\bf r}^{x})(S-\hat{S}_{{\bf r}+e_{x}}^{x})+(S-\hat{S}_{\bf r}^{z})(S-\hat{S}_{{\bf r}+e_{z}}^{z})\big]. \tag{2}\] \(\mathcal{H}_{p}\) is here positive semidefinite, and it follows that if a product state \(|P\rangle\) can be found where each site is in an eigenstate \(|\alpha\rangle\) of \(\hat{S}^{\alpha}\), \(\hat{S}^{\alpha}|\alpha\rangle{=}S|\alpha\rangle\) (\(\alpha{=}x,z\)), such that \(\mathcal{H}_{p}|P\rangle{=}0\), then \(|P\rangle\) is not only an eigenstate, but a ground-state. For the QCM it is straightforward to see that if \(L_{x}\) and \(L_{z}\) are both _even_, and periodic boundary conditions (PBC) are applied, then the two simple product states with \(|x\rangle\) on one sublattice and \(|z\rangle\) on the other, as shown in Fig. 1, are eigenstates of \(\mathcal{H}_{p}\) with eigenvalue 0, and therefore degenerate ground-states with \(E_{0}{=}{-}2NJS^{2}\). This construction trivially generalizes to the case where \(J_{x}{\neq}J_{z}\), where the same ground-states appear at \((h_{x}^{\star},h_{z}^{\star}){=}2S(J_{x},J_{z})\). It is exact for any finite \(L_{x}{\times}L_{z}\) torus under PBC, but does not hold for open boundary conditions (OBC) nor when \(L_{x}\) or \(L_{z}\) are odd. It is interesting to note that the above argument is only superficially related to the remarkable extension of the Lieb-Schultz-Mattis (LSM) theorem for quantum spin chains [60; 61] to the case of an applied field [62; 63], showing that magnetization plateaus can appear, associated with a gapped state, when conserved quantities such as the total magnetization \(\sum_{j}S_{j}^{z}\) are present. In contrast, for the QCM, the magnetization is not conserved, and since we can generalize to the case \(J_{x}{\neq}J_{z}\), no special symmetry axis appears to be important. We also note that similar product states formed with \(S^{\alpha}|\alpha_{m}\rangle{=}m|\alpha_{m}\rangle\) with \(0{<}m{<}S\) will be eigenstates at the field value \(h_{x}{=}h_{z}{=}2Jm\), but not ground-states. In the following, we provide strong numerical evidence for a sizable gap at \(h_{xz}^{\star}\) and demonstrate that the two product states are the _only_ ground-states at \(h_{xz}^{\star}\) under periodic boundary conditions (PBC) with \(L_{x}\), \(L_{z}\) even. We expect that for large systems, lifting these constraints will not change the physics due to the presence of a gap, and we explore the full phase diagram using iPEPS, without imposing PBC. _Methods:_ For an in-plane field there is no sign problem and Monte Carlo methods are applicable, but we have found it advantageous to use iPEPS [64; 65; 66] directly in the thermodynamic limit for the two-dimensional lattice, to obtain high precision results for the field dependent phase diagram of the QCM at zero temperature. For details, see Ref. [67]. In addition, we use exact diagonalization of small clusters, and iDMRG [68; 69; 70; 71; 72; 73; 74; 75] on infinitely long cylinders in the \(x\)-direction, of circumference up to \(L_{z}{=}10\). Typically, we use iDMRG with a bond dimension up to \(D{=}1000\) and \(\epsilon{=}10^{-11}\).
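As an illustration of the exact-diagonalization part of this toolbox, the following minimal sketch (our own, not the authors' code) builds Eq. (1) on a small even-by-even torus with sparse matrices and checks that the lowest levels at the special field \(h_{x}{=}h_{z}{=}2JS\) sit at the exact value \(E_{0}{=}-2NJS^{2}\):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

sx = sp.csr_matrix([[0.0, 0.5], [0.5, 0.0]])   # S=1/2 spin operators
sz = sp.csr_matrix([[0.5, 0.0], [0.0, -0.5]])
one = sp.identity(2, format="csr")

def site_op(op, site, n):
    """Embed a single-site operator at position `site` in an n-site product space."""
    out = op if site == 0 else one
    for j in range(1, n):
        out = sp.kron(out, op if j == site else one, format="csr")
    return out

def qcm(Lx, Lz, J=1.0, hx=0.0, hz=0.0):
    """Eq. (1): H = J sum_r (Sx_r Sx_{r+ex} + Sz_r Sz_{r+ez}) - sum_r h.S_r, PBC."""
    n = Lx * Lz
    idx = lambda ix, iz: ix * Lz + iz
    H = sp.csr_matrix((2**n, 2**n))
    for ix in range(Lx):
        for iz in range(Lz):
            i = idx(ix, iz)
            H += J * site_op(sx, i, n) @ site_op(sx, idx((ix + 1) % Lx, iz), n)
            H += J * site_op(sz, i, n) @ site_op(sz, idx(ix, (iz + 1) % Lz), n)
            H += -hx * site_op(sx, i, n) - hz * site_op(sz, i, n)
    return H

Lx, Lz, J, S = 4, 4, 1.0, 0.5
h_star = 2 * J * S                                  # special field, hx = hz = 2JS
H = qcm(Lx, Lz, J, hx=h_star, hz=h_star)
E = sla.eigsh(H, k=3, which="SA", return_eigenvectors=False)
print(np.sort(E))   # two degenerate levels at -2*N*J*S^2 = -8, then a higher level
```

On this small torus the two lowest eigenvalues coincide at \(-2NJS^{2}\), consistent with the twofold degenerate gapped ground state discussed above.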
The locations of quantum critical points (QCPs) are first determined from the susceptibility of the ground state energy per spin \(e_{0}\) with respect to a parameter \(p\), defined as \(\chi_{p}^{e}=-\frac{\partial^{2}e_{0}}{\partial p^{2}}\). In finite systems, at a quantum critical point, \(\chi^{e}\) is known to scale as [76; 77; 78] \(\chi^{e}\sim N^{2/\nu-d-z}\), and is therefore likely to diverge at a QCP, with \(\nu\) and \(z\) the correlation and dynamical critical exponents and \(d\) the spatial dimension. Our numerical results are for the \(S{=}1/2\) QCM, and we consider \(L_{x}{\times}L_{z}{=}N\) lattices. In light of our exact solution mentioned previously, and the natural competition between bond-directional ordering of the QCM in zero field, we define the vector bond chirality \[\mathcal{X}_{\alpha}^{y}=\langle\vec{S}_{\mathbf{r}}\times\vec{S}_{\mathbf{r}+\mathbf{e}_{\alpha}}\rangle^{y},\quad\alpha=x,z \tag{3}\] along with a nematic order parameter \[\phi=\langle S_{\mathbf{r}}^{x}S_{\mathbf{r}+\mathbf{e}_{x}}^{x}-S_{\mathbf{r}}^{z}S_{\mathbf{r}+\mathbf{e}_{z}}^{z}\rangle, \tag{4}\] quantifying the degree of orthogonality and bond-directional alignment of neighboring spins, respectively. We have also found it useful to denote the vector chirality averaged over bond directions as \(\mathcal{X}^{y}=\frac{1}{2}\left(\mathcal{X}_{x}^{y}+\mathcal{X}_{z}^{y}\right)\).

Figure 2: Results from ED with PBC on a 4\(\times\)6 lattice, iDMRG with \(L_{z}{=}\)10, and iPEPS versus field strength, \(h_{xz}\), for a field in the [101] direction (\(\phi_{xz}=\pi/4\)). (a) \(\chi^{e}_{h_{xz}}\) from ED, iPEPS and iDMRG. (b) \(|\mathcal{X}^{y}|\) from ED with a small pinning field 0.005\(h_{z}\) on a single site, iPEPS, and iDMRG. (c) \(\phi\) and bond correlations from iPEPS. (d) Energy gaps as obtained from ED. Solid vertical lines indicate \(h_{xz}^{c1}{=}0.540\) and \(h_{xz}^{c2}{=}1.626\) separating the low field \(LR\), \(SVC\), and polarized (PS) states. The dotted vertical line indicates the exactly-solvable point, \(h_{xz}^{\star}=2SJ\sqrt{2}\).

_Phases Under [101] Field:_ Our iPEPS, iDMRG and ED calculations can clearly distinguish two phase transitions when varying the strength of the in-plane field along the constant angle \(\phi_{xz}{=}\frac{\pi}{4}\), as shown in Fig. 2. The high-field phase is a trivial polarized state (PS). Upon lowering the field, at the upper critical field \(h_{xz}^{c2}{=}1.626\), the PS transitions into a phase with substantial vector chirality (\(SVC\)). This can be seen in Fig. 2(b), where, at \(h_{xz}^{c2}\), \(|\mathcal{X}^{y}|\) increases, seemingly continuously, from zero in the PS, while a divergence in \(|\chi^{e}_{h_{xz}}|\) is observed. Within the \(SVC\) phase, bond-correlations of the form \(\langle S^{\alpha}S^{\alpha}\rangle\), with \(\alpha{=}(x,y,z)\), tend to zero as the state approaches the exactly solved states (shown in Fig. 1) at \(h_{xz}^{*}{=}2JS\sqrt{2}\). ED results for the gaps, in Fig. 2(d), show that the \(SVC\) phase is gapped with a twofold degenerate ground state. A second transition into a low-field region with stripe ordering occurs as the field is lowered below \(h_{xz}^{c1}{=}0.540\). Within the low field region, the line \(h_{x}{=}h_{z}\) for \(h_{xz}{<}h_{xz}^{c1}\) is a first-order critical line, terminating at \(h_{xz}^{c1}\), separating phases of \(x\)-aligned and \(z\)-aligned stripe states [67] that we denote by \(L\) and \(R\) (see Fig. 4).
As the field is lowered further to \(h_{xz}{=}0\), we find that the nematic order parameter, shown in Fig. 2(c), saturates to \(\phi=0.126\), in agreement with previous quantum Monte Carlo calculations [11]. _Phases Under [100] Field:_ Notably, the zero-field QCM has the 1D gauge-like symmetries, \[P_{i}=\prod_{j}S_{ie_{x}+je_{z}}^{x}\ \ \text{and}\ \ Q_{i}=\prod_{j}S_{je_{x}+ ie_{z}}^{z}, \tag{5}\] where the \(P_{i}\) and \(Q_{i}\) are incompatible. Arguments based on symmetry analysis imply that the \(S{=}1/2\) QCM ground state is at least 2-fold degenerate [79]. However, exact diagonalization calculations indicate that, when \(L_{x}{=}L_{z}{=}L\), \(2{\times}2^{L}{-}2\) low-energy states collapse onto the \(2\)-fold ground states exponentially fast with increasing \(L\)[9], implying an emergent sub-extensive degeneracy in the thermodynamic limit. Following Ref. [12], we label the eigenstates of the \(P_{i}\) and \(Q_{i}\), as \(|R\rangle\) and \(|L\rangle\), respectively. We have found that adiabatically evolving the \(|R\rangle\) and \(|L\rangle\) states under a small [100] field, \(h_{x}\), produces an energy splitting between the two states, with the \(|R\rangle\)-evolved state, \(|R(\vec{h})\rangle\), having lower energy than the \(|L\rangle\)-evolved state, \(|L(\vec{h})\rangle\), for \(h_{x}{>}h_{z}\). On the other hand, for a small [001] field, \(h_{z}\), it is \(|L(\vec{h})\rangle\) that has the lowest energy. Consequently, as outlined above, there is a first-order transition between the \(|L(\vec{h})\rangle\) and \(|R(\vec{h})\rangle\) states along the line \(h_{x}{=}h_{z}\)[67], reminiscent of the first-order transition studied in Ref. [12]. We find that this line of first-order transitions terminates at the critical point, \(h_{xz}^{c1}{=}0{.}540\), discussed in the previous section. We may then view \(h_{xz}^{c1}\) as a multicritical point since the \(R\), \(L\) and \(SVC\) phase all meet at this point. Furthermore, our calculations indicate that, in zero field, the \(|R\rangle\) and \(|L\rangle\) states are sub-extensively degenerate, and that these degeneracies are lifted when small finite fields are applied [67]. With a field in the [100] direction, the high field PS again undergoes a transition as the field is lowered below a critical field \(h_{x}^{c2}{=}1{.}344\). However, in this case, the ground state consists approximately of alternating linear domains of field-polarized spins and antiferromagnetically ordered spins perpendicular to the field. The vector chirality is therefore non-zero when evaluated on bonds connected to polarized spins, as shown in Fig.3(b), but only across bonds in the [100] direction. This phase has an interesting interpretation; columns of \(x\)-polarized spins lowering the energy by aligning with the field, while columns of \(z\)-oriented spins form strongly coupled antiferromagnetic Ising chains. Due to the nature of the QCM coupling, the two kinds of columns are not coupled. This suggests that this state is effectively one-dimensional in nature. For this reason, we refer to this phase as the \(z\)-chain (\(Z\)) phase, since the columns of spins polarized along \(x\)-direction are essentially inert, although their presence effectively eliminates the coupling between the \(z\)-chains. A sketch of the spin alignments in the \(Z\) phase is shown in Fig. 4. As the field is lowered further, a second transition from the \(Z\) phase to the \(SVC\) phase occurs at \(h_{x}^{c1}{=}0{.}935\). 
Finally, as \(h_{x}\to 0\), the PEPS approaches the \(|R\rangle\) state. With \(h_{z}{=}0\), the transition from the \(SVC\) phase to the \(R\) phase is not directly visible in \(\chi^{e}_{h_{x}}\), but the dashed line in Fig. 3 indicates the limiting value of the transition between \(R\) and \(SVC\) phases as \(h_{z}\to 0\), at \(h_{x}=0.410\). By symmetry of the model, for a field along the \(z\)-direction an analogous phase, \(X\), appears along the \(z\) axis. The \(X\) phase is dominated by rows of spins coupled by antiferromagnetic \(x\)-bonds.

Figure 3: (a) \(\chi^{e}_{h_{x}}\) and (b) \(|\mathcal{X}_{x}^{y}|\) and \(|\mathcal{X}_{z}^{y}|\) as obtained from iPEPS calculations versus field strength, \(h_{x}\), for a field parallel to [100] (\(\phi_{xz}=0\)). Solid vertical lines indicate \(h_{x}^{c1}{=}0.935\) and \(h_{x}^{c2}{=}1.344\) separating the \(SVC\), \(Z\), and polarized (PS) states. A dashed line at \(h_{x}{=}0.410\) indicates the limiting value of the transition between \(R\) and \(SVC\) phases as \(h_{z}\to 0\).

_Phase Diagram:_ We have also analyzed the complete phase diagram for a range of field values \(h_{x},h_{z}>0\). The results of our calculations produce the phase diagram shown in Fig. 4. The most apparent feature of the phase diagram is the large phase surrounding the point \(h_{xz}^{*}\), where the product states \(|P\rangle\) from Fig. 1 are exact ground-states. As shown in panel (c) of Fig. 4, the vector chirality, \(|\mathcal{X}^{y}|\), is found to be substantial throughout this phase, reduced in the \(Z\) and \(X\) phases, and approaching zero in the low field \(L\),\(R\) regime. The nematic order parameter, \(\phi\), is close to zero in the intermediate-field regime for field angles near \(\pi/4\), reflecting a lack of spin alignment along bond directions. In the low-field regime, bond-alignment is found to dominate, with \(|\mathcal{X}^{y}|\) taking a value near zero. Remarkably, we find that the transition between the \(R\) and \(SVC\) phase is almost independent of \(h_{z}\). Likewise, we find the transition between the \(L\) and \(SVC\) phase to be independent of \(h_{x}\). The combined \(R\) and \(L\) phases therefore form a square in the lower left part of the phase diagram, similar to what is seen for the toric code [56]. Even though the field is not applied along an easy axis, it is natural to view the \(SVC\) phase as a spin-flopped phase [80], and therefore to expect all transitions between the \(R\), \(L\), \(X\) and \(Z\) phases to be first order. As it turns out, all our calculations are consistent with this [67]. On the other hand, from our calculations, the transition to the PS phase appears to be continuous. _Discussion:_ For the AF QCM we have shown that two exact ground-states exist at the special field value \(h_{x}^{*}\)=\(h_{z}^{*}\)=\(2JS\). This special point has a substantial (staggered) vector chirality, \(|\mathcal{X}^{y}|\), and a sizable gap, inducing the \(SVC\) phase that dominates a large part of the phase diagram. Although our numerical results clearly indicate a sizable gap at \(h_{xz}^{*}\) within the \(SVC\) phase, establishing a rigorous proof of this gap would be of considerable interest. Our detailed study of the model under an in-plane magnetic field shows that, aside from the high field PS state, there are five distinct phases in the low to intermediate field regime: the \(SVC\), \(Z\), \(X\), \(L\) and \(R\) phases. Excitations in these phases could be non-trivial.
For instance, in zero field, one-dimensional solitonic excitations [52] have been noted, and it is possible that they remain deconfined in the \(L\) and \(R\) phases, as has been observed for the toric code under an in-plane field [56] and the X-cube fracton model [81]. Perhaps the most surprising thing about the phase diagram is the fact that a transition between the \(SVC\) phase and the PS phase exists. After all, since the spins are already partly aligned with the field at \(h_{xz}^{*}\), one might expect that the PS state could be reached without encountering a phase transition. The QCM at \(h^{*}\) would then be at the transition to, or within, the PS phase. But, as we have shown here, \(h_{xz}^{*}\) is in the distinct \(SVC\) phase. In contrast, if we consider similar exact product states in Kitaev's honeycomb model (KHCM) [20] with antiferromagnetic couplings, \(K_{x},K_{y},K_{z}\), then it is possible to again write the Hamiltonian in the same form as Eq. (2), in terms of a \(\mathcal{H}_{p}\), at the field \((h_{x}^{\star},h_{y}^{\star},h_{z}^{\star}){=}S(K_{x},K_{y},K_{z})\) (note the factor of 2 difference with respect to the QCM). However, for the KHCM it is not possible to find an assignment of the \(|x\rangle\), \(|y\rangle\) and \(|z\rangle\) states to the lattice which is an eigenstate of \(\mathcal{H}_{p}\) with eigenvalue 0. One might still expect the KHCM at the corresponding field value of \(|h^{\star}|{=}KS\sqrt{3}\) for the isotropic model to be in the same phase as the product state. But, contrary to the QCM, this turns out _not_ to be the case, since for that value of \(|h^{\star}|\) the KHCM is known to be in the polarized phase [82; 83] for a field in the [111] direction. Nevertheless, it seems possible that similar phases can be found in other materials with predominantly directionally dependent Ising interactions.

Figure 4: (a) Phase diagram for the quantum compass model under an in-plane field. The phases are labelled as: \(z\)-oriented stripe (R), \(x\)-oriented stripe (L), staggered vector chiral (\(SVC\)), \(z\)-chain (\(Z\)), and \(x\)-chain (\(X\)). We show iPEPS results (blue squares) and iDMRG (colored diamonds) for \(L_{z}\)=6,8,10. Solid black lines are contours of constant field strength. (b) \(|\phi|\) and (c) \(|\mathcal{X}^{y}|\) as obtained from iPEPS for an in-plane field. (d) Dominant ordering of states in the labelled phases; the PS state is meant to show alignment in the field direction.

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through Discovery Grant No. RGPIN-2017-05759. We thank H.-Y. Kee for fruitful discussions. This research was enabled in part by support provided by SHARCNET (sharcnet.ca) and the Digital Research Alliance of Canada (alliancecan.ca).
2307.04633
Realizing late-time cosmology in the context of Dynamical Stability Approach
We examine the scenario of non-minimally coupled relativistic fluid and $k$-essence scalar field in a flat Friedmann-Lemaitre-Robertson-Walker universe. By adding a non-minimal coupling term at the Lagrangian level, we study the variation of the Lagrangian with respect to the independent variables, which produces the modified scalar field and Friedmann equations. Using a dynamical stability approach for different types of interaction models with two types of scalar field potential, we explore this coupled framework. Through a detailed analysis, we conclude that our models are able to produce stable late-time cosmic acceleration.
Anirban Chatterjee, Saddam Hussain, Kaushik Bhattacharya
2023-07-04T18:37:18Z
http://arxiv.org/abs/2307.04633v1
# Realizing late-time cosmology in the context of Dynamical Stability Approach ###### Abstract We examine the scenario of non-minimally coupled relativistic fluid and \(k\)-essence scalar field in a flat Friedmann-Lemaitre-Robertson-Walker universe. By adding a non-minimal coupling term at the Lagrangian level, we study the variation of the Lagrangian with respect to the independent variables, which produces the modified scalar field and Friedmann equations. Using a dynamical stability approach for different types of interaction models with two types of scalar field potential, we explore this coupled framework. Through a detailed analysis, we conclude that our models are able to produce stable late-time cosmic acceleration.

## 1 Introduction

The standard model of cosmology (the \(\Lambda\)-CDM model) [1] mainly suffers from two drawbacks: the first is the fine-tuning problem and the second is the cosmic-coincidence problem. In this standard model of cosmology, \(\Lambda\) represents the cosmological constant and CDM denotes cold dark matter. Another important downside of the \(\Lambda\)-CDM model from the observational perspective is the discrepancy between the locally measured value of the Hubble constant and the value predicted from the Planck experiment using the \(\Lambda\)-CDM model [2]. These fundamental discrepancies motivate us to study different kinds of cosmological models based on non-minimally coupled field-fluid sectors [3],[4],[5]. Based on the above considerations, we build a theoretical framework for a coupled field-fluid sector, where the field sector is made of a non-canonical scalar field (the \(k\)-essence sector [6],[7]) and the fluid sector is composed of pressureless dust. The non-minimal coupling term is introduced at the Lagrangian level. We employ the variational approach [8] with respect to the independent variables, which produces the modified \(k\)-essence scalar field equations and the Friedmann equations. We then analyze this coupled field-fluid framework explicitly using the dynamical system technique [9], considering two forms of the scalar field potential, _viz._ the inverse power-law type [3] and the constant type [4]. After examining these scenarios, we find that both models can produce accelerating attractor solutions and satisfy the adiabatic sound speed condition.

## 2 Theoretical Framework

The total action for this non-minimally coupled field-fluid sector [3] can be written as \[S = \int_{\Omega}d^{4}x\left[\sqrt{-g}\frac{R}{2\kappa^{2}}-\sqrt{-g}\rho(n,s)+J^{\mu}(\varphi_{,\mu}+s\theta_{,\mu}+\beta_{A}\alpha^{A}_{,\mu})-\sqrt{-g}\mathcal{L}(\phi,X)\right.\] \[\left.-\sqrt{-g}f(n,s,\phi,X)\right],\qquad(\mbox{Here, }\kappa^{2}=8\pi G)\] The first term corresponds to the action's gravitational part, and the second and third terms represent the action related to the relativistic fluid sector. The fourth term is the action for the \(k\)-essence scalar field. Finally, the last term describes the action for the non-minimal coupling. The non-minimal coupling term \(f(n,s,\phi,X)\) depends on variables of both sectors, the field \((\phi,X)\) and the fluid \((n,s)\). Varying the grand action with respect to \(g_{\mu\nu}\), we get the total energy-momentum tensor for this coupled system, \(T^{\rm tot.}_{\mu\nu}=T^{(\phi)}_{\mu\nu}+T^{(M)}_{\mu\nu}+T^{\rm(int)}_{\mu\nu}\). The total energy-momentum tensor is conserved, but the individual components are not.
Variation of the above action with respect to the independent variables produces the modified field equation \(\mathcal{L}_{,\phi}+\nabla_{\mu}(\mathcal{L}_{,X}\nabla^{\mu}\phi)+f_{,\phi}+\nabla_{\mu}(f_{,X}\nabla^{\mu}\phi)=0\). In the context of a flat FLRW metric (\(ds^{2}=-dt^{2}+a(t)^{2}d{\bf x}^{2}\)), this equation reduces, for a scalar field Lagrangian depending on both the potential and kinetic terms [3], to \([\mathcal{L}_{,\phi}+f_{,\phi}]-3H\dot{\phi}\,[\mathcal{L}_{,X}+f_{,X}]+\frac{\partial}{\partial X}(P_{\rm int}+f)(3H\dot{\phi})-\ddot{\phi}\,[(\mathcal{L}_{,X}+f_{,X})+2X(\mathcal{L}_{,XX}+f_{,XX})]-\dot{\phi}^{2}(\mathcal{L}_{,\phi X}+f_{,\phi X})=0\), and, for a purely kinetic scalar field [4], to \(-3H\dot{\phi}\,[\mathcal{L}_{,X}+f_{,X}]+\frac{\partial}{\partial X}(P_{\rm int}+f)(3H\dot{\phi})-\ddot{\phi}\,[(\mathcal{L}_{,X}+f_{,X})+2X(\mathcal{L}_{,XX}+f_{,XX})]=0\).

## 3 Results & Discussion

Utilizing a set of dimensionless variables, we recast the field, fluid, and interaction sectors in terms of them. The Friedmann equations are also rewritten in terms of these variables and act as constraint equations of the dynamical system. The total number of independent variables determines the dimension of the phase space, and depending on this dimension we divide our analysis into two cases.

* **I. Algebraic coupling with arbitrary scalar field potential:** To study this case, we choose the form of interaction [3] \(f=\rho\alpha\left(\dfrac{\phi}{\kappa}\right)^{m}\beta X^{n}\) (\(\alpha,\beta,m,n\) are constants). Using the linear stability approach, we obtain a total of eight critical points from the phase-space analysis, of which two sets are stable and two other sets are of saddle type. The evolutionary dynamics of this coupled system suggest that a late-time stable accelerating phase can be achieved through this non-minimal coupling. A transfer of energy from field to fluid and finally from fluid to field can also be observed. The total EOS parameter of this coupled sector saturates near \(-1\) in the late-time era. At the present epoch, the energy densities of dark matter and dark energy are of the same order of magnitude, which is also observed here.

Figure 1: Phase space and evolution plots for Case-I (for details see [3]).

* **II. Algebraic coupling with constant scalar field potential:** The form of interaction for this case [4] is chosen as \(f=gV_{0}\rho^{a}X^{\beta}M^{-4q}\) (where \(q=-1\) and \(g,V_{0},\beta\) are constants). Due to the absence of the potential term, the dimension of the phase space is reduced to two. Utilizing the phase-space analysis of the dynamical system, we find one stable critical point for this type of system. The phase space is constrained by the modified Friedmann equation, the condition for an accelerating universe, and the sound speed condition. The evolution plots suggest that a late-time stable accelerating phase can be achieved, and an energy transfer from the field to the fluid sector is also observed within this framework of non-minimal coupling. The total EOS parameter of this coupled sector saturates near \(-1\) in the late-time era.
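The linear stability analysis quoted in both cases follows a standard recipe: locate the critical points of the autonomous system of dimensionless variables and classify them through the eigenvalues of the Jacobian. The sketch below illustrates that recipe only; the right-hand side is a schematic placeholder, not the actual field-fluid equations derived in [3] or [4].

```python
import numpy as np
from scipy.optimize import fsolve

def f(v):
    # Placeholder 2-D autonomous system x' = f(x); the true dimensionless
    # equations of the coupled field-fluid models are given in [3] and [4].
    x, y = v
    return np.array([x * (x**2 - 1.0) + 0.5 * x * y,
                     y * (y - 1.0) - x**2 * y])

def jacobian(fun, v, eps=1e-6):
    """Numerical Jacobian by central differences."""
    n = len(v)
    J = np.zeros((n, n))
    for j in range(n):
        dv = np.zeros(n)
        dv[j] = eps
        J[:, j] = (fun(v + dv) - fun(v - dv)) / (2.0 * eps)
    return J

# Scan seeds for critical points, then classify each by its Jacobian spectrum.
found = []
for seed_x in (-1.5, -0.5, 0.0, 0.5, 1.5):
    for seed_y in (-0.5, 0.5, 1.5):
        p, _, ok, _ = fsolve(f, (seed_x, seed_y), full_output=True)
        if ok == 1 and not any(np.allclose(p, q, atol=1e-6) for q in found):
            found.append(p)
            eig = np.linalg.eigvals(jacobian(f, p))
            kind = ("stable" if np.all(eig.real < 0)
                    else "unstable" if np.all(eig.real > 0) else "saddle")
            print(np.round(p, 3), np.round(eig, 3), kind)
```

A critical point whose Jacobian eigenvalues all have negative real parts is an attractor; in the models above, the stable late-time accelerating solutions correspond to such points.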
2302.01195
Operator splitting based dynamic iteration for linear infinite-dimensional port-Hamiltonian systems
A dynamic iteration scheme for linear infinite-dimensional port-Hamiltonian systems is proposed. The dynamic iteration is monotone in the sense that the error is decreasing, it does not require any stability condition and is in particular applicable to port-Hamiltonian formulations arising from domain decompositions.
Bálint Farkas, Birgit Jacob, Timo Reis, Merlin Schmitz
2023-02-02T16:20:41Z
http://arxiv.org/abs/2302.01195v1
# Operator splitting based dynamic iteration ###### Abstract A dynamic iteration scheme for linear infinite-dimensional port-Hamiltonian systems is proposed. The dynamic iteration is monotone in the sense that the error is decreasing, it does not require any stability condition and is in particular applicable to port-Hamiltonian formulations arising from domain decompositions. **Keywords:** operator splitting, dynamic iteration, system nodes, infinite-dimensional linear systems **MSC Classification:** 47H05, 35A35, 37L65

## 1 Introduction

Operator splitting methods are widely used to reduce the (numerical) solution of a complex problem to the iterative solution of subproblems into which the original problem is split. How this splitting arises can be based on various considerations, coming from the governing physical laws, from the geometry of the domain over which a certain partial differential equation is considered, from the structure of the problem, from mathematical reasons, or from a combination of these. See, for instance, [28], [29], [14], [23, Ch. IV], [17], [33], [24], [18], [13], [21], [19], [20] for more information and a general overview of splitting methods in various situations. _Operator splitting_ is particularly suitable for problems composed of subsystems which are coupled in a particular way, although such a coupling may not be visible immediately. For example, boundary-coupled systems or problems with dynamic boundary conditions, see, e.g., [8], [9], as well as delay equations have been successfully treated via splitting methods, see, e.g., [5], [6] or [4] for an operator splitting approach in the dissipative situation. It is therefore interesting and important to study operator splitting methods for a class of systems that is closed under certain types of couplings. In this paper we are interested in infinite dimensional port-Hamiltonian systems of a specific structure. The splitting method studied in this paper is originally due to Peaceman and Rachford, see [30], who introduced it in the setting of linear operators. The Peaceman-Rachford splitting was then extended to maximal monotone operators on Banach spaces by Lions and Mercier [27]. This framework is indeed more suitable for the purposes of this paper, as the occurring operators here will in general only be affine linear. For error analysis of Peaceman-Rachford type splittings and variants we refer, e.g., to [22]. Operator splitting based dynamic iteration schemes for finite dimensional port-Hamiltonian systems were studied in [16]. Here we make the first steps to extend the study to infinite dimensional port-Hamiltonian systems. The Peaceman-Rachford-Lions-Mercier type splitting algorithm will result in a convergent approximation, under suitable conditions, but most importantly the approximation error converges monotonically to \(0\), a feature that is connected with the port-Hamiltonian structure of the problem. This paper is structured as follows. In the next section we give an explicit description of the dynamic iteration scheme and present our main theorems. The proof of these theorems can be found in Section 5. In Section 3 we give some background information on the theory of system nodes that will be used in the proof. Further, Section 4 is devoted to the proof of maximal monotonicity of one of the splitting operators. We end in Section 6.1 with two examples: a coupled wave-heat system and a domain decomposition for the wave equation.
## 2 Description of the dynamic iteration scheme We consider \(n\in\mathbb{N}\) linear (infinite-dimensional) port-Hamiltonian systems \[\begin{bmatrix}\dot{x}_{i}(t)\\ \mathfrak{y}_{i}(t)\\ y_{i}(t)\end{bmatrix} =S_{i}\begin{bmatrix}x_{i}(t)\\ \mathfrak{u}_{i}(t)\\ u_{i}(t)\end{bmatrix},\qquad t\geq 0,\quad i=1,\ldots,n, \tag{1}\] \[x_{i}(0) =x_{i0},\qquad i=1,\ldots,n,\] where \(x_{i}(t)\in X_{i}\) denotes the state, \(\mathfrak{u}_{i}(t)\in\mathfrak{U}_{i}\) and \(u_{i}(t)\in U_{i}\) denote inputs and \(\mathfrak{y}_{i}(t)\in\mathfrak{U}_{i}\) and \(y_{i}(t)\in U_{i}\) denote outputs of system \(i\) at time \(t\). Here \(\mathfrak{U}_{i}\), \(U_{i}\) and \(X_{i}\) are Hilbert spaces. Denoting the Cartesian product of the spaces \(Z_{1},\ldots,Z_{n}\) by \(\left[\begin{smallmatrix}Z_{1}\\ \vdots\\ Z_{n}\end{smallmatrix}\right]\), we define \[X\coloneqq\left[\begin{smallmatrix}X_{1}\\ \vdots\\ X_{n}\end{smallmatrix}\right],\quad\mathfrak{U}\coloneqq\left[\begin{smallmatrix} \mathfrak{U}_{1}\\ \vdots\\ \mathfrak{U}_{n}\end{smallmatrix}\right]\text{ and }\ U\coloneqq\left[\begin{smallmatrix}U_{1} \\ \vdots\\ U_{n}\end{smallmatrix}\right].\] We assume that the linear operators \(S_{i}\), \(i=1,\ldots,n\), are system nodes on the Hilbert space triples \((\left[\begin{smallmatrix}\mathfrak{U}_{i}\\ U_{i}\end{smallmatrix}\right],X_{i},\left[\begin{smallmatrix}\mathfrak{U}_{i} \\ U_{i}\end{smallmatrix}\right])\). We recall the definition of a system node in Section 3. In particular, this class covers well-posed linear systems [32], boundary control and observation systems [35] and, of course, linear infinite-dimensional systems with bounded control and observation [10]. Further, we assume that the systems are coupled via \[\begin{bmatrix}y_{1}(t)\\ \vdots\\ y_{n}(t)\end{bmatrix}=N_{c}\begin{bmatrix}u_{1}(t)\\ \vdots\\ u_{n}(t)\end{bmatrix},\qquad t\geq 0, \tag{2}\] where \(N_{c}\) is a bounded linear operator from \(U\) to \(U\) satisfying \(\mathrm{Re}\langle y,N_{c}y\rangle\leq 0\) for every \(y\in U\). Hence, the fraktur typeface indicates that the functions can be interpreted as external inputs and outputs that build inputs and outputs of the closed system. As the systems (1) are port-Hamiltonian and the coupling operator \(N_{c}\) satisfies \(\mathrm{Re}\langle y,N_{c}y\rangle\leq 0\) for every \(y\in U\), the interconnected system is again a port-Hamiltonian system. We note, that the systems (1) are port-Hamiltonian if and only if the corresponding system nodes \(S_{i}\) are impedance passive. Further, as stated in [31, Thm. 4.2] a system node is impedance passive if and only if it is (maximal) dissipative. The aim of this article is to develop for given inputs \(\mathfrak{u}_{1},\ldots,\mathfrak{u}_{n}\) and given initial conditions \(x_{10},\ldots,x_{n0}\) for the closed loop system (1)-(2) a dynamic iteration scheme which allows to solve the linear port-Hamiltonian systems \(S_{i}\) separately and also in parallel. 
Every system node \(S_{i}\) on \((\left[\begin{smallmatrix}\mathfrak{U}_{i}\\ U_{i}\end{smallmatrix}\right],X_{i},\left[\begin{smallmatrix}\mathfrak{U}_{i}\\ U_{i}\end{smallmatrix}\right])\) can be written as \[S_{i}=\left[\begin{smallmatrix}A_{i}\&B_{i}\\ [C_{i}\&D_{i}]_{1}\\ [C_{i}\&D_{i}]_{2}\end{smallmatrix}\right].\] Here \(A_{i}\&B_{i}\coloneqq P_{X_{i}}S_{i}\), \([C_{i}\&D_{i}]_{1}\coloneqq P_{\mathfrak{U}_{i}}S_{i}\) and \([C_{i}\&D_{i}]_{2}\coloneqq P_{U_{i}}S_{i}\), where \(P_{X_{i}}\), \(P_{\mathfrak{U}_{i}}\) and \(P_{U_{i}}\) are the canonical projections onto \(X_{i}\), \(\mathfrak{U}_{i}\) and \(U_{i}\) in \(X_{i}\times\mathfrak{U}_{i}\times U_{i}\). Let \(S\) be the system node on \((\left[\begin{smallmatrix}\mathfrak{U}\\ U\end{smallmatrix}\right],X,\left[\begin{smallmatrix}\mathfrak{U}\\ U\end{smallmatrix}\right])\) with the \(S_{i}\) "on the diagonal", i.e. the operator \(S\) is of the form \[S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right].\] Our assumptions on the system read as follows. **Assumption 2.1** (on the node): _Let \(X\), \(\mathfrak{U}\), \(U\) be Hilbert spaces. The linear operator \(S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]:\,\mathrm{dom}(S)\subset\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\rightarrow\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\) (with \(A\&B=P_{X}S\), \(\left[\begin{smallmatrix}[C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]=P_{\left[\begin{smallmatrix}\mathfrak{U}\\ U\end{smallmatrix}\right]}S\)) has the following properties:_ 1. \(\left[\begin{smallmatrix}A\&B\\ -[C\&D]_{1}\\ -[C\&D]_{2}\end{smallmatrix}\right]\) _is dissipative,_ 2. \(S\) _is closed. Further,_ \(A\&B\) _is closed with_ \(\mathrm{dom}(A\&B)=\mathrm{dom}(S)\)_._ 3. _For all_ \(\left[\begin{smallmatrix}\mathfrak{u}\\ u\end{smallmatrix}\right]\in\left[\begin{smallmatrix}\mathfrak{U}\\ U\end{smallmatrix}\right]\)_, there exists some_ \(x\in X\) _with_ \(\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\in\mathrm{dom}(A\&B)\)_._ 4. _The main operator_ \(A\colon\,\mathrm{dom}(A)\subset X\to X\) _with_ \[\mathrm{dom}(A)\coloneqq\left\{x\in X\,|\,(x,0,0)\in\mathrm{dom}(S)\,\right\}\] _and_ \(Ax\coloneqq P_{X}S\left(\begin{smallmatrix}x\\ 0\\ 0\end{smallmatrix}\right)\) _for all_ \(x\in\mathrm{dom}(A)\) _fulfills_ \[\rho(A)\cap\mathbb{C}_{+}\neq\emptyset.\] _Here_ \(\rho(A)\) _denotes the resolvent set of the linear operator_ \(A\)_, and_ \(\mathbb{C}_{+}\coloneqq\left\{\lambda\in\mathbb{C}\,|\,\operatorname{Re}\lambda>0\,\right\}\)_._ We abbreviate the solution space \(H\coloneqq\mathrm{L}^{2}([0,T];\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right])\).
For fixed \(T>0\), a function \(\mathtt{u}\colon[0,T]\rightarrow\mathfrak{U}\) and \(x_{0}\in X\) we consider the operator \[M\colon\,\mathrm{dom}(M)\subset H\to H\] (3a) with \[\mathrm{dom}(M)=\left\{[\begin{smallmatrix}x\\ \mathtt{u}\end{smallmatrix}]\in H\,\Big{|}\,[\begin{smallmatrix}\dot{x}\\ 0\end{smallmatrix}]-\left[\begin{smallmatrix}A\&B\end{smallmatrix}\right]\, \Big{[}\begin{smallmatrix}x\\ \mathtt{u}\\ \mathtt{u}\end{smallmatrix}\Big{]}\in H\,\,\mathrm{and}\,\,x(0)=x_{0}\,\right\}, \tag{3b}\] \[M\begin{bmatrix}x\\ u\end{bmatrix}=\begin{bmatrix}\dot{x}-A\&B\left[\begin{smallmatrix}x\\ \mathtt{u}\\ \mathtt{u}\\ \mathtt{0}\end{smallmatrix}\right]\cr[C\&D]_{2}\left[\begin{smallmatrix}x\\ \mathtt{u}\\ \mathtt{u}\end{smallmatrix}\right]\end{bmatrix}. \tag{3c}\] The precise meaning of \(\dot{x}\) will be clarified in Section 3, when we discuss system nodes, and solution trajectories, see also Remark 4.7. Note that \(M\) is not a linear operator unless \(x_{0}=0\) and \(\mathtt{u}=0\), since it is in general not defined on a vector space. Further, we define \(N\in\mathcal{L}(\mathrm{L}^{2}([0,T];\left[\begin{smallmatrix}X\\ \overline{U}\end{smallmatrix}\right]))\) by \[N\begin{bmatrix}x\\ u\end{bmatrix}\coloneqq\begin{bmatrix}0\\ -N_{c}u\end{bmatrix}. \tag{4}\] We assume that the coupling is such that \(N\) is a maximal monotone operator. Thus for \(\lambda>0\) the operator \((\mathrm{I}-\lambda N)(\mathrm{I}+\lambda N)^{-1}\) is a contraction, see Section 4. _Remark 2.2_: If we consider two systems \((n=2)\) the standard negative feedback \(u_{1}=y_{2}\), \(u_{2}=-y_{1}\) yields a coupling matrix \(N_{c}=\left[\begin{smallmatrix}0&-\mathrm{I}\\ \mathrm{I}&0\end{smallmatrix}\right]\). The system arising from the coupling of \(S_{i}\), \(i=1,\ldots,n\) via \(N_{c}\) (without the output equation for \(\mathfrak{y}\)) is equivalent to the equation \[M\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]+N\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]=0 \tag{5}\] (see Remark 4.7), which is equivalent to \[\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]=(\mathrm{I}+\lambda M)^{-1}(\mathrm{I}-\lambda N)( \mathrm{I}+\lambda N)^{-1}(\mathrm{I}-\lambda M)\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right],\] where \(\lambda>0\) is arbitrary, see Section 4 for the discussion of the inverse mappings appearing here. We consider an algorithm inspired by ideas of Lions and Mercier as in [27]: \[\left[\begin{smallmatrix}x_{k+1}\\ u_{k+1}\end{smallmatrix}\right]=(\mathrm{I}+\lambda M)^{-1}(\mathrm{I}-\lambda N )(\mathrm{I}+\lambda N)^{-1}(\mathrm{I}-\lambda M)[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}], \tag{6}\] with \([\begin{smallmatrix}x_{0}\\ u_{0}\end{smallmatrix}]\in\mathrm{dom}(M)\) arbitrary. Now we can formulate our second assumption: **Assumption 2.3** (Solution): _For fixed \(x_{0}\in X\), \(T>0\) and \(\mathfrak{u}\in\mathrm{L}^{2}([0,T],\mathfrak{U})\) there exists a solution \([\begin{smallmatrix}x\\ u\end{smallmatrix}]\) to the equation (5) on \([0,T]\)._ The main results of this paper are the following: **Theorem 2.4**: _Let Assumptions 2.1, 2.3 be fulfilled. For the operators \(M,N\) as defined in (3) and (4) let the sequence \([\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]_{k}\) be defined by (6). Then:_ 1. 
_For the sequence_ \([\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}]_{k}\) _defined by_ \[[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}]\coloneqq(\mathrm{I}+\lambda M)[\begin{smallmatrix}x_{k} \\ u_{k}\end{smallmatrix}],\quad k\in\mathbb{N},\] _and the function_ \([\begin{smallmatrix}w\\ z\end{smallmatrix}]\coloneqq(\mathrm{I}+\lambda M)[\begin{smallmatrix}x\\ u\end{smallmatrix}]\)_, the sequence_ \((\|[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}]-[\begin{smallmatrix}w\\ z\end{smallmatrix}]\|_{2})_{k}\) _is monotonically decreasing and_ \[\|[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]-[\begin{smallmatrix}x\\ u\end{smallmatrix}]\|_{2}\leq\|[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}]-[\begin{smallmatrix}w\\ z\end{smallmatrix}]\|_{2},\quad\forall k\in\mathbb{N}.\] 2. \((x_{k})_{k}\) _converges to_ \(x\) _in_ \(\mathrm{L}^{2}([0,T];X)\)_._ 3. \((x_{k})_{k}\) _converges uniformly to_ \(x\) _on_ \([0,T]\) **Theorem 2.5**: _Let additionally to the assumptions of Theorem 2.4 the system be partially strictly output passive with regard to the external output, i.e. there is \(\varepsilon>0\) such that for all \(\left[\begin{smallmatrix}x\\ u\\ u\end{smallmatrix}\right]\in\operatorname{dom}(S)\) the inequality_ \[\operatorname{(PSOP)}\qquad\operatorname{Re}\left\langle\left[\begin{smallmatrix} A\&B\\ -[C\&D]_{1}\\ -[C\&D]_{2}\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ u\\ u\end{smallmatrix}\right],\left[\begin{smallmatrix}x\\ x\\ u\end{smallmatrix}\right]\right\rangle_{\begin{subarray}{c}X\\ u\\ u\end{smallmatrix}}\leq-\varepsilon\left\|[C\&D]_{1}\left[\begin{smallmatrix} x\\ u\\ u\end{smallmatrix}\right]\right\|_{\mathtt{i}\mathtt{I}}^{2}\] _holds. Then, if \(T\), \(x_{0}\), \(\mathtt{u}\), \(x\), \(u\) are given as in Theorem 2.4, the corresponding external output also converges to \(\mathfrak{y}\coloneqq[C\&D]_{1}\left[\begin{smallmatrix}x\\ u\\ u\end{smallmatrix}\right]\), i.e._ \[\left\|[C\&D]_{1}\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]-\mathfrak{y}\right\|_{2,\mathtt{i}\mathtt{I}} \longrightarrow 0.\] **Remark 2.6**: Using the same argument, we obtain convergence of the internal outputs (i.e., \(\lim_{k\to\infty}\|y-y_{k}\|_{2,U}=0\)) under the assumption of partial strict output passivity with regard to the internal output. Then, using (2) and the boundedness of \(N_{c}\) we easily see the convergence of the internal inputs \(u_{k}\). Since we can assume invertibility of \(N_{c}\) without loss of generality, the same holds if the system is partially strictly input passive with regard to the internal input. **Remark 2.7**: 1. The block structure of \(S\) allows a parallelized computation of the subsystems \(S_{i}\). 2. In the splitting algorithm (6) for the sum of two operators such as (5) one can interpret the variable \(\lambda\) as a time step. Therefore this algorithm represents a combination of steps for the first operator alternating ones for the second. ## 3 Background on system nodes Let \(X\), \(U\) and \(Y\) be Hilbert spaces and denote the canonical projections onto \(X\) and \(Y\) in \(\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\) respectively by \(P_{X}\) and \(P_{Y}\). Let \[S\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ Y\end{smallmatrix}\right]\] be a linear operator. 
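To see the mechanics of the iteration (6) and the monotonicity statement of Theorem 2.4 (i) in the simplest possible setting, the following is a finite-dimensional sketch with matrices in place of the operators \(M\) and \(N\) of (3)-(4); it is an illustration of the Peaceman-Rachford-Lions-Mercier update only, not of the infinite-dimensional port-Hamiltonian setting itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 8, 0.7
I = np.eye(n)

def monotone(n):
    """Random matrix whose symmetric part is positive definite (strictly monotone)."""
    G = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))
    return G @ G.T / n + 0.5 * I + (K - K.T) / 2.0

A, B = monotone(n), monotone(n)
z_true = rng.standard_normal(n)
c = (A + B) @ z_true                 # then M z := A z - c and N z := B z satisfy
                                     # M z_true + N z_true = 0, the analogue of (5)
JM = np.linalg.inv(I + lam * A)      # resolvent of M: (I + lam*M)^{-1} v = JM (v + lam*c)
JN = np.linalg.inv(I + lam * B)      # resolvent of N

z = np.zeros(n)
w_true = z_true + lam * (A @ z_true - c)
err_prev = np.inf
for k in range(200):
    a = z - lam * (A @ z - c)        # (I - lam*M) z_k
    b = JN @ a                       # (I + lam*N)^{-1} ...
    d = b - lam * (B @ b)            # (I - lam*N) ...
    z = JM @ (d + lam * c)           # z_{k+1} = (I + lam*M)^{-1} ...
    err = np.linalg.norm(z + lam * (A @ z - c) - w_true)   # ||w_{k+1} - w||
    assert err <= err_prev + 1e-12   # monotone decrease, as in Theorem 2.4 (i)
    err_prev = err
print("final error:", np.linalg.norm(z - z_true))
```

Here \(w_{k}=(\mathrm{I}+\lambda M)z_{k}\) plays the role of the auxiliary sequence in Theorem 2.4 (i): its error decreases in every step, and the error in \(z_{k}\) is bounded by it, while \(\lambda>0\) can be chosen freely.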
Its corresponding _main operator_ is given by \(A\colon\operatorname{dom}(A)\subset X\to X\) with \(\operatorname{dom}(A)\coloneqq\left\{x\in X\,|\left[\begin{smallmatrix}x\\ 0\end{smallmatrix}\right]\in\operatorname{dom}(S)\,\right\}\) and \(Ax\coloneqq P_{X}S[\begin{smallmatrix}x\\ 0\end{smallmatrix}]\) for all \(x\in\operatorname{dom}(A)\). One often sets \[A\&B\coloneqq P_{X}S\qquad\text{and}\qquad C\&D\coloneqq P_{Y}S\] so \(S\) can be written as \[S=\left[\begin{matrix}A\&B\\ C\&D\end{matrix}\right].\] The concept of system nodes poses natural assumptions on the operator \(S\), in order to guarantee favorable properties and a suitable solution concept to the dynamics specified by the differential equation \[\left[\begin{matrix}\dot{x}(t)\\ y(t)\end{matrix}\right]=S\left[\begin{matrix}x(t)\\ u(t)\end{matrix}\right]. \tag{7}\] For a comprehensive study of system nodes, we refer to the monograph [32]. **Definition 3.1** (System node): A _system node_ on the triple \((Y,X,U)\) of Hilbert spaces is a (possibly unbounded) linear operator \(S\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ Y\end{smallmatrix}\right]\) satisfying the following conditions: 1. \(S\) is closed. 2. \(P_{X}S\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\to X\) is closed. 3. For all \(u\in U\), there exists some \(x\in X\) with \(\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]\in\operatorname{dom}(S)\). 4. The main operator \(A\) is the generator of a strongly continuous semigroup \(\mathfrak{A}(\cdot)\colon[0,\infty)\to\mathcal{L}(X)\) on \(X\). **Remark 3.2** (System nodes): Let \(S=\left[\begin{smallmatrix}\text{A\&B}\\ \text{C\&D}\end{smallmatrix}\right]\) be a system node on \((Y,X,U)\). 1. It follows from the above definition that \(C\&D\in\mathcal{L}(\operatorname{dom}(A\&B),Y)\), where \(\operatorname{dom}(A\&B)\) is endowed with the graph norm of \(A\&B\). In particular, the operator \(C\) with \(Cx\coloneqq C\&D[\begin{smallmatrix}x\\ 0\end{smallmatrix}]\) fulfills \(C\in\mathcal{L}(\operatorname{dom}(A),Y)\). 2. Since generators of semigroups are densely defined (see [12, Chap. 2, Thm. 1.5]), \(\operatorname{dom}(S)\) is dense in \(\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\) and for given \(u\in U\) the affine subspace \[\{x\in X\,|\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]\in\operatorname{dom}(A\&B)\}\] is dense in \(X\). 3. Since \(A\) is a generator of a \(C_{0}\)-semigroup there is \(\alpha\in\rho(A)\) (in the resolvent set of \(A\)). The completion of \(X\) with respect to the norm \(\|x\|_{X_{-1}}\coloneqq\|(\alpha\operatorname{I}-A)^{-1}x\|\) is denoted by \(X_{-1}\). Note that the topology of \(X_{-1}\) does not depend on the particular choice of \(\alpha\in\rho(A)\)[35, Prop. 2.10.2]. The operator \(A\) extends continuously as \(A_{-1}\colon X\mapsto X_{-1}\); \(A\) and \(A_{-1}\) are similar, hence have the same spectrum and \(A_{-1}\) generates a \(C_{0}\)-semigroup \(\mathfrak{A}_{-1}(\cdot)\colon[0,\infty)\to\mathcal{L}(X_{-1})\) on \(X_{-1}\), which extends \(\mathfrak{A}(\cdot)\) (and which is similar to \(\mathfrak{A}(\cdot)\)), see [12, Sec. II.5]. 4. \(A\&B\) extends to a bounded linear operator \([A_{-1}\ B]\colon\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\to X_{-1}\), which in fact has such a block structure. 
Moreover, the domain of \(A\&B\) (equally: the domain of \(S\)) fulfills \[\operatorname{dom}(A\&B)=\left\{\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]\in\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]|\,A_{-1}x+Bu\in X\,\right\},\] see [32, pp. 3-4]. 5. For all \(\alpha\in\rho(A)\) the norm \[\|(\begin{smallmatrix}x\\ u\end{smallmatrix})\|_{\alpha}\coloneqq\left(\|x-(\alpha\operatorname{I}-A_{ -1})^{-1}Bu\|_{X}^{2}+\|u\|_{U}^{2}\right)^{1/2}\] is equivalent to the graph norm of \(S\). Moreover, the operator \[\left[\begin{smallmatrix}\operatorname{I}-(\alpha\operatorname{I}-A_{-1})^{-1 }B\\ 0\end{smallmatrix}\right]\] maps \(\operatorname{dom}(S)\) bijectively to \(\left[\begin{smallmatrix}\operatorname{dom}(A)\\ U\end{smallmatrix}\right]\), see [32, Lem. 4.7.3]. **Remark 3.2** (v): _allows to define the concept of the transfer function._ **Definition 3.3** (Transfer function): Let \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\) be a system node. The _transfer function associated to \(S\)_ is \[\widehat{\mathfrak{D}}\colon\quad\rho(A) \to \mathcal{L}(U,Y),\] \[s \mapsto C\&D\left[\begin{smallmatrix}(s\mathrm{I}-A_{-1})^{-1}B\\ \mathrm{I}\end{smallmatrix}\right].\] Next we briefly recall suitable solution concepts for the differential equation (7) with \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\) being a system node. **Definition 3.4** (Classical/generalized trajectories): Let \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\) be a system node on \((Y,X,U)\), and let \(T>0\). A _classical trajectory_ for (7) on \([0,T]\) is a triple \[(x,u,y)\,\in\,\mathrm{C}^{1}([0,T];X)\times\mathrm{C}([0,T];U)\times\mathrm{C }([0,T];Y)\] which for all \(t\in[0,T]\) satisfies (7). A _generalized trajectory_ for (7) on \([0,T]\) is a triple \[(x,u,y)\,\in\,\mathrm{C}([0,T];X)\times\mathrm{L}^{2}([0,T];U)\times\mathrm{L }^{2}([0,T];Y),\] which is a limit of classical trajectories for (7) on \([0,T]\) in the topology of \(\mathrm{C}([0,T];X)\times\mathrm{L}^{2}([0,T];U)\times\mathrm{L}^{2}([0,T];Y)\). If \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\) is a system node on \((Y,X,U)\), then \(A\&B\) can be regarded as a system node on \((\{0\},X,U)\). Consequently, we may further speak of classical (generalized) trajectories \((x,u)\) for \(\dot{x}=A\&B[\begin{smallmatrix}x\\ u\end{smallmatrix}]\). The following result ensures the existence of unique classical trajectories with suitable control functions and initial values. **Proposition 3.5** (Existence of classical trajectories [32, Thm. 4.3.9]): _Let \(S\) be a system node on \((Y,X,U)\), let \(T>0\), \(x_{0}\in X\) and \(u\in\mathrm{W}^{2,1}([0,T];U)\) with \(\left[\begin{smallmatrix}x_{0}\\ u(0)\end{smallmatrix}\right]\in\mathrm{dom}(S)\). Then there exist unique classical trajectory \((x,u,y)\) for (7) with \(x(0)=x_{0}\). In the case of a well-posed system \(u\in\mathrm{W}^{1,2}([0,T];U)\) is sufficient for the existence of classical trajectories and one also gets \(y\in\mathrm{W}^{1,2}([0,T];Y)\) (see [31, p. 298])._ We provide some further statements on classical/generalized trajectories. **Remark 3.6** (Classical/generalized trajectories): Let \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\) be a system node on \((Y,X,U)\), and let \(T>0\). 1. Assume that \((x,u)\) is a classical trajectory for \(\dot{x}=A\&B[\begin{smallmatrix}x\\ u\end{smallmatrix}]\). 
Then \([\begin{smallmatrix}x\\ u\end{smallmatrix}]\in\mathrm{C}([0,T];\mathrm{dom}(S))\). 2. \((x,u)\) is a generalized trajectory for \(\dot{x}=A\&B[\begin{smallmatrix}x\\ u\end{smallmatrix}]\) if, and only if, \[\forall\,t\in[0,T]:\quad x(t)=\mathfrak{A}(t)x(0)+\int_{0}^{t}\mathfrak{A} _{-1}(t-\tau)Bu(\tau)\,\mathrm{d}\tau,\] (8) where the latter has to be interpreted as an integral in the space \(X_{-1}\). In particular, \(x\in\mathrm{C}([0,T];X_{-1})\). 3. If \((x,u,y)\) is a generalized trajectory for (7), then, clearly, \((x,u)\) is a generalized trajectory for \(\dot{x}=A\&B[\begin{smallmatrix}x\\ u\end{smallmatrix}]\). In particular, (8) holds. The output evaluation \(y(t)=C\&D\Big{[}\begin{smallmatrix}x(t)\\ u(t)\end{smallmatrix}\Big{]}\) is--at a first glance--not necessarily well-defined for all \(t\in[0,T]\). However, it is shown in [32, Lem. 4.7.9] that the second integral of \([\begin{smallmatrix}x\\ u\end{smallmatrix}]\) is continuous as a mapping from \([0,T]\) to \(\operatorname{dom}(A\&B)=\operatorname{dom}(S)\). As a consequence, the output can--in the distributional sense--be defined as the second derivative of \(C\&D\) applied to the second integral of \([\begin{smallmatrix}x\\ u\end{smallmatrix}]\). This can be used to show that \((x,u,y)\) is a generalized trajectory for (7) if, and only if, \((x,u)\) is a generalized trajectory for \(\dot{x}=A\&B[\begin{smallmatrix}x\\ u\end{smallmatrix}]\), and \[y=\left(t\mapsto\tfrac{\operatorname{d}^{2}}{\operatorname{d}t^{2}}\,C\&D \int_{0}^{t}(t-\tau)\Big{[}\begin{smallmatrix}x(\tau)\\ u(\tau)\end{smallmatrix}\Big{]}\operatorname{d}\tau\right)\in\operatorname{L}^ {2}([0,T];Y).\] Next we recall the important concept of well-posed systems. **Definition 3.7** (Well-posed systems): Let \(S=\big{[}\begin{smallmatrix}A\&B\end{smallmatrix}\big{]}\) be a system node on \((Y,X,U)\). The system (7) is called _well-posed_, if for some (and hence all) \(T>0\), there exists some \(c_{T}>0\), such that the classical (and thus also the generalized) trajectories for (7) on \([0,T]\) fulfill \[\|x(t)\|_{X}+\|y\|_{\operatorname{L}^{2}([0,T];Y)}\leq c_{T}\big{(}\|x(0)\|_{X} +\|u\|_{\operatorname{L}^{2}([0,T];U)}\big{)}.\] _Remark 3.8_ (Well-posed systems): Let \(S=\big{[}\begin{smallmatrix}A\&B\end{smallmatrix}\big{]}\) be a system node on \((Y,X,U)\) and \(T>0\). Well-posedness of (7) is equivalent to boundedness of the mappings \[\mathfrak{B}_{T}\colon \operatorname{L}^{2}([0,T];U)\to X, \mathfrak{C}_{T}\colon\quad X\to\operatorname{L}^{2}([0,T];Y),\] \[\mathfrak{D}_{T}\colon \operatorname{L}^{2}([0,T];U)\to \operatorname{L}^{2}([0,T];Y),\] where * \(\mathfrak{B}_{T}u=x(T)\), where \((x,u,y)\) is the generalized trajectory for (7) on \([0,T]\) with \(x(0)=0\), * \(\mathfrak{C}_{T}x_{0}=y\), where \((x,u,y)\) is the generalized trajectory for (7) on \([0,T]\) with \(u=0\) and \(x(0)=x_{0}\), * \(\mathfrak{D}_{T}u=y\), where \((x,u,y)\) is the generalized trajectory for (7) on \([0,T]\) with \(x(0)=0\). In view of Remark 3.6 (ii), we have \[\mathfrak{B}_{T}u=\int_{0}^{t}\mathfrak{A}_{-1}(t-\tau)Bu(\tau)\operatorname{ d}\tau\quad\forall u\in\operatorname{L}^{2}([0,T];U).\] In particular, well-posedness implies that the above integral is an element of \(X\). Since the domain of the generator of a \(C_{0}\)-semigroup is invariant under the semigroup operators, for each \(t>0\) and \(x_{0}\in\operatorname{dom}(A)\) one has \(\mathfrak{A}(t)x_{0}\in\operatorname{dom}(A)\). 
Thus, with \(C\) as in Remark 3.2 (i), we have that for \(y=C\mathfrak{A}(\cdot)x_{0}\), \(x=\mathfrak{A}(\cdot)x_{0}\), the triple \((x,0,y)\) is a classical trajectory for (7) on \([0,T]\) with \(x(0)=x_{0}\). Well-posedness implies that the mapping \(x_{0}\mapsto C\mathfrak{A}(\cdot)x_{0}\) has an extension to a bounded linear operator \(\mathfrak{C}_{T}\colon X\to\operatorname{L}^{2}([0,T];Y)\), see [32, Thm. 4.7.14].

**Lemma 3.9**: _Let \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\) be a system node on \((Y,X,U)\). Then_ \[S_{\mathrm{ext}}=\begin{bmatrix}A\&B&\mathrm{I}\\ C\&D&0\\ \mathrm{I}&0&0\end{bmatrix}\tag{9}\] _is a system node on \((\left[\begin{smallmatrix}Y\\ X\end{smallmatrix}\right],X,\left[\begin{smallmatrix}U\\ X\end{smallmatrix}\right])\). Further, if (7) is well-posed, then_ \[\begin{bmatrix}\dot{x}(t)\\ y_{\mathrm{ext}}(t)\end{bmatrix}=S_{\mathrm{ext}}\begin{bmatrix}x(t)\\ u_{\mathrm{ext}}(t)\end{bmatrix}\tag{10}\] _is well-posed._

It is straightforward to verify that \(S_{\mathrm{ext}}\) is a system node. The proof of the equivalence between well-posedness of (7) and (10) consists of a straightforward combination of Remark 3.8 with [32, Thm. 4.4.4&4.4.8] and is therefore omitted.

Next we recap the notion of partial flow inverse from [32, Def. 6.6.6], which will turn out to correspond to a system in which the second part of the input is interchanged with the second part of the output.

**Definition 3.10**: (Partial flow inverse) A system node \(S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]\) on \(\left(\left[\begin{smallmatrix}\mathfrak{Y}\\ Y\end{smallmatrix}\right],X,\left[\begin{smallmatrix}\mathfrak{U}\\ U\end{smallmatrix}\right]\right)\) with main operator \(A\), control operator \(B=[\mathfrak{B}\ \widehat{B}]\) and observation operator \(C=\left[\begin{smallmatrix}\mathfrak{C}\\ \widehat{C}\end{smallmatrix}\right]\) is called _partially flow-invertible_ if there exists a system node \(S^{\curvearrow}=\left[\begin{smallmatrix}[A\&B]^{\curvearrow}\\ [C\&D]_{1}^{\curvearrow}\\ [C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]\) on \(\left(\left[\begin{smallmatrix}\mathfrak{Y}\\ U\end{smallmatrix}\right],X,\left[\begin{smallmatrix}\mathfrak{U}\\ Y\end{smallmatrix}\right]\right)\) satisfying the following condition: the operator \(\left[\begin{smallmatrix}\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ [C\&D]_{2}\end{smallmatrix}\right]\) maps \(\mathrm{dom}(S)\) continuously onto \(\mathrm{dom}(S^{\curvearrow})\), its inverse is \(\left[\begin{smallmatrix}\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ [C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]\), and \[S=\left[\begin{smallmatrix}[A\&B]^{\curvearrow}\\ [C\&D]_{1}^{\curvearrow}\\ 0\ \ 0\ \ \mathrm{I}\end{smallmatrix}\right]\left[\begin{smallmatrix}\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ [C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]^{-1}\quad\text{on }\operatorname{dom}(S),\] \[S^{\curvearrow}=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ 0\ \ 0\ \ \mathrm{I}\end{smallmatrix}\right]\left[\begin{smallmatrix}\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ [C\&D]_{2}\end{smallmatrix}\right]^{-1}\quad\text{on }\operatorname{dom}(S^{\curvearrow}).\] In this case we call \(S\) and \(S^{\curvearrow}\) _partial flow-inverses_ of each other.
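To make the interchange of the second input and output more concrete, the following is a small numerical sketch of partial flow inversion in the finite-dimensional (bounded) case, where the system node is simply a block matrix. All names and the concrete matrices below are illustrative assumptions and not part of the construction above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bounded analogue of a system node with two input and two output channels:
#   [x'; yf; y] = S [x; uf; u],  S = [[A, B1, B2], [C1, D11, D12], [C2, D21, D22]].
n, m1, m2 = 3, 2, 2
A = rng.standard_normal((n, n))
B1, B2 = rng.standard_normal((n, m1)), rng.standard_normal((n, m2))
C1 = rng.standard_normal((m1, n)); D11 = rng.standard_normal((m1, m1)); D12 = rng.standard_normal((m1, m2))
C2 = rng.standard_normal((m2, n)); D21 = rng.standard_normal((m2, m1))
D22 = np.eye(m2) + 0.1 * rng.standard_normal((m2, m2))   # invertible feedthrough in the second channel

S = np.block([[A, B1, B2], [C1, D11, D12], [C2, D21, D22]])

# Partial flow inversion: compose the rows producing (x', yf, u) with the inverse of
# the map (x, uf, u) -> (x, uf, y); in the bounded case this exists iff D22 is invertible.
T_fwd = np.block([[np.eye(n), np.zeros((n, m1)), np.zeros((n, m2))],
                  [np.zeros((m1, n)), np.eye(m1), np.zeros((m1, m2))],
                  [C2, D21, D22]])
Top = np.block([[A, B1, B2],
                [C1, D11, D12],
                [np.zeros((m2, n)), np.zeros((m2, m1)), np.eye(m2)]])
S_flip = Top @ np.linalg.inv(T_fwd)

# Consistency check: if [x'; yf; y] = S [x; uf; u], then [x'; yf; u] = S_flip [x; uf; y].
x, uf, u = rng.standard_normal(n), rng.standard_normal(m1), rng.standard_normal(m2)
xdot, yf, y = np.split(S @ np.concatenate([x, uf, u]), [n, n + m1])
print(np.allclose(np.concatenate([xdot, yf, u]), S_flip @ np.concatenate([x, uf, y])))   # True
```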
If \(S\) is partially flow-invertible, then for the transfer function \(\widehat{\mathfrak{D}}=\left[\begin{smallmatrix}\widehat{\mathfrak{D}}_{11}&\widehat{\mathfrak{D}}_{12}\\ \widehat{\mathfrak{D}}_{21}&\widehat{\mathfrak{D}}_{22}\end{smallmatrix}\right]\) of \(S\) and for the transfer function \(\widehat{\mathfrak{D}}^{\curvearrow}\) of the system node \(S^{\curvearrow}\) the operators \[\widehat{\mathfrak{D}}_{22}(\alpha)\quad\text{and}\quad\widehat{\mathfrak{D}}_{22}^{\curvearrow}(\alpha)\] are invertible for all \(\alpha\in\rho(A)\cap\rho(A^{\curvearrow})\), and we have \(\widehat{\mathfrak{D}}_{22}^{\curvearrow}(\alpha)=[\widehat{\mathfrak{D}}_{22}(\alpha)]^{-1}\) (see [32, Thm. 6.6.9&6.6.10]).

**Proposition 3.11** (Partial flow-invertibility, [32, Thm. 6.6.11]): _A system node \(S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]\) on \(\left(\left[\begin{smallmatrix}\mathfrak{Y}\\ Y\end{smallmatrix}\right],X,\left[\begin{smallmatrix}\mathfrak{U}\\ U\end{smallmatrix}\right]\right)\) is partially flow-invertible if and only if there exists some \(\alpha\in\mathbb{C}\) such that the following two statements are valid:_ 1. _The operator_ \[\left[\begin{smallmatrix}\alpha\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ 0&0&0\end{smallmatrix}\right]-\left[\begin{smallmatrix}A\&B\\ 0\ \ 0\ \ 0\\ [C\&D]_{2}\end{smallmatrix}\right]\colon\operatorname{dom}(S)\to\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\] _has a bounded inverse, which we partition as_ \(\left[\begin{smallmatrix}M_{11}&M_{12}&M_{13}\\ M_{21}&M_{22}&M_{23}\\ M_{31}&M_{32}&M_{33}\end{smallmatrix}\right]\). 2. _The left upper entry \(M_{11}\) of this inverse is injective, and \(\alpha\mathrm{I}-M_{11}^{-1}\) generates a strongly continuous semigroup on \(X\); this operator is then the main operator \(A^{\curvearrow}\) of \(S^{\curvearrow}\)._

**Proposition 3.12** (Trajectories under partial flow inversion): _Let \(S\) be a partially flow-invertible system node with partial flow inverse \(S^{\curvearrow}\), and let \(T>0\). Then \(\big{(}x,\left[\begin{smallmatrix}\mathfrak{u}\\ u\end{smallmatrix}\right],\left[\begin{smallmatrix}\mathfrak{y}\\ y\end{smallmatrix}\right]\big{)}\) is a classical (generalized) trajectory for the system given by \(S\) on \([0,T]\) if, and only if, \(\big{(}x,\left[\begin{smallmatrix}\mathfrak{u}\\ y\end{smallmatrix}\right],\left[\begin{smallmatrix}\mathfrak{y}\\ u\end{smallmatrix}\right]\big{)}\) is a classical (generalized) trajectory for the system given by \(S^{\curvearrow}\) on \([0,T]\)._
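As a quick finite-dimensional sanity check of the transfer-function relation \(\widehat{\mathfrak{D}}^{\curvearrow}_{22}(\alpha)=[\widehat{\mathfrak{D}}_{22}(\alpha)]^{-1}\) recalled above, one can verify the identity numerically for a bounded node; the matrices below are arbitrary illustrative choices and not taken from the preceding construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m2, alpha = 4, 2, 2.5

A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B2 = 0.3 * rng.standard_normal((n, m2))
C2 = 0.3 * rng.standard_normal((m2, n))
D22 = np.eye(m2) + 0.1 * rng.standard_normal((m2, m2))

def G22(A, B, C, D, s):
    # transfer function from the second input to the second output
    return C @ np.linalg.solve(s * np.eye(len(A)) - A, B) + D

# data of the partially flow-inverted system (second channel interchanged)
A_flip = A - B2 @ np.linalg.solve(D22, C2)
B2_flip = B2 @ np.linalg.inv(D22)
C2_flip = -np.linalg.solve(D22, C2)
D22_flip = np.linalg.inv(D22)

lhs = G22(A_flip, B2_flip, C2_flip, D22_flip, alpha)
rhs = np.linalg.inv(G22(A, B2, C2, D22, alpha))
print(np.allclose(lhs, rhs))   # True
```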
## 4 Maximal monotonicity of the system operator

**Definition 4.1** (Dissipative sets and operators): A subset \(Y\subset\left[\begin{smallmatrix}X\\ X\end{smallmatrix}\right]\) is called _dissipative_, if \[\operatorname{Re}\langle x_{1}-x_{2},z_{1}-z_{2}\rangle_{X}\leq 0\qquad\forall\,\left[\begin{smallmatrix}x_{1}\\ z_{1}\end{smallmatrix}\right],\left[\begin{smallmatrix}x_{2}\\ z_{2}\end{smallmatrix}\right]\in Y.\] Further, \(Y\subset\left[\begin{smallmatrix}X\\ X\end{smallmatrix}\right]\) is called _maximal dissipative_, if it is dissipative and not a proper subset of a dissipative subset of \(\left[\begin{smallmatrix}X\\ X\end{smallmatrix}\right]\). A (possibly nonlinear) operator \(A\colon\operatorname{dom}(A)\subset X\to X\) is called _(maximal) dissipative_, if the graph of \(A\), i.e., \(\left\{\left[\begin{smallmatrix}x\\ Ax\end{smallmatrix}\right]\,|\,x\in\operatorname{dom}(A)\right\}\), is (maximal) dissipative.

Here are some simple implications and equivalences of this property.

**Remark 4.2**: Let \(A\colon\operatorname{dom}(A)\subset X\to X\) be an operator. 1. If \(A\) is linear then \(A\) is (maximal) dissipative if, and only if, \(-A\) is (maximal) monotone. 2. Assume that \(A\) is monotone. It follows from the definition of monotonicity that \(\mathrm{I}+\lambda A\) is injective for all \(\lambda>0\). Moreover, the Minty-Browder theory yields the equivalence of the following three statements, see, e.g., [2, Theorem 2.2 & p. 34]: 1. \(A\) is maximal monotone. 2. \(\mathrm{I}+\lambda A\) is surjective for some \(\lambda>0\). 3. \(\mathrm{I}+\lambda A\) is surjective for all \(\lambda>0\). Consequently, if \(A\) is maximal monotone, then \(\mathrm{I}+\lambda A\) is bijective for all \(\lambda>0\). The Cauchy-Schwarz inequality yields that \[\left\|(\mathrm{I}+\lambda A)x-(\mathrm{I}+\lambda A)y\right\|\geq\left\|x-y\right\|\quad\forall x,y\in\operatorname{dom}(A),\] hence \((\mathrm{I}+\lambda A)^{-1}\colon X\to X\) is contractive. Furthermore, \((\mathrm{I}-\lambda A)(\mathrm{I}+\lambda A)^{-1}\) is contractive. This follows with \(\tilde{x}\coloneqq(\mathrm{I}+\lambda A)^{-1}x\) and \(\tilde{y}\coloneqq(\mathrm{I}+\lambda A)^{-1}y\) from \[\left\|x-y\right\|^{2}-\left\|(\mathrm{I}-\lambda A)(\mathrm{I}+\lambda A)^{-1}x-(\mathrm{I}-\lambda A)(\mathrm{I}+\lambda A)^{-1}y\right\|^{2}\] \[=\left\|\tilde{x}-\tilde{y}+\lambda A\tilde{x}-\lambda A\tilde{y}\right\|^{2}-\left\|\tilde{x}-\tilde{y}-(\lambda A\tilde{x}-\lambda A\tilde{y})\right\|^{2}\] \[=4\lambda\operatorname{Re}\langle\tilde{x}-\tilde{y},A\tilde{x}-A\tilde{y}\rangle\geq 0.\qed\] 3.
Consequently, if \(A\) is linear and maximal dissipative, then \(\lambda\mathrm{I}-A\) is surjective for all \(\lambda>0\).

**Remark 4.3**: Let \(S=\left[\begin{smallmatrix}A\&B\\ C\&D\end{smallmatrix}\right]\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]\) be an operator with the properties as specified in Assumptions 2.1. 1. Dissipativity of \(\left[\begin{smallmatrix}A\&B\\ -C\&D\end{smallmatrix}\right]\) directly implies that \(A\) is dissipative. Using Remark 4.2 together with \(\rho(A)\cap\mathbb{C}_{+}\neq\emptyset\), \(A\) is even maximal dissipative. By the Lumer-Phillips theorem [12, Chap. 2, Thm. 3.15], we obtain that \(A\) generates a strongly continuous semigroup. Consequently, \(S\) is a system node. 2. It follows from [31, Lem. 4.3] that \(\left[\begin{smallmatrix}A\&B\\ -C\&D\end{smallmatrix}\right]\) is maximal dissipative. 3. The transfer function \(\widehat{\mathfrak{D}}\) of \(S\) is defined on \(\mathbb{C}_{+}\). Moreover, \(\widehat{\mathfrak{D}}(s)\) is monotone (and thus maximal monotone as it is a bounded operator) for all \(s\in\mathbb{C}_{+}\) [31, Thm. 4.2]. Further, the system (7) is well-posed if, and only if, \(\left\{\|\widehat{\mathfrak{D}}(\sigma+\imath\omega)\|\,\middle|\,\omega\in\mathbb{R}\right\}\) is bounded for some (and hence any) \(\sigma>0\) [31, Thm. 5.1]. 4. The generalized (and thus also the classical) trajectories of (7) fulfill the _dissipation inequality_ \[\|x(t)\|_{X}^{2}\leq\|x(0)\|_{X}^{2}+2\int_{0}^{t}\mathrm{Re}\langle u(\tau),y(\tau)\rangle_{U}\,\mathrm{d}\tau\qquad\forall\,t\in[0,T],\tag{11}\] see [31, Thm. 4.2].

**Lemma 4.4**: _Assume that \(S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\) has the properties as specified in Assumptions 2.1 and is partially flow-invertible. Then the partial flow inverse \(S^{\curvearrow}=\left[\begin{smallmatrix}[A\&B]^{\curvearrow}\\ [C\&D]_{1}^{\curvearrow}\\ [C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]\) fulfills that \(\left[\begin{smallmatrix}[A\&B]^{\curvearrow}\\ -[C\&D]_{1}^{\curvearrow}\\ -[C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]\) is dissipative._

_Proof_ By Remark 4.3 (iv), the generalized trajectories of (7) fulfill the dissipation inequality (11). By using Proposition 3.12 and the trivial fact that \(\mathrm{Re}\langle u(\tau),y(\tau)\rangle=\mathrm{Re}\langle y(\tau),u(\tau)\rangle\) for all \(\tau\in[0,T]\), we see that the generalized trajectories for the system associated to the node \(S^{\curvearrow}\) again fulfill the dissipation inequality. Then [31, Thm. 4.2] yields that \(\left[\begin{smallmatrix}[A\&B]^{\curvearrow}\\ -[C\&D]_{1}^{\curvearrow}\\ -[C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]\) is dissipative. \(\square\)

_Remark 4.5_: A consequence of Lemma 4.4 is that partial flow inverses of system nodes fulfilling Assumptions 2.1 again fulfill Assumptions 2.1. In particular, \(\left[\begin{smallmatrix}[A\&B]^{\curvearrow}\\ -[C\&D]_{1}^{\curvearrow}\\ -[C\&D]_{2}^{\curvearrow}\end{smallmatrix}\right]\) is maximal dissipative.
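The resolvent bounds from Remark 4.2 (ii), which later drive the convergence analysis of the iteration scheme, can be checked numerically in finite dimensions. The matrix below is an arbitrary illustrative choice and not taken from the preceding constructions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 5, 0.7

# A monotone matrix: Re<x, Ax> >= 0 for all x, since the symmetric part of A is psd.
W = rng.standard_normal((n, n))
R = rng.standard_normal((n, n))
A = W @ W.T + (R - R.T)          # positive semidefinite symmetric part + skew-symmetric part

I = np.eye(n)
resolvent = np.linalg.inv(I + lam * A)      # (I + lam A)^{-1}
cayley = (I - lam * A) @ resolvent          # (I - lam A)(I + lam A)^{-1}

# Both operator 2-norms are at most 1 for a monotone A, cf. Remark 4.2 (ii).
print(np.linalg.norm(resolvent, 2) <= 1 + 1e-12)   # True
print(np.linalg.norm(cayley, 2) <= 1 + 1e-12)      # True
```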
**Lemma 4.6**: _Assume that \(S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ \mathfrak{Y}\\ Y\end{smallmatrix}\right]\) has the properties as specified in Assumptions 2.1, and let \(\gamma\geq 0\), \(\delta>0\). Then the system node_ \[S_{\gamma,\delta}=\left[\begin{smallmatrix}(A-\gamma\mathrm{I})\&B\\ 0\ \ \mathrm{I}\ \ 0\\ [C\&(D+\delta\mathrm{I})]_{2}\end{smallmatrix}\right]\] _is partially flow-invertible. Moreover, for the partial flow inverse \(S_{\gamma,\delta}^{\curvearrow}\) of \(S_{\gamma,\delta}\), the subsystem \(\widetilde{\Sigma}_{22}^{\curvearrow}\) (i.e., the restriction of the system node to \((Y,X,U)\)) is well-posed. Here, we denote the system given by the node \(S_{\gamma,\delta}\) by \(\widetilde{\Sigma}=\left[\begin{smallmatrix}\widetilde{\Sigma}_{11}&\widetilde{\Sigma}_{12}\\ \widetilde{\Sigma}_{21}&\widetilde{\Sigma}_{22}\end{smallmatrix}\right]\) and the one for \(S_{\gamma,\delta}^{\curvearrow}\) by \(\widetilde{\Sigma}^{\curvearrow}\), respectively._

_Proof_ Since \(\left[\begin{smallmatrix}A\&B\\ 0\ \ 0\ \ 0\\ -[C\&D]_{2}\end{smallmatrix}\right]\) is maximal dissipative as well by Remark 4.3 (ii), we obtain that \[\left[\begin{smallmatrix}(\delta-\gamma)\mathrm{I}&0&0\\ 0&\delta\mathrm{I}&0\\ 0&0&0\end{smallmatrix}\right]-\left[\begin{smallmatrix}(A-\gamma\mathrm{I})\&B\\ 0\ \ 0\ \ 0\\ [C\&(D+\delta\mathrm{I})]_{2}\end{smallmatrix}\right]=\left[\begin{smallmatrix}\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ 0&0&-\mathrm{I}\end{smallmatrix}\right]\left(\delta\mathrm{I}-\left[\begin{smallmatrix}A\&B\\ 0\ \ 0\ \ 0\\ -[C\&D]_{2}\end{smallmatrix}\right]\right)\] has a bounded inverse, which we partition as \(\left[\begin{smallmatrix}M_{11}&M_{12}&M_{13}\\ M_{21}&M_{22}&M_{23}\\ M_{31}&M_{32}&M_{33}\end{smallmatrix}\right]\). Moreover, by \[\mathrm{Re}\left\langle\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right],\left(\delta\mathrm{I}-\left[\begin{smallmatrix}A\&B\\ 0\ \ 0\ \ 0\\ -[C\&D]_{2}\end{smallmatrix}\right]\right)\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\right\rangle\geq\delta\big{(}\|x\|_{X}^{2}+\|\mathfrak{u}\|_{\mathfrak{U}}^{2}+\|u\|_{U}^{2}\big{)}\quad\forall\,\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\in\mathrm{dom}(S),\] we obtain from the construction of \(M_{ij}\), \(i,j=1,2,3\), that \[\mathrm{Re}\left\langle\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ -u\end{smallmatrix}\right],\left[\begin{smallmatrix}M_{11}&M_{12}&M_{13}\\ M_{21}&M_{22}&M_{23}\\ M_{31}&M_{32}&M_{33}\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\right\rangle>0\quad\forall\,\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\in\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\setminus\left\{\left[\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right]\right\}.\] In particular, \[\mathrm{Re}\langle x,M_{11}x\rangle>0\quad\forall x\in X\setminus\{0\},\] hence \(M_{11}\) is injective and has dense range. This together with the boundedness of \(M_{11}\) implies that the inverse of \(-M_{11}\) is again maximal dissipative, and the Lumer-Phillips theorem [12, Chap. 2, Thm. 3.15]
yields that \(-M_{11}^{-1}\) generates a strongly continuous semigroup on \(X\); hence so does \((\delta-\gamma)\mathrm{I}-M_{11}^{-1}\), being a bounded perturbation thereof. Now we can conclude from Proposition 3.11 that \(S_{\gamma,\delta}\) possesses a partial flow inverse \(S_{\gamma,\delta}^{\curvearrow}\). It remains to prove that \(S_{\gamma,\delta}^{\curvearrow}\) defines a well-posed subsystem \(\widetilde{\Sigma}_{22}^{\curvearrow}\). Let \(\widehat{\mathfrak{D}}=\left[\begin{smallmatrix}\widehat{\mathfrak{D}}_{11}&\widehat{\mathfrak{D}}_{12}\\ \widehat{\mathfrak{D}}_{21}&\widehat{\mathfrak{D}}_{22}\end{smallmatrix}\right]\) be the transfer function of \(S\). Then \(s\mapsto\delta\mathrm{I}+\widehat{\mathfrak{D}}_{22}(\gamma+s)\) is the transfer function of \(\widetilde{\Sigma}_{22}\). On the other hand, by Remark 4.3 (iii), \(\widehat{\mathfrak{D}}_{22}(\gamma+s)\) is monotone for all \(s\in\mathbb{C}_{+}\), which gives rise to \[\|(\delta\mathrm{I}+\widehat{\mathfrak{D}}_{22}(\gamma+s))^{-1}\|\leq\tfrac{1}{\delta}\quad\forall\,s\in\mathbb{C}_{+}.\] Since \(\left(\delta\mathrm{I}+\widehat{\mathfrak{D}}_{22}(\gamma+s)\right)^{-1}\) is the transfer function of the subsystem \(\widetilde{\Sigma}_{22}^{\curvearrow}\) by Definition 3.10, we can conclude from Remark 4.3 (iii) that this subsystem is well-posed. \(\square\)

Let \(S=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{1}\\ [C\&D]_{2}\end{smallmatrix}\right]\colon\mathrm{dom}(S)\subset\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\) be as in Assumptions 2.1, and let \(T>0\), \(\mathfrak{u}\colon[0,T]\to\mathfrak{U}\) and \(x_{0}\in X\). We recall the definition of the operator \[M\colon\mathrm{dom}(M)\subset H\coloneqq\mathrm{L}^{2}([0,T];\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right])\to H\tag{12a}\] with \[\mathrm{dom}(M)=\left\{[\begin{smallmatrix}x\\ u\end{smallmatrix}]\in H\,\Big{|}\,[\begin{smallmatrix}\dot{x}\\ 0\end{smallmatrix}]-\left[\begin{smallmatrix}A\&B\\ -[C\&D]_{2}\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\in H\ \text{and}\ x(0)=x_{0}\,\right\},\tag{12b}\] \[M\begin{bmatrix}x\\ u\end{bmatrix}=\begin{bmatrix}\dot{x}-A\&B\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\\ \left[C\&D\right]_{2}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\end{bmatrix}.\tag{12c}\]

_Remark 4.7_: By \(\left[\begin{smallmatrix}\dot{x}\\ 0\end{smallmatrix}\right]-\left[\begin{smallmatrix}A\&B\\ -[C\&D]_{2}\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\in\mathrm{L}^{2}([0,T];\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right])\), we mean that there exist \(w\in\mathrm{L}^{2}([0,T];X)\), \(z\in\mathrm{L}^{2}([0,T];U)\) such that \(\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]\) fulfills \(\dot{x}=A\&B\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]+w\) and \(z=[C\&D]_{2}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\) in the sense of generalized trajectories in Definition 3.4. These functions indeed fulfill \[[\begin{smallmatrix}w\\ z\end{smallmatrix}]=M[\begin{smallmatrix}x\\ u\end{smallmatrix}].\] As the action of \(M\) is defined via generalized trajectories, Remark 3.6 (ii) yields that \(x\in\mathrm{C}([0,T];X_{-1})\), hence the initial condition \(x(0)=x_{0}\) is well-defined. Consequently, generalized trajectories of the closed system (1)-(2) are equivalent to solutions to the equation (5). Our main result of this section is presented in the following.
**Theorem 4.8**: _Let \(S\colon\operatorname{dom}(S)\subset\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\to\left[\begin{smallmatrix}X\\ \mathfrak{U}\\ U\end{smallmatrix}\right]\) be as in Assumptions 2.1, and let \(T>0\), \(\mathfrak{u}\in\mathrm{L}^{2}([0,T];\mathfrak{U})\) and \(x_{0}\in X\). Then the operator \(M\) as in (12) is closed and maximal monotone._

_Proof_ _Step 1:_ We show that \(M\) is monotone. Let \([\begin{smallmatrix}x_{1}\\ u_{1}\end{smallmatrix}],[\begin{smallmatrix}x_{2}\\ u_{2}\end{smallmatrix}]\in\operatorname{dom}(M)\). Denote \[[\begin{smallmatrix}w_{i}\\ z_{i}\end{smallmatrix}]\coloneqq M[\begin{smallmatrix}x_{i}\\ u_{i}\end{smallmatrix}],\quad i=1,2.\] Then \([\begin{smallmatrix}w\\ z\end{smallmatrix}]\coloneqq\left[\begin{smallmatrix}w_{1}-w_{2}\\ z_{1}-z_{2}\end{smallmatrix}\right]\), \([\begin{smallmatrix}x\\ u\end{smallmatrix}]\coloneqq\left[\begin{smallmatrix}x_{1}-x_{2}\\ u_{1}-u_{2}\end{smallmatrix}\right]\) fulfill \(x(0)=0\), and \[[\begin{smallmatrix}\dot{x}\\ z\end{smallmatrix}]=\left[\begin{smallmatrix}A\&B\\ [C\&D]_{2}\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ 0\\ u\end{smallmatrix}\right]+[\begin{smallmatrix}w\\ 0\end{smallmatrix}],\] which gives \[\left[\begin{smallmatrix}\dot{x}\\ z\\ x\end{smallmatrix}\right]=\underbrace{\left[\begin{smallmatrix}A\&B&\mathrm{I}\\ [C\&D]_{2}&0\\ \mathrm{I}&0&0\end{smallmatrix}\right]}_{=:S_{\mathrm{ext}}}\left[\begin{smallmatrix}x\\ 0\\ u\\ w\end{smallmatrix}\right],\] in the sense of generalized solutions. Since \(S\) fulfills Assumptions 2.1, it is straightforward to see that so does \(S_{\mathrm{ext}}\). Then the dissipation inequality (see Remark 4.3 (iv)) yields \[0\leq\tfrac{1}{2}\|x(T)\|_{X}^{2}=\tfrac{1}{2}\|x(T)\|_{X}^{2}-\tfrac{1}{2}\|x(0)\|_{X}^{2}\leq\int_{0}^{T}\operatorname{Re}\left\langle\left[\begin{smallmatrix}u(\tau)\\ w(\tau)\end{smallmatrix}\right],\left[\begin{smallmatrix}z(\tau)\\ x(\tau)\end{smallmatrix}\right]\right\rangle_{\left[\begin{smallmatrix}U\\ X\end{smallmatrix}\right]}\mathrm{d}\tau\] \[=\int_{0}^{T}\operatorname{Re}\left\langle\left[\begin{smallmatrix}u(\tau)\\ x(\tau)\end{smallmatrix}\right],\left[\begin{smallmatrix}z(\tau)\\ w(\tau)\end{smallmatrix}\right]\right\rangle_{\left[\begin{smallmatrix}U\\ X\end{smallmatrix}\right]}\mathrm{d}\tau=\int_{0}^{T}\operatorname{Re}\left\langle\left[\begin{smallmatrix}x(\tau)\\ u(\tau)\end{smallmatrix}\right],\left[\begin{smallmatrix}w(\tau)\\ z(\tau)\end{smallmatrix}\right]\right\rangle_{\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\mathrm{d}\tau\] \[=\int_{0}^{T}\operatorname{Re}\left\langle\left[\begin{smallmatrix}x_{1}(\tau)-x_{2}(\tau)\\ u_{1}(\tau)-u_{2}(\tau)\end{smallmatrix}\right],\left[\begin{smallmatrix}w_{1}(\tau)-w_{2}(\tau)\\ z_{1}(\tau)-z_{2}(\tau)\end{smallmatrix}\right]\right\rangle_{\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\mathrm{d}\tau\] \[=\operatorname{Re}\left\langle[\begin{smallmatrix}x_{1}\\ u_{1}\end{smallmatrix}]-[\begin{smallmatrix}x_{2}\\ u_{2}\end{smallmatrix}],\,M[\begin{smallmatrix}x_{1}\\ u_{1}\end{smallmatrix}]-M[\begin{smallmatrix}x_{2}\\ u_{2}\end{smallmatrix}]\right\rangle_{\mathrm{L}^{2}([0,T];\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right])}.\] _Step 2:_ By Remark 4.2 (ii) we only need to show that \(\lambda\mathrm{I}+M\) is surjective for any given \(\lambda>0\).
So take \(\lambda>0\), \(w\in\mathrm{L}^{2}([0,T];X)\), \(z\in\mathrm{L}^{2}([0,T];U)\). Lemma 4.6 implies that the system node \[S_{\lambda,\lambda}=\left[\begin{smallmatrix}(A-\lambda\mathrm{I})\&B\\ 0\ \ \mathrm{I}\ \ 0\\ [C\&(D+\lambda\mathrm{I})]_{2}\end{smallmatrix}\right]\] is partially flow-invertible. Denote the partial flow inverse by \[S_{\lambda,\lambda}^{\curvearrow}=\left[\begin{smallmatrix}[\widetilde{A}\&\widetilde{B}]^{\curvearrow}\\ 0\ \ \mathrm{I}\ \ 0\\ [\widetilde{C}\&\widetilde{D}]_{2}^{\curvearrow}\end{smallmatrix}\right].\] Lemma 4.6 further implies that this node defines a well-posed subsystem. Since the closed system has a solution, for fixed \(\mathfrak{u}\) there is a (generalized) trajectory \((x_{\mathfrak{u}},u_{\mathfrak{u}},y_{\mathfrak{u}})\). Due to the well-posedness of the subsystem we can choose \(\widehat{u}=u-u_{\mathfrak{u}}\) as the new input of the subsystem and again obtain a well-defined trajectory. Doing this for arbitrary inputs yields that for fixed \(\mathfrak{u}\) and for all internal inputs \(u\) there is a trajectory. Then Lemma 3.9 yields that \[S^{\curvearrow}_{\lambda,\lambda,\mathrm{ext}}=\left[\begin{smallmatrix}[\widetilde{A}\&\widetilde{B}]^{\curvearrow}&\mathrm{I}\\ 0\ \ \mathrm{I}\ \ 0&0\\ [\widetilde{C}\&\widetilde{D}]_{2}^{\curvearrow}&0\\ \mathrm{I}\ \ 0\ \ 0&0\end{smallmatrix}\right]\] has the same well-posedness property. Hence, there exist \(x\in\mathrm{C}([0,T];X)\) with \(x(0)=x_{0}\) and \(u\in\mathrm{L}^{2}([0,T];U)\) with \[\left[\begin{smallmatrix}\dot{x}\\ \mathfrak{u}\\ u\\ x\end{smallmatrix}\right]=S^{\curvearrow}_{\lambda,\lambda,\mathrm{ext}}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ z\\ w\end{smallmatrix}\right],\tag{13}\] and thus, \[\left[\begin{smallmatrix}\dot{x}\\ \mathfrak{u}\\ u\end{smallmatrix}\right]=S^{\curvearrow}_{\lambda,\lambda}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ z\end{smallmatrix}\right]+\left[\begin{smallmatrix}w\\ 0\\ 0\end{smallmatrix}\right].\] The definition of partial flow inverse yields \[\left[\begin{smallmatrix}\dot{x}\\ \mathfrak{u}\\ u\end{smallmatrix}\right]=\left[\begin{smallmatrix}(A-\lambda\mathrm{I})\&B\\ 0\ \ \mathrm{I}\ \ 0\\ 0\ \ 0\ \ \mathrm{I}\end{smallmatrix}\right]\left[\begin{smallmatrix}\mathrm{I}&0&0\\ 0&\mathrm{I}&0\\ [C\&(D+\lambda\mathrm{I})]_{2}\end{smallmatrix}\right]^{-1}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ z\end{smallmatrix}\right]+\left[\begin{smallmatrix}w\\ 0\\ 0\end{smallmatrix}\right].\] This yields, together with \(z=[C\&(D+\lambda\mathrm{I})]_{2}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]\), \[\left[\begin{smallmatrix}\dot{x}\\ \mathfrak{u}\\ z\end{smallmatrix}\right]=S_{\lambda,\lambda}\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]+\left[\begin{smallmatrix}w\\ 0\\ 0\end{smallmatrix}\right]\] with \(x(0)=x_{0}\). Since the second line of this equation is redundant, this is equivalent to \[\left[\begin{smallmatrix}\dot{x}\\ z\end{smallmatrix}\right]=\left[\begin{smallmatrix}(A-\lambda\mathrm{I})\&B\\ [C\&(D+\lambda\mathrm{I})]_{2}\end{smallmatrix}\right]\left[\begin{smallmatrix}x\\ \mathfrak{u}\\ u\end{smallmatrix}\right]+\left[\begin{smallmatrix}w\\ 0\end{smallmatrix}\right].\] By definition of \(M\), this means that \((\lambda\mathrm{I}+M)[\begin{smallmatrix}x\\ u\end{smallmatrix}]=\left[\begin{smallmatrix}w\\ z\end{smallmatrix}\right]\), and with Remark 4.2 we obtain the maximal monotonicity of the operator. \(\square\)

## 5 Proof of the main theorems

This section is devoted to the proofs of Theorems 2.4 and 2.5.
Proof of Theorem 2.4.: Let again \([\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\in\mathrm{dom}(M)\) and denote \([\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}]:=(\mathrm{I}+\lambda M)[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\). Then, following the proof of [3, Thm. 23] we have \[\left[\begin{smallmatrix}w_{k+1}\\ z_{k+1}\end{smallmatrix}\right] =(\mathrm{I}+\lambda M)(\mathrm{I}+\lambda M)^{-1}(\mathrm{I}- \lambda N)(\mathrm{I}+\lambda N)^{-1}(\mathrm{I}-\lambda M)\left[\begin{smallmatrix }x_{k}\\ u_{k}\end{smallmatrix}\right]\] \[=(\mathrm{I}-\lambda N)(\mathrm{I}+\lambda N)^{-1}(\mathrm{I}- \lambda M)(\mathrm{I}+\lambda M)^{-1}\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\] \[=(\mathrm{I}-\lambda N)(\mathrm{I}+\lambda N)^{-1}(2\mathrm{I}- (\mathrm{I}+\lambda M))(\mathrm{I}+\lambda M)^{-1}\left[\begin{smallmatrix}w_{ k}\\ z_{k}\end{smallmatrix}\right]\] \[=(\mathrm{I}-\lambda N)(\mathrm{I}+\lambda N)^{-1}(2\left[ \begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]-\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]) \tag{14}\] and analogously \[[\begin{smallmatrix}w\\ z\\ \end{smallmatrix}]=(\mathrm{I}-\lambda N)(\mathrm{I}+\lambda N)^{-1}(2[ \begin{smallmatrix}x_{u}\\ u\\ \end{smallmatrix}]-[\begin{smallmatrix}w\\ z\\ \end{smallmatrix}]).\] Let \(\omega>0\) and \(\mathrm{L}^{2}_{\omega}([0,T];X):=\{f\colon[0,T]\to X\mid\mathrm{e}^{-\cdot \omega}f(\cdot)\in\mathrm{L}^{2}([0,T];X)\}\) equipped with the norm \(\|\cdot\|_{2,\omega,X}\) defined as \[\|f\|^{2}_{2,\omega,X}=\int_{0}^{T}\mathrm{e}^{-2t\omega}\|f(t)\|^{2}_{X}\, \mathrm{d}t\,.\] Analogously, define \(\mathrm{L}^{2}_{\omega}([0,T];\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right])\) with norm \(\|\cdot\|_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}.\) Clearly, as \(T\in(0,\infty),\) we have \(\mathrm{L}^{2}_{\omega}([0,T];X)=\mathrm{L}^{2}([0,T];X)\) with equivalent norms. 
Let \([\begin{smallmatrix}x_{1}\\ u_{1}\end{smallmatrix}],[\begin{smallmatrix}x_{2}\\ u_{2}\end{smallmatrix}]\in\mathrm{dom}(M)\). The dissipativity inequality implies \[\mathrm{Re}\left\langle[\begin{smallmatrix}x_{1}\\ u_{1}\end{smallmatrix}]-[\begin{smallmatrix}x_{2}\\ u_{2}\end{smallmatrix}],\,M[\begin{smallmatrix}x_{1}\\ u_{1}\end{smallmatrix}]-M[\begin{smallmatrix}x_{2}\\ u_{2}\end{smallmatrix}]\right\rangle_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\geq\tfrac{1}{2}\mathrm{e}^{-2\omega T}\|x_{1}(T)-x_{2}(T)\|_{X}^{2}+\omega\|x_{1}-x_{2}\|_{2,\omega,X}^{2},\] which follows as in Step 1 of the proof of Theorem 4.8 by integrating the weighted dissipation inequality by parts and using \(x_{1}(0)=x_{2}(0)=x_{0}\). Denote \[\Delta\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]\coloneqq[\begin{smallmatrix}x\\ u\end{smallmatrix}]-[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\,,\qquad\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\coloneqq[\begin{smallmatrix}w\\ z\end{smallmatrix}]-[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}]\,,\] then, using the monotonicity of \(M\) and \(N\), we obtain for \(\omega>0\) \[\|\Delta\left[\begin{smallmatrix}w_{k+1}\\ z_{k+1}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-\|\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\] \[\quad=\|(\mathrm{I}-\lambda N)(\mathrm{I}+\lambda N)^{-1}\left[(2[\begin{smallmatrix}x\\ u\end{smallmatrix}]-[\begin{smallmatrix}w\\ z\end{smallmatrix}])-(2[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]-[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}])\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-\|\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\] \[\quad\leq\|2\Delta\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]-\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-\|\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\] \[\quad=4\|\Delta\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-4\,\mathrm{Re}\left(\Delta\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right],\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\right)_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\] \[\quad=4\|[\begin{smallmatrix}x\\ u\end{smallmatrix}]-[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-4\,\mathrm{Re}([\begin{smallmatrix}x\\ u\end{smallmatrix}]-[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}],(\mathrm{I}+\lambda M)[\begin{smallmatrix}x\\ u\end{smallmatrix}]-(\mathrm{I}+\lambda M)\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right])_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\] \[\quad=-4\lambda\,\mathrm{Re}([\begin{smallmatrix}x\\ u\end{smallmatrix}]-[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}],M[\begin{smallmatrix}x\\ u\end{smallmatrix}]-M[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}])_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\] \[\quad\leq-2\lambda\mathrm{e}^{-2\omega T}\|x(T)-x_{k}(T)\|^{2}_{X}-4\lambda\omega\|x-x_{k}\|^{2}_{2,\omega,X}\leq 0.
\tag{15}\] Hence, \(\|\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]-[\begin{smallmatrix}w\\ z\end{smallmatrix}]\|_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\) is monotonically decreasing (and therefore convergent), and Remark 4.2 yields the required inequality for \(\mathrm{L}^{2}_{\omega}([0,T];X)\). Since the monotonicity of \(M\) also holds in \(\mathrm{L}^{2}([0,T];X)\), the same statement is true for \(\omega=0\). Now rephrasing the last inequality gives \[\|x-x_{k}\|^{2}_{2,\omega,X}\leq\tfrac{1}{4\lambda\omega}\left(\|\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-\|\Delta\left[\begin{smallmatrix}w_{k+1}\\ z_{k+1}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\right),\] which leads to _b)_ since the norms of \(\mathrm{L}^{2}_{\omega}([0,T];X)\) and \(\mathrm{L}^{2}([0,T];X)\) are equivalent. Lastly, we repeat the calculation (15), but with \(t\in(0,T]\) instead of \(T\) in the last step, and obtain \[\|x(t)-x_{k}(t)\|^{2}_{X}\leq\frac{\mathrm{e}^{2\omega T}}{2\lambda}\left(\|\Delta\left[\begin{smallmatrix}w_{k}\\ z_{k}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}-\|\Delta\left[\begin{smallmatrix}w_{k+1}\\ z_{k+1}\end{smallmatrix}\right]\|^{2}_{2,\omega,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\right),\] which completes the proof. \(\square\)

Proof of Theorem 2.5.: Let \([\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\in\mathrm{dom}(M)\), let \((x,\mathfrak{u},\mathfrak{v})\) be a generalized trajectory, and let \(\mathfrak{v}_{k}\coloneqq[C\&D]_{1}\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]\). Using the (PSOP) property, the dissipation inequality refines to \[\mathrm{Re}\left\langle[\begin{smallmatrix}x\\ u\end{smallmatrix}]-[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\,,M[\begin{smallmatrix}x\\ u\end{smallmatrix}]-M[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}]\right\rangle_{2,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}\geq\tfrac{1}{2}\|x(T)-x_{k}(T)\|^{2}_{X}-\mathrm{Re}\left\langle\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]-\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right],\left[\begin{smallmatrix}A\&B\\ -[C\&D]_{2}\end{smallmatrix}\right]\left(\left[\begin{smallmatrix}x\\ u\end{smallmatrix}\right]-\left[\begin{smallmatrix}x_{k}\\ u_{k}\end{smallmatrix}\right]\right)\right\rangle_{2,\left[\begin{smallmatrix}X\\ U\end{smallmatrix}\right]}.\]

## 6 Examples

### Coupled wave-heat system

As a first example we consider a coupled wave-heat system as studied in [11], consisting of a wave equation on \((-1,0)\) and a heat equation with Coleman-Gurtin thermal law on \((0,1)\), interconnected at \(\zeta=0\). The wave part is described by \[v_{tt}(\zeta,t)=v_{\zeta\zeta}(\zeta,t),\qquad\zeta\in(-1,0),\,t>0,\] \[v(-1,t)=0,\quad v_{t}(0,t)=u_{1}(t),\quad y_{1}(t)=v_{\zeta}(0,t),\qquad t>0,\] \[v(\zeta,0)=v_{0}(\zeta),\quad v_{t}(\zeta,0)=\psi(\zeta),\qquad\zeta\in(-1,0).\] In [11] it has been shown that the wave part is an impedance passive system node \(S_{1}=\left[\begin{smallmatrix}A_{1}\&B_{1}\\ C_{1}\&D_{1}\end{smallmatrix}\right]\) on \(\big{(}\mathbb{C},\,\mathrm{H}_{\ell}^{1}(-1,0)\times\mathrm{L}^{2}(-1,0),\,\mathbb{C}\big{)}\), where \[\mathrm{H}_{\ell}^{1}(-1,0)\coloneqq\{v\in\mathrm{H}^{1}(-1,0)\mid v(-1)=0\}.\] The state of the system is given by \(\left[\begin{smallmatrix}v(\cdot,t)\\ v_{t}(\cdot,t)\end{smallmatrix}\right]\). As \(S_{1}\) is impedance passive, the operator \(\left[\begin{smallmatrix}A_{1}\&B_{1}\\ -C_{1}\&D_{1}\end{smallmatrix}\right]\) is dissipative, see [31, Theorem 4.2]. The heat part with Coleman-Gurtin thermal law is described by \[w_{t}(\zeta,t)=w_{\zeta\zeta}(\zeta,t)+\int_{0}^{\infty}\!\!g(s)w_{\zeta\zeta}(\zeta,t-s)\,\mathrm{d}s,\qquad\zeta\in(0,1),\,t>0,\] \[w(1,t)=0,\qquad t>0,\] \[u_{2}(t)=-w_{\zeta}(0,t)-\int_{0}^{\infty}\!\!g(s)w_{\zeta}(0,t-s)\,\mathrm{d}s,\qquad t>0,\] \[y_{2}(t)=w(0,t),\qquad t>0,\] \[w(\zeta,0)=w_{0}(\zeta),\quad w(\zeta,-s)=\varphi_{0}(\zeta,s),\qquad\zeta\in(0,1),\,s>0.\] The heat part is an impedance passive system node \(S_{2}=\left[\begin{smallmatrix}A_{2}\&B_{2}\\ C_{2}\&D_{2}\end{smallmatrix}\right]\) on \((\mathbb{C},\mathrm{L}^{2}(0,1)\times\mathcal{M},\mathbb{C})\), see [11]. Here \(\mathcal{M}\coloneqq\mathrm{L}^{2}_{\mu}((0,\infty);\mathrm{H}_{r}^{1}(0,1))\), where \(\mathrm{L}^{2}_{\mu}\) is the space of all square-integrable functions with respect to the measure \(\mu(s)\,\mathrm{d}s\) and \(\mathrm{H}_{r}^{1}(0,1)\coloneqq\{w\in\mathrm{H}^{1}(0,1)\mid w(1)=0\}\). The state of the system is given by \(\left[\begin{smallmatrix}w(\cdot,t)\\ s\mapsto\int_{0}^{s}w(\cdot,t-\sigma)\,\mathrm{d}\sigma\end{smallmatrix}\right]\). Thus Assumptions 2.1 are satisfied, and as the two systems are coupled via \(u_{1}(t)=y_{2}(t)\) and \(u_{2}(t)=-y_{1}(t)\), the corresponding coupling operator \(N\) is skew-adjoint and thus dissipative. Therefore our dynamic iteration scheme is applicable, and it is possible to solve the heat and the wave part independently and in parallel via suitable methods.

**Remark 6.1**: One could also include external inputs to the system by setting the value of \(v\) or \(w\) equal to a given \(\mathfrak{u}\) instead of \(0\).

### Wave equation on an L-shaped/decomposable domain

We consider a wave equation as in [26] and use our technique to decompose the domain with a coupling on the connecting boundary. The system is given in the following form: \[\rho(\xi)z_{tt}(\xi,t)=\operatorname{div}(T(\xi)\operatorname{grad}z(\xi,t))-(dz_{t})(\xi,t),\quad\xi\in\Omega,\,t\geq 0,\] \[0=z_{t}(\xi,t)\quad\text{on }\Gamma_{0}\times[0,\infty),\] \[\mathfrak{u}(\xi,t)=\nu\cdot(T(\xi)\operatorname{grad}z(\xi,t))\quad\text{on }\Gamma_{1}\times[0,\infty),\] \[\mathfrak{y}(\xi,t)=z_{t}(\xi,t)\quad\text{on }\Gamma_{1}\times[0,\infty),\] \[z(\xi,0)=z_{0}(\xi),\quad z_{t}(\xi,0)=\omega(\xi)\quad\text{on }\Omega,\] where \(\nu\colon\partial\Omega\to\mathbb{R}^{2}\) is the unit outward normal vector of an L-shaped domain \(\Omega\subset\mathbb{R}^{2}\). More precisely, we consider \(\Omega,\Gamma_{0},\Gamma_{1}\subseteq\mathbb{R}^{2}\), with \[\Omega=\operatorname{int}(\overline{\Omega_{1}}\cup\overline{\Omega_{2}}),\] \[\Omega_{1}=(0,1)\times(0,2),\qquad\Omega_{2}=(1,2)\times(0,1),\] \[\Gamma_{1}=(0,1)\times\{2\}\subset\partial\Omega,\qquad\Gamma_{0}=\partial\Omega\setminus\overline{\Gamma}_{1}.\] As usual, \(z(\xi,t)\) denotes the displacement of the wave at point \(\xi\in\Omega\) and time \(t\geq 0\), \(\mathfrak{u}\) is the input given by a force on the boundary part \(\Gamma_{1}\), and \(\mathfrak{y}\) is the output measured as the velocity at \(\Gamma_{1}\).
The physical parameters are included via the Young's modulus \(T(\cdot)\) and the mass density \(\rho(\cdot)\) (not to be confused with the resolvent set), which are both assumed to be measurable, positive, and they have a bounded inverses. \(d\) can be interpreted as an internal damping which is assumed to be a bounded nonnegative and measurable function on \(\Omega\) (often given by a multiplication operator). We split the problem into two wave equations, each on the rectangles \(\Omega_{1}\) and \(\Omega_{2}\), which interact at the boundary interface \[\Gamma_{\text{int}}\coloneqq\{1\}\times(0,1).\] First note that, for the sets \[\Gamma_{10} =\big{(}\{0\}\times[0,2)\big{)}\cup\big{(}[0,1)\times\{0\}\big{)} \cup\big{(}\{1\}\times(1,2)\big{)},\] \[\Gamma_{20} =\big{(}(1,2]\times\{0\}\big{)}\cup\big{(}\{2\}\times[0,1]\big{)} \cup\big{(}[1,2]\times\{1\}\big{)}.\] we have \(\partial\Omega=\partial\Omega_{1}\cup\partial\Omega_{2}\setminus\Gamma_{ \text{int}}\) as well as \(\partial\Omega_{2}=\overline{\Gamma}_{20}\cup\overline{\Gamma}_{\text{int}}\) and \(\partial\Omega_{1}=\overline{\Gamma}_{10}\cup\overline{\Gamma}_{\text{int}} \cup\overline{\Gamma}_{1}\) (see Figure 1). Further, the set \(\overline{\Gamma}_{10}\cap\overline{\Gamma}_{20}=\overline{\Gamma}_{10}\cap \overline{\Gamma}_{\text{int}}=\overline{\Gamma}_{20}\cap\overline{\Gamma}_{ \text{int}}=\{1\}\times\{0,1\}\) is of line measure zero. Now, by denoting the unit outward normals of \(\Omega_{1}\) and \(\Omega_{2}\) respectively by \(\nu_{1}\) and \(\nu_{2}\), we consider the two systems \[\rho(\xi)z_{tt}(\xi,t) =\operatorname{div}(T(\xi)\operatorname{grad}z(\xi,t))-(dz_{t}) \,(\xi,t),\quad\xi\in\Omega_{1},t\geq 0,\] \[0 =z_{t}(\xi,t)\quad\text{on }\Gamma_{10}\times[0,\infty),\] \[\mathfrak{u}(\xi,t) =\nu_{1}\cdot(T(\xi)\operatorname{grad}z(\xi,t))\quad\text{on }\Gamma_{1}\times[0,\infty),\] \[u_{1}(\xi,t) =\nu_{1}\cdot(T(\xi)\operatorname{grad}z(\xi,t))\quad\text{on }\Gamma_{\text{int}} \times[0,\infty), \tag{16}\] \[\mathfrak{y}(\xi,t) =z_{t}(\xi,t)\quad\text{on }\Gamma_{1}\times[0,\infty),\] \[y_{1}(\xi,t) =z_{t}(\xi,t)\quad\text{on }\Gamma_{\text{int}}\times[0,\infty),\] \[z(\xi,0) =z_{0}(\xi),\quad z_{t}(\xi,0)=\omega(\xi)\quad\text{on }\Omega_{1},\] and \[\rho(\xi)z_{tt}(\xi,t) =\operatorname{div}(T(\xi)\operatorname{grad}z(\xi,t))-\left(dz_{t} \right)(\xi,t),\quad\xi\in\Omega_{2},t\geq 0,\] \[0 =z_{t}(\xi,t)\quad\text{on }\Gamma_{20}\times[0,\infty),\] \[u_{2}(\xi,t) =z_{t}(\xi,t)\quad\text{on }\Gamma_{\text{int}}\times[0,\infty), \tag{17}\] \[y_{2}(\xi,t) =\nu_{2}\cdot(T(\xi)\operatorname{grad}z(\xi,t))\quad\text{on }\Gamma_{\text{int}}\times[0,\infty),\] \[z(\xi,0) =z_{0}(\xi),\quad z_{t}(\xi,0)=\omega(\xi)\quad\text{on }\Omega_{2},\] together with the interface conditions \[u_{1}(\xi,t)=-y_{2}(\xi,t),\;\;u_{2}(\xi,t)=y_{1}(\xi,t),\quad\text{on }\Gamma_{\text{int}}\times[0,\infty). \tag{18}\] Trace spaces.In this paragraph \(\Omega\) denotes a general bounded Lipschitz domain in \(\mathbb{R}^{2}\). The results here will be applied later to the domains described previously. To properly introduce the right formulation and spaces for the above systems, consider the _trace operator_\(\gamma\colon\mathrm{H}^{1}(\Omega)\to\mathrm{H}^{1/2}(\partial\Omega)\) which maps \(x\in\mathrm{H}^{1}(\Omega)\) to its boundary trace \(x|_{\partial\Omega}\), where \(\mathrm{H}^{1/2}(\partial\Omega)\) denotes the Sobolev space of fractional order \(1/2\)[1]. By the _trace theorem_[15, Thm. 1.5.1.3], \(\gamma\) is bounded and surjective. 
Further, \(\mathrm{H}(\operatorname{div},\Omega)\) is the space of all square integrable functions whose weak divergence exists and is square integrable. That is, for \(\mathrm{H}^{1}_{0}(\Omega):=\ker\gamma\), \[z=\operatorname{div}x\quad\Longleftrightarrow\quad\forall\varphi\in\mathrm{H }^{1}_{0}(\Omega):\;-\langle\operatorname{grad}\varphi,x\rangle_{\mathrm{L}^{ 2}(\Omega;\mathbb{C}^{2})}=\langle\varphi,z\rangle_{\mathrm{L}^{2}(\Omega)}.\] Defining \(\mathrm{H}^{-1/2}(\partial\Omega)\coloneqq\mathrm{H}^{1/2}(\partial\Omega)^ {*}\) with respect to the pivot space \(\mathrm{L}^{2}(\partial\Omega)\), the _normal trace_ of \(x\in\mathrm{H}(\operatorname{div},\Omega)\) is well-defined by \(w=\gamma_{N}x\in\mathrm{H}^{-1/2}(\partial\Omega)\) with \[\forall z\in\mathrm{H}^{1}(\Omega):\;\langle w,\gamma z\rangle_{\mathrm{H}^{ -1/2}(\partial\Omega),\mathrm{H}^{1/2}(\partial\Omega)}=\langle\operatorname {div}x,z\rangle_{\mathrm{L}^{2}(\Omega)}+\langle x,\operatorname{grad}z \rangle_{\mathrm{L}^{2}(\Omega;\mathbb{C}^{2})}.\] Figure 1: Illustration of the partition of \(\Omega\) and its boundaries Green's formula [34, Chap. 16] yields that, indeed \(w(\xi)=\nu(\xi)^{\top}x(\xi)\) for all \(\xi\in\partial\Omega\), if \(\Omega\) and \(x\) are smooth. Further, \(\gamma_{N}\colon\mathrm{H}(\mathrm{div},\Omega)\to\mathrm{H}^{-1/2}(\partial\Omega)\) is bounded and surjective [34, Lem. 20.2]. For a relatively open set \(\Gamma\subset\partial\Omega\), we consider \[\mathrm{H}^{1}_{\Gamma}(\Omega)\coloneqq\left\{f\in\mathrm{H}^{1}(\Omega) \,\big{|}\,(\gamma f)|_{\Gamma}=0\text{ in }\mathrm{L}^{2}(\Gamma)\,\right\}.\] Further, for a one-dimensional Lipschitz manifold \(\Gamma\subset\partial\Omega(\subseteq\mathbb{R}^{2})\) with boundary, \(\mathrm{H}^{1/2}_{0}(\Gamma)\) is the space of elements of \(\mathrm{H}^{1/2}(\Gamma)\) which can, in \(\mathrm{H}^{1/2}\) be extended to zero outside \(\Gamma\). It can be concluded from the trace theorem that trace operator has a natural restriction to a bounded and surjective operator \(\gamma_{\Gamma}\colon\mathrm{H}^{1}_{\partial\Omega\setminus\Gamma}(\Omega) \to\mathrm{H}^{1/2}_{0}(\Gamma)\), i.e., \(x\in\mathrm{H}^{1}_{\partial\Omega\setminus\Gamma}(\Omega)\) is mapped to its trace \(x|_{\Gamma}\). Defining \(\mathrm{H}^{-1/2}(\Gamma)\coloneqq\mathrm{H}^{1/2}(\Gamma)^{*}\), the _normal trace_ of \(x\in\mathrm{H}(\mathrm{div},\Omega)\) at the relatively open set \(\Gamma\subset\partial\Omega\) is well-defined by \(w=\gamma_{N,\Gamma}x\in\mathrm{H}^{-1/2}_{\partial\Omega\setminus\Gamma}(\Gamma)\) with \[\forall z\in\mathrm{H}^{1}_{\partial\Omega\setminus\Gamma}(\Omega):\ \langle w,\gamma z \rangle_{\mathrm{H}^{-1/2}(\Gamma),\mathrm{H}^{1/2}(\Gamma)}=\langle\mathrm{ div}\,x,z\rangle_{\mathrm{L}^{2}(\Omega)}+\langle x,\mathrm{grad}\,z\rangle_{ \mathrm{L}^{2}(\Omega;\mathbb{C}^{2})}.\] We further set \[\mathrm{H}_{\Gamma}(\mathrm{div},\Omega)=\left\{z\in\mathrm{H}(\mathrm{div}, \Omega)\,|\,\gamma_{N,\Gamma}z=0\,\right\}.\] **System nodes.** Next we introduce system nodes \(S_{1}\), \(S_{2}\) corresponding to each of the subsystems arising from the split of \(\Omega\) into \(\Omega_{1}\) and \(\Omega_{2}\). 
In the following, we equip the spaces \(X_{i}=\mathrm{L}^{2}(\Omega_{i})\times\mathrm{L}^{2}(\Omega_{i};\mathbb{C}^{2})\) with the _energy norm_ \[\|\,(\,\begin{subarray}{c}p_{i}\\ q_{i}\end{subarray}\,)\|_{2}^{2}\coloneqq\int_{\Omega_{i}}\rho^{-1}(\xi)p_{i}( \xi)^{2}+T(\xi)q_{i}(\xi)^{\top}q_{i}(\xi)\,\mathrm{d}\xi,\quad i=1,2.\] Note that, by the assumption that \(\rho,T\) are positive-valued with \(T\rho,\rho^{-1},T,T^{-1}\in\mathrm{L}^{\infty}(\Omega)\), the energy norm is equivalent to the standard norm in \(\mathrm{L}^{2}(\Omega_{i})\times\mathrm{L}^{2}(\Omega_{i};\mathbb{C}^{2})\). Then the weak formulations of (16) and (17) are, for \[x_{i}(t)\coloneqq\left.\left[\,\begin{subarray}{c}\rho z_{t}\\ T^{-1}\,\mathrm{grad}\,z\end{subarray}\,\right]\right|_{\Omega_{i}}\in X_{i} \coloneqq\left[\,\begin{subarray}{c}\mathrm{L}^{2}(\Omega_{i})\\ \mathrm{L}^{2}(\Omega_{i};\mathbb{C}^{2})\end{subarray}\,\right]\!,\quad i=1,2,\] given by \[\left[\,\begin{subarray}{c}\dot{x}_{1}\\ y_{1}\end{subarray}\,\right]=S_{1}\left[\,\begin{subarray}{c}x_{1}\\ u_{1}\end{subarray}\,\right],\quad\left[\,\begin{subarray}{c}\dot{x}_{2}\\ y_{2}\end{subarray}\,\right]=S_{2}[\,\begin{subarray}{c}x_{2}\\ u_{2}\end{subarray}\,].\] Hereby, for \(\mathfrak{U}=\mathfrak{U}_{1}=\mathrm{H}^{-1/2}(\Gamma_{0})\), \(U_{1}=\mathrm{H}^{-1/2}(\Gamma_{\mathrm{int}})\), the first system node is given by \[S_{1}\colon\,\mathrm{dom}\,S_{1}\to\left[\,\begin{subarray}{c}X_{1}\\ \left[\,\begin{subarray}{c}\dot{\mathfrak{U}}\\ \dot{\mathfrak{U}}_{1}\end{subarray}\,\right]\,\right]\] with \[\mathrm{dom}\,S_{1}=\left\{\left[\,\begin{subarray}{c}\left[\,\begin{subarray} {c}p_{1}\\ q_{1}\end{subarray}\,\right]\\ \left[\,\begin{subarray}{c}\mathfrak{U}\\ u_{1}\end{subarray}\,\right]\end{subarray}\,\right]\in\left[\,\begin{subarray}{c} X_{1}\\ \left[\,\begin{subarray}{c}\mathfrak{U}\\ U_{1}\end{subarray}\,\right]\end{subarray}\,\right]\,\right|\,\rho^{-1}p_{1} \in\mathrm{H}^{1}_{\Gamma_{10}}(\Omega_{1})\,\wedge\,Tq_{1}\in\mathrm{H}( \mathrm{div},\Omega_{1})\,\right\}\] and \[S_{1}\left[\begin{smallmatrix}p_{1}\\ q_{1}\\ u\\ u_{1}\end{smallmatrix}\right]=\left[\begin{smallmatrix}\operatorname{div}(Tq_{1})-d \rho^{-1}p_{1}\\ \operatorname{grad}(\rho^{-1}p_{1})\\ \gamma_{\Gamma_{1}}(\rho^{-1}p_{1})\\ \gamma_{\Gamma_{\operatorname{int}}}(\rho^{-1}p_{1})\end{smallmatrix}\right].\] Likewise, \(U_{2}=\operatorname{H}^{1/2}(\Gamma_{\operatorname{int}})\), the second system node reads \[S_{2}\colon\operatorname{dom}S_{2}\to\left[\begin{smallmatrix}X_{2}\\ \left[\begin{smallmatrix}\operatorname{M}\\ \operatorname{U}_{2}\end{smallmatrix}\right]\end{smallmatrix}\right]\] with \[\operatorname{dom}S_{2}=\left\{\left[\begin{smallmatrix}\left[\begin{smallmatrix} p_{2}\\ q_{2}\\ u\\ u_{2}\end{smallmatrix}\right]\end{smallmatrix}\right]\in\left[\begin{smallmatrix} X_{2}\\ \left[\begin{smallmatrix}\operatorname{M}\\ \operatorname{U}_{2}\end{smallmatrix}\right]\end{smallmatrix}\right]\begin{array} {c}\left|\rho^{-1}p_{2}\in\operatorname{H}_{\Gamma_{20}}^{1}(\Omega_{2}) \,\wedge\,Tq_{2}\in\operatorname{H}(\operatorname{div},\Omega_{1})\\ \wedge u_{2}=\gamma_{\Gamma_{\operatorname{int}}}\rho^{-1}p_{2}\end{array}\right\}\] and \[S_{2}\left[\begin{smallmatrix}p_{2}\\ q_{2}\\ u\\ u_{2}\end{smallmatrix}\right]=\left[\begin{smallmatrix}\operatorname{div}(Tq_{2}) -d\rho^{-1}p_{2}\\ \operatorname{grad}(\rho^{-1}p_{2})\\ 0\\ \gamma_{N,\Gamma_{\operatorname{int}}}(Tq_{2})\end{smallmatrix}\right].\] _Remark 2_: To be precise, the output spaces are not equal to the 
corresponding input spaces but dual to them (as hinted in the definition of the trace spaces). Hence, we can identify them with each other via isomorphic dual mappings and still use our results. Dissipativity follows from the definition of the trace operators, whereas the closedness claims follow from closedness of the divergence and gradient operators together with boundedness of the involved trace operators. Further, it can be shown that the main operators \(A_{1}\), \(A_{2}\) of \(S_{1}\) and \(S_{2}\), resp., fulfill \(\operatorname{dom}A_{1}=\operatorname{dom}A_{1}^{*}\), \(\operatorname{dom}A_{2}=\operatorname{dom}A_{2}^{*}\) and \[A_{i}^{*}\left[\begin{smallmatrix}p_{i}\\ q_{i}\end{smallmatrix}\right]=\left[\begin{smallmatrix}-\operatorname{div}(Tq _{i})-d\rho^{-1}p_{i}\\ -\operatorname{grad}(\rho^{-1}p_{i})\end{smallmatrix}\right],\quad i=1,2.\] This shows that they are both maximally dissipative, whence they generate a strongly continuous semigroup on \(X_{i}\). Hence, given initial data and existence of a solution we can apply the presented splitting algorithm with the proven convergence properties. An additional advantage of this technique in this special case is that the decomposition into rectangles allows the usage of, say, spectral type methods to solve the two wave equations over rectangles separately, see, e.g., [7], [25]. _Remark 3_: Of course, nothing prevents to consider more than two rectangles and couple the corresponding wave equations analogously to the above. Moreover, the presented coupling is only one possible choice. If we forget about the interpretation of \(u_{i}\) and \(y_{i}\) as inputs and outputs and see the system as behavioral, choosing \(u_{1}=u_{2}\) and \(y_{1}=y_{2}\) suggests itself.
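As a closing illustration of the dynamic iteration scheme analysed in Section 5, the following sketch runs the reflected-resolvent recursion with the two maximal monotone operators replaced by monotone matrices \(M\) and \(N\). This finite-dimensional surrogate only mimics the structure of the \(\mathrm{L}^{2}\)-space iteration above; the matrices, the step size and the number of iterations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 6, 0.5

def monotone_matrix(rng, n):
    # symmetric positive semidefinite part plus a skew-symmetric part
    W = rng.standard_normal((n, n)); R = rng.standard_normal((n, n))
    return W @ W.T + (R - R.T)

M = monotone_matrix(rng, n)
N = monotone_matrix(rng, n)
b = rng.standard_normal(n)
x_true = np.linalg.solve(M + N, b)       # reference solution of M x + N x = b

I = np.eye(n)
JM = np.linalg.inv(I + lam * M)          # resolvent of M
def JNb(v):                              # resolvent of the shifted operator x -> N x - b
    return np.linalg.solve(I + lam * N, v + lam * b)

w = np.zeros(n)
for k in range(200):
    x = JM @ w                           # x_k = (I + lam M)^{-1} w_k
    y = JNb(2 * x - w)
    w = w + 2 * (y - x)                  # reflected-resolvent update, cf. the recursion in the proof of Theorem 2.4

print(np.linalg.norm(JM @ w - x_true))   # small: the iterates approach the reference solution
```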
2308.06879
Towards Open-Set Test-Time Adaptation Utilizing the Wisdom of Crowds in Entropy Minimization
Test-time adaptation (TTA) methods, which generally rely on the model's predictions (e.g., entropy minimization) to adapt the source pretrained model to the unlabeled target domain, suffer from noisy signals originating from 1) incorrect or 2) open-set predictions. Long-term stable adaptation is hampered by such noisy signals, so training models without such error accumulation is crucial for practical TTA. To address these issues, including open-set TTA, we propose a simple yet effective sample selection method inspired by the following crucial empirical finding. While entropy minimization compels the model to increase the probability of its predicted label (i.e., confidence values), we found that noisy samples rather show decreased confidence values. To be more specific, entropy minimization attempts to raise the confidence values of an individual sample's prediction, but individual confidence values may rise or fall due to the influence of signals from numerous other predictions (i.e., wisdom of crowds). Due to this fact, noisy signals misaligned with such 'wisdom of crowds', generally found in the correct signals, fail to raise the individual confidence values of wrong samples, despite attempts to increase them. Based on such findings, we filter out the samples whose confidence values are lower in the adapted model than in the original model, as they are likely to be noisy. Our method is widely applicable to existing TTA methods and improves their long-term adaptation performance in both image classification (e.g., 49.4% reduced error rates with TENT) and semantic segmentation (e.g., 11.7% gain in mIoU with TENT).
Jungsoo Lee, Debasmit Das, Jaegul Choo, Sungha Choi
2023-08-14T01:24:18Z
http://arxiv.org/abs/2308.06879v2
# Towards Open-Set Test-Time Adaptation ###### Abstract Test-time adaptation (TTA) methods, which generally rely on the model's predictions (e.g., entropy minimization) to adapt the source pretrained model to the unlabeled target domain, suffer from noisy signals originating from 1) incorrect or 2) open-set predictions. Long-term stable adaptation is hampered by such noisy signals, so training models without such error accumulation is crucial for practical TTA. To address these issues, including open-set TTA, we propose a simple yet effective sample selection method inspired by the following crucial empirical finding. While entropy minimization compels the model to increase the probability of its predicted label (i.e., confidence values), we found that noisy samples rather show decreased confidence values. To be more specific, entropy minimization attempts to raise the confidence values of an individual sample's prediction, but individual confidence values may rise or fall due to the influence of signals from numerous other predictions (i.e., wisdom of crowds). Due to this fact, noisy signals misaligned with such 'wisdom of crowds', generally found in the correct signals, fail to raise the individual confidence values of wrong samples, despite attempts to increase them. Based on such findings, we filter out the samples whose confidence values are lower in the adapted model than in the original model, as they are likely to be noisy. Our method is widely applicable to existing TTA methods and improves their long-term adaptation performance in both image classification (e.g., 49.4% reduced error rates with TENT) and semantic segmentation (e.g., 11.7% gain in mIoU with TENT). ## 1 Introduction Despite the recent advancements of deep learning, models still show a significant performance degradation when confronted with large domain shifts (_e.g.,_ changes of cities with different landscapes during autonomous driving) [8, 36, 42, 30, 12]. Among various studies, test-time adaptation (TTA) is at the center of attention due to its practicality in not requiring 1) the source data during the adaptation stage and 2) ground truth labels of the target domain [61]. TTA models widely utilize a self-training strategy (_e.g.,_ entropy minimization), which uses the model's prediction as the target of the loss function [61, 9, 62, 49, 15, 14, 66, 50, 69, 27]. Since TTA models rely on their own predictions during the adaptation, they are inevitably prone to utilizing noisy signals. In this paper, noisy signals indicate supervisions that originated from 1) incorrect or 2) open-set predictions. Fig. 1 shows that performing adaptation with such noisy signals significantly degrades the TTA performance. Specifically, the pink pixels indicate the mispredicted pixels (_e.g.,_ predicting sidewalks as roads in the second row), and the red ones are the predictions on open-set classes that were not included in the train set (_e.g.,_ predicting guardrails and the garbage truck as roads in the first and second rows, respectively). Such an example clearly demonstrates that TTA in real-world applications needs to address such open-set classes since mispredicting guardrails as roads may cause serious accidents during autonomous driving. However, as shown in Fig. 3, previous studies focused on TTA with covariate shifts (_i.e.,_ domain shifts) only and did not address TTA that also includes semantic shifts (_i.e.,_ including open-set classes). 
Regarding its significance and practicality, adaptation with unknown classes included (_i.e.,_ open-set TTA) should be also addressed. Fig. 2 shows our empirical analysis that discloses an important finding to address such an issue. While entropy minimization enforces the model to increase the probability value of its predicted label (_i.e.,_ confidence values), we found that it often fails to increase them on the wrong samples. While previous studies [62, 49] resorted to finding an adequate confidence value or loss value to prevent error accumulation, the process of determining it is cumbersome, and utilizing such a _static_ threshold shows limited performance. We train TENT [61] with different thresholds for the analysis: (a) without thresholding, (b) selecting samples with confidence value higher or equal to 0.91, (c) selecting samples with loss values smaller than the entropy threshold proposed in EATA [49], and (d) selecting samples that achieve higher confidence value with the adaptation model \(\theta_{a}\) compared to that with the original model \(\theta_{o}\). As shown, using the confidence difference between \(\theta_{o}\) and \(\theta_{a}\) for selecting correct samples outperforms utilizing the static thresholds. While b) and c) show significantly high recall values (_i.e.,_ selecting actual correct samples well), it rather indicates that they simply select most of the samples and fail to filter out noisy samples considering the substantially low precision values (_i.e.,_ low ratio of correct samples among the selected ones). Footnote 1: We used the best confidence value \(p\) after grid search of \(p\in\{0.5,0.8,0.9,0.95,0.99\}\) The intuition behind using the confidence difference is as follows. Although entropy minimization enforces the model to increase the confidence value on the predicted label of an individual sample, the individual confidence value may rise or fall, influenced by the signals that originated from numerous other predictions (_i.e.,_ wisdom of crowds). To be more specific, the noisy signals that do not align with such 'wisdom of crowds', commonly found in the correct signals, fail to raise the individual confidence scores of wrong samples, even with the supervision from entropy minimization to increase them. By using such an observation, we select samples that achieve higher confidence value using \(\theta_{a}\) compared to that using \(\theta_{o}\). Since we reflect the knowledge state of the model on each individual sample, our selection is implicitly a dynamic thresholding strategy, which outperforms the previously-used static strategies. Our _simple yet effective_ sample selection method is widely applicable to existing TTA methods and improves their performances on both image classification and semantic segmentation. Figure 3: Description of open-set TTA. While previous studies assume covariate shifts (_i.e.,_ Cityscapes to BDD-100K), they fail to address the semantic shifts (_i.e.,_ guardrails only shown in BDD-100K). This paper addresses both closed-set and open-set test-time adaptation. Figure 2: Utilizing confidence difference for selecting correct samples. Pseudo-labeling samples (_i.e.,_ selecting correct samples) by using a fixed threshold does not guarantee a reasonable level of pseudo-labeling performance, which is demonstrated by the significantly low precision values. 
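The threshold comparison in (a)-(d) above can be reproduced offline once per-sample outputs of \(\theta_{o}\) and \(\theta_{a}\) are available. The following is a minimal PyTorch sketch, assuming ground-truth labels are used for analysis only; the 0.9 confidence threshold and the \(0.4\ln C\) entropy threshold are stand-ins for the grid-searched values mentioned above, and `selection_stats` is a hypothetical helper rather than the authors' code.

```python
import torch

@torch.no_grad()
def selection_stats(logits_orig, logits_adapt, labels, conf_thresh=0.9):
    """Compare sample-selection rules by precision/recall of retrieving correct samples.

    logits_orig:  outputs of the frozen source model theta_o on a test batch
    logits_adapt: outputs of the adapted model theta_a on the same batch
    labels:       ground-truth labels (offline analysis only)
    """
    p_o = logits_orig.softmax(dim=1)
    p_a = logits_adapt.softmax(dim=1)
    c_o = p_o.argmax(dim=1)                        # label predicted by the original model
    correct = (p_a.argmax(dim=1) == labels)        # "positives" we would like to select

    num_classes = p_a.size(1)
    rules = {
        # (b) static confidence threshold on the adapted model
        "conf>=thr": p_a.max(dim=1).values >= conf_thresh,
        # (c) static entropy threshold (0.4 * ln C is one commonly used EATA-style choice)
        "entropy<thr": -(p_a * p_a.clamp_min(1e-12).log()).sum(dim=1)
                       < 0.4 * torch.log(torch.tensor(float(num_classes))),
        # (d) confidence difference: adapted confidence on c_o not lower than the original one
        "conf-diff": p_a.gather(1, c_o[:, None]).squeeze(1)
                     >= p_o.gather(1, c_o[:, None]).squeeze(1),
    }
    stats = {}
    for name, selected in rules.items():
        tp = (selected & correct).sum().item()
        precision = tp / max(selected.sum().item(), 1)
        recall = tp / max(correct.sum().item(), 1)
        stats[name] = (precision, recall)
    return stats
```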
On the other hand, we maintain a reasonable level of both precision and recall by using the confidence difference between \(\theta_{o}\) and \(\theta_{a}\), improving the test-time adaptation performance overall. Our contributions are summarized as follows: * We propose a novel sample selection method that filters out noisy samples using the confidence difference between \(\theta_{a}\) and \(\theta_{o}\) based on the finding that noisy samples, both closed-set wrong samples, and open-set samples, generally show decreased confidence values on the originally predicted label. * This is the first paper to address open-set test-time adaptation, adapting to a target domain including test samples of unknown classes, which has not been explored in existing TTA studies despite its importance and practicality. * Our proposed method can be applied to various test-time adaptation methods and improves their performances on both image classification using CIFAR-10/100-C and TinyImageNet-C (_e.g.,_ 49.38% reduced error rates with TENT in open-set TTA), and semantic segmentation (_e.g.,_ 11.69% gain in mIoU with TENT) using real-world datasets including BDD-100K and Mapillary. ## 2 Wisdom of Crowds in Entropy Minimization ### Problem Setup During the test-time adaptation, models adapt to a target domain with \(N\) number of test samples in the test set \(D_{T}\), \(\{x_{i},\}_{i=1}^{N}\in D_{T}\), without target labels provided. Given a pretrained model \(\theta_{o}\), we update \(\theta_{o}\) to adapt to a novel target domain, where the adapted model is then defined as \(\theta_{a}\). For a test sample \(x\), we define \(\tilde{y}=f(x;\theta_{o})\in\mathbb{R}^{C}\) and \(\hat{y}=f(x;\theta_{a})\in\mathbb{R}^{C}\) as the softmax outputs of the original model \(\theta_{o}\) and the adapted model \(\theta_{a}\), respectively, where \(C\) denotes the number of classes. With the predicted class \(c_{o}=\operatorname*{argmax}_{c}f(x;\theta_{o})\) of the original model, we define the probability value on the predicted label as confidence value \(\tilde{y}^{c_{o}}\). Similarly, the confidence value of the adapted model \(\theta_{a}\) on the label \(c_{o}\), predicted by the original model, is defined as \(\hat{y}^{c_{o}}\). The main objective of test-time adaptation is to correctly predict \(c_{a}=\operatorname*{argmax}_{c}f(x;\theta_{a})\) using the adapted model, especially under large data distribution shifts. ### Motivation Decreased confidence valuesWhile entropy minimization enforces the model to increase the confidence value of its originally predicted label, we empirically found that wrong samples mostly show decreased values (_i.e.,_\(\tilde{y}^{c_{o}}<\tilde{y}^{c_{o}}\)). For the experiment, we perform test-time adaptation using TENT [61] for 50 rounds using CIFAR-10-C to simulate a long-term adaptation. One round includes continuously changing 15 corruption types, so we repeat it 50 times without resetting the model. With \(t_{i}\) indicating the \(i^{th}\) round, Fig. 4 (purple graph) shows that the number of samples achieving \(\hat{y}^{c_{o}}<\tilde{y}^{c_{o}}\), showing decreased confidence values, among \(N\) number of test samples increases as adaptation proceeds even with the entropy minimization that enforces the model to increase its confidence value on the originally predicted label. In fact, the green graph in Fig. 4 shows that the ratio of wrong samples among the samples with decreased confidence values also increases as adaptation proceeds. 
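A minimal sketch of the statistic tracked in Fig. 4, assuming two PyTorch classifiers for \(\theta_{o}\) and \(\theta_{a}\); the function name, and the use of labels (only for the wrong-sample ratio), are illustrative choices of ours.

```python
import torch

@torch.no_grad()
def decreased_confidence_stats(model_orig, model_adapt, x, labels=None):
    """Count samples whose confidence on the originally predicted label c_o drops after
    adaptation (y_hat^{c_o} < y_tilde^{c_o}), and the ratio of wrong samples among them."""
    y_tilde = model_orig(x).softmax(dim=1)        # softmax output of theta_o
    y_hat = model_adapt(x).softmax(dim=1)         # softmax output of theta_a
    c_o = y_tilde.argmax(dim=1)                   # prediction of the original model
    conf_orig = y_tilde.gather(1, c_o[:, None]).squeeze(1)   # y_tilde^{c_o}
    conf_adapt = y_hat.gather(1, c_o[:, None]).squeeze(1)    # y_hat^{c_o}
    decreased = conf_adapt < conf_orig
    stats = {"num_decreased": int(decreased.sum())}
    if labels is not None:                        # wrong-sample ratio among decreased ones
        wrong = y_hat.argmax(dim=1) != labels
        stats["wrong_ratio"] = float((decreased & wrong).float().sum()
                                     / decreased.float().sum().clamp_min(1))
    return stats
```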
The main reason for such an observation is due to the 'wisdom of crowds', the signals learned from numerous other samples influencing the confidence level of individual samples. Specifically, although the individual signal from each sample compels the model to increase the confidence value of its own predicted label, this effect may be canceled out if other dominant signals show different patterns. Wisdom of crowds from correct samplesWe empirically found that models generally learn the wisdom of crowds from the correct samples. Fig. 5 demonstrates such a point with the histogram of 1) confidence values and 2) confidence difference, \(\hat{y}^{c_{o}}-\tilde{y}^{c_{o}}\), using TENT [61] adapted for 50 rounds. We observe that a substantial number of the samples achieving \(\hat{y}^{c_{o}}-\tilde{y}^{c_{o}}\geq 0\) are correct samples (blue). To be more specific, utilizing the confidence difference for distinguishing correct samples from wrong samples (red) achieves an AUROC of 89.42, which outperforms utilizing the confidence value of the adaptation model, achieving an AUROC of 58.29. Such an observation discloses two findings. First, since samples achieving \(\hat{y}^{c_{o}}\geq\tilde{y}^{c_{o}}\) are mostly correct ones, the Figure 4: As adaptation proceeds, the number of samples with decreased confidence values increases (purple graph). Additionally, among those samples, the ratio of wrongly predicted samples also increases (green graph). \(t_{i}\) indicates the \(i^{th}\) round during the long-term adaptation. Figure 5: Utilizing the confidence difference distinguishes between the correct samples (blue) and the wrong samples (red) better (AUROC of 89.42) than using the confidence values (AUROC of 58.29). We used the same model (_i.e.,_ TENT [61] adapted for 50 rounds) for the visualization. dominant signals necessary for increasing the confidence values (_i.e.,_ wisdom of crowds) are originated from the correct samples. Second, \(\hat{y}^{c_{a}}-\tilde{y}^{c_{a}}\) is an adequate metric to distinguish between correct and wrong samples. Misaligned wrong signalsWe further empirically analyze why signals from wrong samples fail to increase the confidence values of the original model. The main reason is that signals originated from wrong samples misalign with the 'wisdom of crowds' obtained from the correct samples. For the analysis, we compute the cosine similarity of gradients between two samples with the same predicted label in Fig. 7. For a given predicted label \(i\) (column), we compute \(s^{j,i}\), the cosine similarity of gradients obtained between samples of ground truth label \(j\) (row) and those of predicted label \(i\) as, \[s^{j,i}=\frac{1}{M_{1}M_{2}}\sum_{k=1}^{M_{1}}\sum_{l=1}^{M_{2}}\frac{g_{k}^{j, i}\cdot g_{l}^{j,i}}{\|g_{k}^{j,i}\|\|g_{l}^{i,i}\|},l\neq k\text{ if }j=i, \tag{1}\] where \(g_{k}^{j,i}\) indicates the gradient vector of \(k^{th}\) sample among \(M_{1}\) number of samples with the ground truth label \(j\) and the predicted label \(i\), \(g_{l}^{i,i}\) indicates the gradient vector of \(l^{th}\) sample among \(M_{2}\) number of samples with the ground truth label \(i\) and the predicted label \(i\) (_i.e.,_ correct samples), and \(i,j\in C\). In other words, given a certain predicted label, we compare the gradients of the correct samples and those of the samples with the same predicted label either correct or wrong. 
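One possible way to reproduce the gradient-alignment analysis of Eq. (1) is sketched below: per-sample gradients of the entropy loss are collected for two groups of samples sharing a predicted label (grouped by ground-truth label beforehand), and the average pairwise cosine similarity is computed. The helper names, the per-sample gradient loop, and the choice of which parameters to differentiate are assumptions of ours, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def per_sample_entropy_gradients(model, x, params):
    """Return one flattened gradient vector of the entropy-minimization loss per sample."""
    grads = []
    for xi in x:                                       # per-sample gradients, one at a time
        prob = model(xi.unsqueeze(0)).softmax(dim=1)
        loss = -(prob * prob.clamp_min(1e-12).log()).sum()
        g = torch.autograd.grad(loss, params)
        grads.append(torch.cat([gi.flatten() for gi in g]))
    return torch.stack(grads)

def mean_pairwise_cosine(g_a, g_b, same_group=False):
    """Average cosine similarity between two groups of gradient vectors (cf. Eq. 1)."""
    sim = F.cosine_similarity(g_a[:, None, :], g_b[None, :, :], dim=-1)
    if same_group:                                     # exclude the k == l pairs
        sim = sim - torch.diag(torch.diag(sim))
        n = g_a.size(0)
        return sim.sum() / max(n * (n - 1), 1)
    return sim.mean()
```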
Thus, the diagonal elements are the results obtained by comparing the gradients between correct samples and the off-diagonal elements are obtained by comparing the gradients between correct samples and the wrong samples with the same predicted label. We add the description of how the cosine similarity of each pair is computed on the right side of Fig. 7. Given a certain column in Fig. 7, all entropy minimization losses enforce the model to increase the probability value of the same predicted label. However, we found that the signals (_i.e.,_ gradients) may differ depending on the actual ground truth labels. Specifically, the correct samples show high cosine similarity of gradients (diagonal elements, _e.g.,_\(s_{2,2}\)) compared to the ones with wrong samples (off-diagonal elements, _e.g.,_\(s_{0,2}\)). Since Fig. 5 shows that the correct signals dominate the wisdom of crowds required for increasing the confidence value of the originally predicted label, signals that are different from these dominant signals can be suppressed and do not raise confidence values. We want to clarify that the wisdom of crowds does not guarantee a model to utilize the correct signals only. Even with the wisdom of crowds, the model supervises itself with wrong predictions if the noisy losses are not filtered out. Such self-training with wrong knowledge significantly deteriorates the TTA performance of models, especially during the long-term adaptation [28]. In fact, such an issue has been widely studied in fields beyond TTA, known as the _confirmation bias_[67, 2, 60, 38, 37]. To address such an issue in TTA, we propose a sample selection method to filter out noisy samples by using the wisdom of crowds. ### Proposed Method As shown in Fig. 6, we propose a _simple yet effective_ sample selection method using the confidence difference between \(\tilde{y}^{c_{a}}\) and \(\tilde{y}^{c_{a}}\). Our sample selection criterion is formulated as \[\Phi(\hat{y}^{c_{a}},\tilde{y}^{c_{a}})=\mathbb{1}\left(\hat{y}^{c_{a}}\geq \tilde{y}^{c_{a}}\right), \tag{2}\] where \(\Phi(\cdot)\) is our sample selection criterion and \(\mathbb{1}(\cdot)\) is the indicator function. Our total objective function using entropy minimization is formulated as \[\mathcal{L}^{\text{main}}(x;\theta_{a})=\Phi(\hat{y}^{c_{a}},\tilde{y}^{c_{a}} )\cdot H(\hat{y}_{i})-\lambda_{\text{max}}H(\overline{y}). \tag{3}\] Figure 7: Cosine similarity of gradients between samples with the same predicted label. We observe that wrong signals (_i.e.,_ off-diagonal elements) misalign with the correct signals (_i.e.,_ diagonal elements) that dominate the wisdom of crowds. \(H(p)=\Sigma_{k=1}^{C}p^{k}\log p^{k}\), \(\overline{y}=\frac{1}{N}\Sigma_{k=1}^{C}\hat{y_{i}}\), and \(\lambda_{max}\) is the scalar value for balancing the two loss values. Note that \(H(\overline{y})\) has been widely used in previous studies [9, 42, 32, 41, 3, 5] to prevent the model from making imbalanced predictions towards a certain class. Recent studies require the pre-deployment stage that obtains the necessary information needed for each method by using the samples from the source data before the adaptation phase [9, 49, 42]. However, we want to emphasize that our method does not require such a pre-deployment stage as well as those samples from the source distribution. Due to such an advantage, our method can be easily applied to existing TTA methods without additional preparations. Through extensive experiments, we demonstrate the wide applicability of our method to existing TTA methods. 
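A minimal PyTorch sketch of Eqs. (2)-(3), assuming \(H(\cdot)\) denotes the Shannon entropy, \(\overline{y}\) is the batch-mean prediction, and the filtered per-sample losses are averaged over the batch; the function name and the default \(\lambda_{\text{max}}\) are illustrative rather than taken from a released implementation.

```python
import torch

def confidence_difference_loss(logits_orig, logits_adapt, lambda_max=1.0):
    """Entropy minimization with the confidence-difference sample filter (Eqs. 2-3).

    Samples whose adapted confidence on the originally predicted label c_o is lower
    than the original confidence are treated as noisy and receive zero weight.
    """
    y_tilde = logits_orig.softmax(dim=1).detach()      # theta_o is not updated
    y_hat = logits_adapt.softmax(dim=1)
    c_o = y_tilde.argmax(dim=1)

    conf_orig = y_tilde.gather(1, c_o[:, None]).squeeze(1)
    conf_adapt = y_hat.gather(1, c_o[:, None]).squeeze(1)
    phi = (conf_adapt >= conf_orig).float()            # Eq. (2): selection mask

    entropy = -(y_hat * y_hat.clamp_min(1e-12).log()).sum(dim=1)
    y_bar = y_hat.mean(dim=0)                          # batch-mean prediction
    diversity = -(y_bar * y_bar.clamp_min(1e-12).log()).sum()

    # Eq. (3): filtered entropy minimization minus the diversity regularizer
    return (phi * entropy).mean() - lambda_max * diversity
```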
## 3 Experiments ### Experimental Setup **Datasets** For the image classification task, we use the widely used corruption benchmark datasets: CIFAR-10/100-C and TinyImageNet-C. We apply 15 different types of corruptions (e.g., gaussian noise) to CIFAR-10/100 [33] and TinyImageNet [34]. Pretrained models are trained on the clean train set and adapted to the corrupted test set. For the open-set setting, we use SVHN [47] for CIFAR-10/100-C, and ImageNet-O [23] for TinyImagenet-C, where we apply the same corruption type as the original test sets. We term the datasets as SVHN-C and ImageNet-O-C, respectively. We apply the identical corruption type in order to construct open-set samples that are drawn from the same domain shift but with unknown classes. For the semantic segmentation task under continually changing domains, we use a model pretrained on GTAV [53], and evaluate it with Cityscapes [10], BDD-100K [64], and Mapillary [48]. For semantic segmentation with a fixed target domain with multiple rounds, we use the Cityscapes for the source distribution and BDD-100K [64], GTAV [53], Mapillary [48], and SYNTHIA [54] for the target distributions. Note that the semantic segmentation task inherently includes open-set classes in the test set (e.g., traffic cones in BDD100K not shown during training with Cityscapes). **Evaluation settings** Following the recent TTA studies, we evaluate TTA models under continuously changing domains without resetting the model after each domain [62, 42, 49]. For the closed-set and open-set continual long-term TTA in the image classification, we perform adaptation for 50 rounds to simulate a long-term TTA with continu \begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{CIFAR-10-C} & \multicolumn{2}{c|}{CIFAR-100-C} & \multicolumn{2}{c|}{TinyImageNet-C} & \multicolumn{2}{c}{Average} \\ & Closed & Open & Closed & Open & Closed & Open & Closed & Open \\ \hline Source [65] & 18.27 & 18.27 & 46.75 & 46.75 & 76.71 & 76.71 & 47.24 & 47.24 \\ BN Adapt [46] & 14.49 & 15.73 & 39.26 & 42.67 & 61.90 & 63.00 & 38.55 & 40.47 \\ GCE [68] & 43.76 & 87.94 & 44.45 & 88.69 & 97.25 & 99.00 & 61.82 & 91.88 \\ Conjugate [15] & 49.57 & 92.25 & 98.97 & 98.79 & 99.38 & 99.46 & 82.64 & 96.83 \\ \hline ENT & 87.06 & 89.26 & 56.35 & 98.76 & 99.43 & 99.50 & 80.95 & 95.84 \\ + Ours & **17.33 (-69.73)** & **23.98 (-65.28)** & **37.69 (-18.66)** & **40.48 (-58.28)** & **58.93 (-40.50)** & **64.01 (-35.49)** & **37.98 (-42.97)** & **42.82 (-53.02)** \\ \hline TENT [61] & 45.84 & 85.22 & 42.34 & 85.22 & 98.10 & 99.16 & 62.09 & 89.87 \\ + Ours & **14.10 (31.74)** & **15.77 (-69.45)** & **38.62 (-3.72)** & **42.57 (-42.65)** & **60.87 (-37.23)** & **63.13 (-36.03)** & **37.86 (-24.23)** & **40.49 (-49.38)** \\ \hline EATA [49] & 29.78 & 82.05 & 49.31 & 98.75 & 59.82 & 63.47 & 46.30 & 81.42 \\ + Ours & **14.07 (-15.71)** & **15.65 (-66.40)** & **38.44 (-10.87)** & **42.47 (-56.28)** & **59.80 (-0.02)** & **62.08 (-1.39)** & **37.44 (-8.86)** & **40.07 (-41.35)** \\ \hline SWR [9] & 10.21 & 90.55 & 35.78 & 73.05 & 62.39 & 76.13 & 36.13 & 79.91 \\ + Ours & **10.12 (-0.09)** & **72.58 (-17.97)** & **35.64 (-0.14)** & **45.68 (-27.37)** & **55.15 (-7.24)** & **61.91 (-14.22)** & **33.64 (-2.49)** & **60.06 (-19.85)** \\ \hline \hline \end{tabular} \end{table} Table 1: Error rates of image classification after 10 rounds of adaptation (_i.e.,_ long-term test-time adaptation). We note the performance gain by reduced error rates. 
\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{CIFAR-10-C} & \multicolumn{2}{c|}{CIFAR-100-C} & \multicolumn{2}{c|}{TinyImageNet-C} & \multicolumn{2}{c}{Average} \\ & Closed & Open & Closed & Open & Closed & Open & Closed & Open \\ \hline Source [65] & 18.27 & 18.27 & 46.75 & 46.75 & 76.71 & 76.71 & 47.24 & 47.24 \\ BN Adapt [46] & 14.49 & 15.73 & 39.26 & 42.67 & 61.90 & 63.00 & 38.55 & 40.47 \\ GCE [68] & 43.76 & 87.94 & 44.45 & 88.69 & 97.25 & 99.00 & 61.82 & 91.88 \\ Conjugate [15] & 49.57 & 92.25 & 98.97 & 98.79 & 99.38 & 99.46 & 82.64 & 96.83 \\ \hline ENT & 87.06 & 89.26 & 56.35 & 98.76 & 99.43 & 99.50 & 80.95 & 95.84 \\ + Ours & **17.33 (-69.73)** & **23.98 (-65.28)** & **37.69 (-18.66)** & **40.48 (-58.28)** & **58.93 (-40.50)** & **64.01 (-35.49)** & **37.98 (-42.97)** & **42.82 (-53.02)** \\ \hline TENT [61] & 45.84 & 85.22 & 42.34 & 85.22 & 98.10 & 99.16 & 62.09 & 89.87 \\ + Ours & **14.10 (31.74)** & **15.77 (-69.45)** & **38.62 (-3.72)** & **42.57 (-42.65)** & **60.87 (-37.23)** & **63.13 (-36.03)** & **37.86 (-24.23)** & **40.49 (-49.38)** \\ \hline EATA [49] & 29.78 & 82.05 & 49.31 & 98.75 & 59.82 & 63.47 & 46.30 & 81.42 \\ + Ours & **14.07 (-15.71)** & **15.65 (-66.40)** & **38.44 (-10.87)** & **42.47 (-56.28)** & **59.80 (-0.02)** & **62.08 (-1.39)** & **37.44 (-8.86)** & **40.07 (-41.35)** \\ \hline SWR [9] & 10.21 & 90.55 & 35.78 & 73.05 & 62.39 & 76.13 & 36.13 & 79.91 \\ + Ours & **10.12 (-0.09)** & **72.58 (-17.97)** & **35.64 (-0.14)** & **45.68 (-27.37)** & **55.15 (-7.24)** & **61.91 (-14.22)** & **33.64 (-2.49)** & **60.06 (-19.85)** \\ \hline \hline \end{tabular} \end{table} Table 2: ously changing domains. We report both TTA performances after 1 round (_i.e.,_ short-term TTA) and 50 rounds (_i.e.,_ long-term TTA). Note that we evaluate predictions made during online model adaptation, not after visiting the entire test set, strictly following the established TTA settings. For the open-set TTA, we construct the mini-batch that includes an equal number of closed-set samples (e.g., CIFAR-10-C, shot noise) and open-set samples (e.g., SVHN-C, shot noise). Although included in the mini-batch, we exclude open-set samples from the evaluation and only evaluate models with closed-set samples. To the best of our knowledge, our work is the first paper to conduct experiments with the open-set TTA. We report the error rates and mean intersection of union (mIoU) for image classification and semantic segmentation, respectively. BaselinesWe mainly compare our method with previous methods addressing noisy labels [68] or improving pseudo-labeling performances in TTA [15, 49]. Note that ENT denotes updating all parameters while TENT [61] only updates affine parameters of the batch normalization layers, both utilizing the entropy minimization loss function. Gray-shaded digits indicate the performance gain by applying our method to each baseline model, and bold digits indicate the better performance between the two methods. Implementation detailsFor the image classification, we use the learning rate of \(1e\)-\(3\) and \(1e\)-\(4\) for models updating only affine parameters (TENT [61], EATA [49], GCE [68], Conjugate [15]) and all parameters (ENT, SWR [9]), respectively. We use the batch size of 200 and Adam optimizer [31] for all experiments. For experiments conducting small batch sizes in Table 8, we use the learning rate of \(1e\)-\(4\) and update models after 200 steps, following TTN [42]. 
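The image-classification optimizer settings quoted above can be summarized in a short configuration sketch. Which parameters are collected (batch-norm affine parameters versus all parameters) depends on the baseline being wrapped; the helper below is an assumption of ours, not the authors' code.

```python
import torch
import torch.nn as nn

def build_optimizer(model, update_all_params=False):
    """Adam with lr 1e-3 when only the batch-norm affine parameters are updated
    (TENT/EATA-style), and lr 1e-4 when all parameters are updated (ENT/SWR-style)."""
    if update_all_params:
        params, lr = model.parameters(), 1e-4
    else:
        params, lr = [], 1e-3
        for module in model.modules():
            if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
                # collect only the affine scale/shift parameters of BN layers
                params += [p for p in (module.weight, module.bias) if p is not None]
    return torch.optim.Adam(params, lr=lr)
```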
For the semantic segmentation, we use the learning rate of \(1e\)-\(6\) and batch size of 2 following TTN. Regarding using TTN in semantic segmentation, we update the test batch statistics in an online manner to further improve the segmentation performance for all experiments. Further details on our experimental setup are included in our supplementary. ### Results Image classificationAs shown in Table 1, existing TTA models show a large performance degradation during the long-term adaptation. This is mainly due to the confirmation bias, caused by the unsupervised losses that inevitably include noisy losses. We significantly improve the long-term performance of the existing four different TTA models in both closed-set and open-set TTA. For example, we improve the error rate of TENT [61] by an average of 24.23% and 49.38% in the closed-set and open-set settings, respectively. Note that we do not use prior knowledge of whether the target distribution includes open-set samples or not. Additionally, Table 2 shows that our method also generally improves the short-term TTA performances. While previous studies focused on improving the performance of closed-set TTA until now, our results show that they suffer from a large performance drop when adapted with open-set classes included. We believe that this is a practical setting since we can not guarantee that samples from the target distributions are always drawn from the \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{BDD-100K} & \multicolumn{2}{c|}{Magillary} & \multicolumn{2}{c|}{GTAV} & \multicolumn{2}{c|}{SYNTHIA} & \multicolumn{2}{c}{Average} \\ & Round 1 & Round 10 & Round 1 & Round 10 & Round 1 & Round 10 & Round 1 & Round 10 & Round 1 & Round 10 \\ \hline Source [6] & 43.50 & 43.50 & 54.37 & 54.37 & 44.55 & 44.55 & 22.78 & 22.78 & 41.30 & 41.30 \\ BN Adapt [46] & 43.60 & 43.60 & 47.66 & 47.66 & 43.22 & 43.22 & 25.72 & 25.72 & 40.05 & 40.05 \\ TFN [42] & 48.43 & 48.43 & 57.28 & 57.28 & 46.71 & 46.71 & 26.41 & 26.41 & 44.71 & 44.71 \\ \hline TENT [61] & 48.90 & 47.57 & 57.94 & 53.36 & 48.14 & 17.91 & 26.88 & 13.36 & 45.47 & 33.05 \\ \hline + Ours & 48.90 & **48.88 (+1.31)** & 57.94 & **56.40 (+3.13)** & **48.28 (+0.14)** & **47.98 (+3.07)** & **26.90 (+0.02)** & **25.62 (+12.26)** & **45.51 (+0.04)** & **44.74 (+11.60)** \\ \hline SWR [9] & 49.39 & 49.68 & **59.33** & **59.70** & 47.82 & 48.13 & **28.40** & 1.18 & 46.24 & 39.67 \\ \hline + Ours & **49.88 (+0.49)** & **50.57 (+0.89)** & **58.79 (-0.54)** & 58.89 (-0.81) & **49.17 (+1.35)** & **49.27 (+1.14)** & 27.75 (-0.65) & **27.82 (+26.54)** & **46.40 (+0.16)** & **46.64 (+6.97)** \\ \hline \hline \end{tabular} \end{table} Table 4: Semantic segmentation performance (mIoU) on a fixed target domain with 10 rounds of adaptation. We use DeepLabV3Plus-ResNet-50 [6] pretrained on Cityscapes dataset. 
\begin{table} \begin{tabular}{l|c c c|c} \hline \hline Time & \(t\) & & \multirow{2}{*}{Average} \\ \cline{2-2} \cline{5-5} Method & Cityscapes & BDD-100K & Mapillary \\ \hline Source [6] & 34.74 & 16.15 & 36.97 & 29.29 \\ BN Adapt [46] & 40.77 & 25.21 & 39.10 & 35.03 \\ TTN [42] & 46.28 & 28.07 & 45.46 & 39.94 \\ \hline TENT [61] & 46.73 & 29.59 & 35.69 & 37.34 \\ \hline + Ours & **46.76 (+0.03)** & **30.55 (+0.96)** & **43.42 (+7.73)** & **40.24 (+2.90)** \\ \hline SWR [9] & 46.17 & 10.70 & 1.28 & 19.38 \\ \(\pm\) Ours & **46.65 (+0.48)** & **32.28 (+21.58)** & **45.09 (+43.81)** & **41.34 (+21.96)** \\ \hline \hline \end{tabular} \end{table} Table 3: Semantic segmentation performance (mIoU) on continuously-changing target domains with 1 round of adaptation. We evaluate with DeepLabV3Plus-ResNet-50 [6] pretrained on GTAV dataset. classes learned during the training stage. Such results indicate that improving the TTA performance with open-set classes is yet to be explored in the future. **Semantic segmentation** Table 3 shows the semantic segmentation performance with continuously changing domains. We evaluated a model pretrained on GTAV [53] with real-domain datasets (Cityscapes [10], BDD-100K [64], and Mapillary [48]) in order to simulate the situation where real-world target datasets are not available with only synthetic datasets provided. We observe that the performance gain by applying our method increases as the adaptation proceeds. For example, SWR [9] (Table 3b - red) suffers from a large performance drop with the last target domain, Mapillary (1.28 mIoU), while ours (Table 3b - blue) shows a stable level of performance (45.09 mIoU). Regarding Table 3b, we evaluate models after certain steps and show the average mIoU up to then. While the model without adaptation (_i.e.,_ source) does not suffer from the error accumulation, it fails to bring performance gain. On the other hand, our method not only brings performance gain but also circumvents error accumulation by filtering the noisy losses. Table 4 also reports the semantic segmentation performance with a fixed target domain over multiple rounds of adaptation. We observe that applying our method improves the performance of TENT [61] and SWR [9] by an average of 11.69 mIoU and 6.97 mIoU, respectively, after 10 rounds. As aforementioned, performing test-time adaptation in semantic segmentation needs to address not only the wrong predictions but also the inherently included open-set classes in the target distribution. Our method again improves TTA performance by effectively discarding such noisy pixels. We believe such a filtering mechanism is especially important in safety-critical applications in two aspects. First, it prevents the performance drop caused by learning with noisy losses. Second, when confronted with unknown objects, we could alarm a device immediately, which could be the starting point for it to take a different action (e.g., autonomous vehicles swerving directions to avoid running into wild animals unexpectedly shown on roads) [26]. ## 4 Further Analysis ### Utilizing Confidence Difference as Thresholds We show that the confidence difference is an adequate metric to differentiate between correct samples and noisy samples, given that a pretrained model is adapting to a novel domain. For the evaluation, we train TENT [61] and compare utilizing confidence difference as the thresholding metric with existing prediction-based out-of-distribution (OoD) methods [21, 19, 43]. 
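The scores entering this comparison can each be computed per sample from the outputs of \(\theta_{o}\) and \(\theta_{a}\). The sketch below uses common conventions for the MSP, max-logit, and energy scores (which may differ in sign or temperature from the original implementations), together with the confidence difference, and evaluates them with AUROC; the helper names are hypothetical.

```python
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def per_sample_scores(logits_orig, logits_adapt):
    """Scores for separating correct from noisy samples; higher means 'more likely correct'."""
    p_o = logits_orig.softmax(dim=1)
    p_a = logits_adapt.softmax(dim=1)
    c_o = p_o.argmax(dim=1, keepdim=True)             # label predicted by the original model
    return {
        "msp": p_a.max(dim=1).values,                 # maximum softmax probability
        "max_logit": logits_adapt.max(dim=1).values,
        "energy": torch.logsumexp(logits_adapt, dim=1),
        "conf_diff": (p_a.gather(1, c_o) - p_o.gather(1, c_o)).squeeze(1),
    }

def auroc(scores, is_positive):
    """AUROC of a score for retrieving the positive (correct) samples."""
    return roc_auc_score(is_positive.cpu().numpy(), scores.cpu().numpy())
```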
By setting the correct samples as the positive samples, we analyze two different negative samples: negative samples 1) including closed-set wrong samples and 2) excluding closed-set wrong samples. The former case shows how well a given metric differentiates between correct samples and noisy samples, including both closed-set and open-set samples. The latter case evaluates how well a given metric distinguishes between the correct samples and open-set samples only. Table 5 shows that using confidence difference outperforms the existing OoD metrics in both cases. In addition to the superior performance, another advantage of using the confidence difference is that we can filter the noisy samples immediately, while the existing OoD metrics need the entire test samples in order to choose the threshold with the best AUROC score. Such a result indicates that confidence difference can also be widely used to distinguish out-of-distribution samples in future studies with adapted models. ### Comparisons on Resource Costs Along with the TTA performances, Table 6 compares the memory usage and the time consumption of the baseline models and our method applied to TENT [61]. For the TTA performance, we average the long-term adaptation performance of closed-set and open-set TTA for each dataset. For memory usage, we use the official code of TinyTL [4] to calculate both the model parameters and the intermediate activation size, following the previous studies [25, 63, 59]. The time indicates the amount of time consumed for the forward process and the backpropagation. Since we utilize the outputs of \(\theta_{o}\), our method accompanies an additional for \begin{table} \begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{Method} & CIFAR-10-C & CIFAR-100-C & CIFAR-100/5WIN-C \\ & Error Rate (\%) & Error Rate (\%) & Memory (MB) & Time (ms) \\ \hline ENT & 88.16 & 77.56 & 1147 & 22.98 \\ SWR [9] & 50.38 & 54.42 & 1155 & 47.97 \\ TENT [61] & 65.53 & 63.78 & 556 & 18.38 \\ EATA [49] & 55.92 & 74.03 & 559 & 37.04 \\ \hline **TENT [61]**\(\diamond\)**Ours** & **14.94** & **40.60** & **565** & **26.62** \\ \hline \hline \end{tabular} \end{table} Table 6: Comparisons on error rates (%), memory (MB), and time (ms). For the time, we report the average time after 5000 trials on NVIDIA RTX A5000. \begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{CIFAR-10 / SVHN-C} & \multicolumn{2}{c}{CIFAR-100 / SVHN-C} \\ & AUROC\(\uparrow\) & FPR@TR99\(\downarrow\) & AUROC\(\uparrow\) & FPR@TR99\(\downarrow\) \\ \hline MSP [21] & 51.87 & 92.39 & 60.69 & 87.96 \\ Max Logit [19] & 54.68 & 90.31 & 64.88 & 85.45 \\ Energy [43] & 54.68 & 90.30 & 64.87 & 85.46 \\ Ours & **88.24** & **40.34** & **83.76** & **64.86** \\ \hline \hline \end{tabular} \end{table} Table 5: Utilizing the confidence difference for thresholding in open-set test time adaptation. We use TENT [61] adapted to each target domain including open-set classes (SVHN-C) for 50 rounds. ward process. However, as shown, such an additional forward process is negligible compared to the state-of-the-art models. For example, our method applied to TENT brings a significant performance gain with only half the memory and time compared to SWR [9]. Further details on resource costs, along with the results on semantic segmentation, are included in our supplementary. ### Applicability on Various Models Since our method focuses on improving the pseudo-labeling quality of entropy minimization, it does not rely on model architectures. 
Table 7 shows that applying our method consistently outperforms the baseline models with ResNet50 [18] and WideResNet28 [65] that were used in previous TTA studies [9, 42]. Such results demonstrate that our method is widely applicable to various architectures. ### Robustness to Hyper-parameters In real-world applications, we may not know an adequate learning rate before encountering test samples or may not use an optimal batch size due to memory constraints. In such a case, we need an approach with a stable performance regardless of such hyper-parameters. Table 8 shows that our method is more robust to such hyper-parameters compared to TENT [61], which is highly dependent on them. Such results demonstrate the scalability of our method when we do not know the optimal hyper-parameters. ## 5 Related Work **Test-Time Adaptation** The main differences between TTA studies and other studies addressing domain shifts such as domain generalization [70, 40, 29, 16, 45, 56] or unsupervised domain adaptation (UDA) [52, 7, 41, 44, 39] is that TTA studies do not utilize 1) the source data during the adaptation stage and 2) ground truth labels on the target distribution [61, 62, 49, 15, 9, 42]. Recent studies [62, 42, 13] show that TTA models suffer from a large performance degradation with continually changing domains and a long-term adaptation. To tackle such a challenging problem, this paper mainly evaluates long-term adaptation with continually changing domains. **Noisy Signals in Test-Time Adaptation** As aforementioned, one of the key challenges in TTA is that the model is prone to utilizing wrong predictions. Preventing the model from learning with noisy supervision has been studied widely beyond TTA [68, 52, 24, 17, 1, 35, 51]. However, the main difference between TTA and these studies is that TTA studies assume that we cannot revisit the sample after performing adaptation with it. Such an assumption limits from proposing methods that require knowledge of the full data distributions [57, 1] or consistency of predictions for a given sample [52, 58]. Without such knowledge, we use the difference of confidence scores between \(\theta_{o}\) and \(\theta_{a}\) by using the wisdom of crowds to improve pseudo labeling. ## 6 Conclusion This paper proposed a _simple yet effective_ data sample selection that is widely applicable to existing various test-time adaptation methods. Based on the observation that signals from wrong samples fail to increase the confidence values of the predicted labels even with entropy minimization, we only select the samples that achieve higher confidence values with the adaptation model compared to those with the original model. This is mainly due to the wisdom of crowds, the dominant signals generally found in the correct samples influencing signals of other samples. Our method improved TTA performance on the existing TTA methods on both image classification and semantic segmentation. Additionally, we proposed a novel evaluation setting, an open-set TTA, which was overlooked until now even with its importance and practicality. We hope our work inspires future researchers to conduct more practical TTA research that improves both closed-set and open-set TTA. \begin{table} \end{table} Table 7: Error rates of image classification on CIFAR-10-C using diverse architectures. \begin{table} \end{table} Table 8: Error rates of image classification on TinyImageNet-C with diverse learning rates and batch sizes. Std. is the abbreviation of the standard deviation. 
**Acknowledgement** We would like to thank Kyuwoong Hwang, Sungrack Yun, Simyung Chang, Hyunsin Park, Janghoon Cho, Juntae Lee, Hyoungwoo Park, Seokeon Choi, Seunghan Yang, and Sunghyun Park of the Qualcomm AI Research team for their valuable discussions.
2306.07013
Combining Reinforcement Learning and Barrier Functions for Adaptive Risk Management in Portfolio Optimization
Reinforcement learning (RL) based investment strategies have been widely adopted in portfolio management (PM) in recent years. Nevertheless, most RL-based approaches tend to emphasize pursuing returns while ignoring the risks of the underlying trading strategies, which may lead to great losses, especially under high market volatility. Therefore, a risk-manageable PM investment framework integrating both RL and barrier functions (BF) is proposed to carefully balance the needs for high returns and acceptable risk exposure in PM applications. To the best of our understanding, this work represents the first attempt to combine BF and RL for financial applications. While the involved RL approach may aggressively search for more profitable trading strategies, the BF-based risk controller continuously monitors the market states and dynamically adjusts the investment portfolio as a controllable measure for avoiding potential losses, particularly in downtrend markets. Additionally, two adaptive mechanisms are provided to dynamically adjust the impact of the risk controller so that the proposed framework can be flexibly adapted to uptrend and downtrend markets. The empirical results clearly reveal these advantages over most well-known RL-based approaches on real-world data sets. More importantly, the proposed framework sheds light on many possible directions for future investigation.
Zhenglong Li, Hejun Huang, Vincent Tam
2023-06-12T10:31:36Z
http://arxiv.org/abs/2306.07013v1
Combining Reinforcement Learning and Barrier Functions for Adaptive Risk Management in Portfolio Optimization ###### Abstract Reinforcement learning (RL) based investment strategies have been widely adopted in portfolio management (PM) in recent years. Nevertheless, most RL-based approaches may often emphasize on pursuing returns while ignoring the risks of the underlying trading strategies that may potentially lead to great losses especially under high market volatility. Therefore, a risk-manageable PM investment framework integrating both RL and barrier functions (BF) is proposed to carefully balance the needs for high returns and acceptable risk exposure in PM applications. Up to our understanding, this work represents the first attempt to combine BF and RL for financial applications. While the involved RL approach may aggressively search for more profitable trading strategies, the BF-based risk controller will continuously monitor the market states to dynamically adjust the investment portfolio as a controllable measure for avoiding potential losses particularly in downtrend markets. Additionally, two adaptive mechanisms are provided to dynamically adjust the impact of risk controllers such that the proposed framework can be flexibly adapted to uptrend and downtrend markets. The empirical results of our proposed framework clearly reveal such advantages against most well-known RL-based approaches on real-world data sets. More importantly, our proposed framework shed lights on many possible directions for future investigation. ## 1 Introduction In financial markets, only investing in a single asset brings huge uncertainties and risks once trading decisions are biased from the asset changes. To diversify investment risks, investors are suggested to allocate their capital to a set of assets with different natures during the trading period. However, as a fundamental financial problem, given a portfolio of financial products like stocks, futures, options and bonds, optimizing the ratios of assets in a portfolio to maximize returns at a low risk level is a challenge for all investors. According to the efficient market hypothesis [10] and the investment market is an incomplete information game, there are numerous arbitrage opportunities existing in the financial market, but meanwhile they will be immediately filled in. Thus, accurately catching the change of assets by analyzing historical data is a key feature to construct profitable trading strategies in a portfolio. Inspired by the Modern Portfolio Theory [23], more advanced portfolio theories such as Capital Growth Theory [13] and Black-Litterman Model [4] are presented to adapt the actual financial markets with practical constraints. Yet in the highly volatile financial markets, the traditional theories may not generate effective trading strategies due to many strict assumptions. Over the past decade, machine learning and deep learning techniques have been introduced to manage portfolios by predicting price movements [26, 9] or directly optimizing the weights of assets [28, 21]. Through discovering underlying patterns from historical market data in both microeconomics and macroeconomics, those intelligent methods have achieved excess earnings than traditional methods in which the trading signals are generated by simple combinations with some handcrafted technical indicators. 
Furthermore, in terms of the mechanism that trading agents execute orders after interacting with financial markets, more efforts have been made recently on applying Reinforcement Learning (RL) to optimize online portfolios by observing the current states of the trading environment in real time [27, 30, 29]. However, most existing RL-based portfolio optimization methods may hardly learn an effective and stable trading strategy due to the data efficiency. More specifically, since the highly volatile financial market leads to the market style frequently changing, the trained RL agents may not achieve success on the real-time environment as the distribution of the current data may shift. This will surely increase the uncertainty of portfolios and also bring high-risk exposures. Besides, most previous RL-based methods aim to maximize the long-term profit yet less take into account the short-term risk management of a portfolio, whereas in fact that fund managers are more concerned about investment risk exposures than returns in volatile financial markets. Despite having potential returns, the risky investment may lead to a high maximum drawdown in a short period, which is unacceptable to capital holders. In addition, to balance the returns and risks, some combined performance indicators like sharpe ratio and sorting ratio integrating returns and risks are used as the optimization target, but they cannot explicitly manage portfolio risks in a single transaction. To constrain the system dynamics within safe regions on automatic driving and robotics, [1, 6, 15] introduce Barrier Function (BF) based constraint controllers to adjust decisions generated by model-free RL algorithms where any risky action will be compensated for maintaining safe states while RL agents keep exploring policies with high rewards. Yet [6] uses linear programming in simple cases, potentially suffering from complex constraints. [15] employed sum-of square programming to restrict the exploration of RL agents in polynomial systems, albeit at the expense of increased time costs. Furthermore, those RL-BF approaches do not satisfy the formulation of risk management in portfolio optimization. Moreover, the previous works strictly control the RL actions all the time for which they lack the flexibility to adapt to different scenarios. To both explore profitable strategies and reduce risk exposures throughout the trading period, a Risk-manageable Portfolio Optimization (RiPO) framework integrating both RL-based trading agents and BF-based risk controllers is proposed in the paper to achieve high long-term profits under acceptable short-term risks. First, by formulating a portfolio management problem as a Partially Observable Markov Decision Process (POMDP), a model-free RL framework is given to explore profitable trading strategies. Second, a BF-based risk controller is constructed by the second-order cone programming in terms of risk constraints, monitoring the potential investment risks brought by aggressive RL trading strategies and then adjusting the portfolios for avoiding huge losses. In addition, considering the risk aversion of investors and different market states, two flexible mechanisms named Adaptive Risk Strategy (ARS) and Dynamic Contribution Mechanism (DCM) are proposed to adjust the strength of risk constraints and the impact of risk controllers to the overall trading strategies for adapting to different market styles. 
In uptrend markets, the proposed framework relaxes risk constraints to pursue higher excess returns under acceptable risk levels. Conversely, the risk exposure will be strictly constrained to avoid potentially huge losses in downtrend markets. This will enhance the flexibility of the proposed framework to invest assets in actual highly volatile financial markets. The main contributions of the proposed RiPO framework are summarized as follows: 1. The RiPO framework is the first attempt to integrate RL and BF-based constraint programming for financial applications. The risky trading decisions generated by RL agents can be continuously monitored and adjusted for explicitly managing the risk exposures while keeping the exploration ability of RL approaches to search for profitable strategies. 2. Compared with the previous RL-BF methods only tested in simple cases, the risk controller of the proposed framework combines the second-order cone programming and BF-based constraints to formulate more complex applications in actual financial markets. By modeling the relationship between investment risks and acceptable risk ranges, the potential risks are effectively reduced particularly in downtrend markets. 3. Instead of completely dominating RL agents by controllers the whole time, two adaptive mechanisms in the RiPO are described to flexibly adjust the impact of risk controllers in terms of investor preference and market states, which earns higher returns by loosing risk constraints in uptrend markets but strictly manages risks in downtrend markets for reducing losses. Due to the nature of financial markets, it should be pointed out that portfolio risk management is not an absolute control that manages risks under any expected level in any case. In fact, the proposed framework is expected to avoid risky investments as possible so that the maximum drawdown and overall losses can be reduced. ## 2 Preliminaries ### Portfolio Optimization Online portfolio management is a multi-period trading strategy that the capital is periodically reallocated to the selected assets. In this work, there are two assumptions listed below. **Assumption 1**.: _The portfolio will be only considered from long positions in this work._ **Assumption 2**.: _The turnover rate of assets in a portfolio satisfies the requirements of each order execution._ Assumption 1 implies that investors cannot short assets unless they hold the long positions, while Assumption 2 encourages the evaluation of the proposed framework more close to reality. Based on these considerations, two primary objectives in portfolio optimization task are given, return maximization and risk minimization, respectively. Some basic financial terms are introduced as follows: **Definition 1**.: _(Portfolio Value) The value of a portfolio at time \(t\) can be denoted by_ \[C_{t}=\sum_{i=1}^{N}w_{t,i}p_{t,i}^{c}, \tag{1}\] _where \(N\) is the number of assets in a portfolio, \(w_{t,i}\) is the weight of an \(i^{\text{th}}\) asset, and \(p_{t,i}^{c}\) is the close price of \(i^{\text{th}}\) asset at time \(t\). 
Therefore, the portfolio is constrained based on Assumption 1 and 2 as_ \[\forall w_{t,i}\in\mathbf{W}_{t}:\quad w_{t,i}\geq 0,\sum_{i=1}^{N}w_{t,i}=1, \tag{2}\] _where \(\mathbf{W}_{t}\in\mathbf{W}\) is the weight vector \(\mathbf{W}\) at time \(t\)._ Definition 1 implies that the risk would be varied in terms of purposes, the corresponding covariance-weight risk from the Markowitz model [23] and the volatility of strategies provided a view of the short-term risk and long-term risk. **Definition 2**.: _(Short-term Risk) The portfolio risk at time \(t\) can be presented as below_ \[\begin{split}&\sigma_{p,t}=\sigma_{\beta}+\sigma_{\alpha,t}\\ &\sigma_{\alpha,t}=\sqrt{\mathbf{W}_{t}^{T}\Sigma_{k}\mathbf{W}_ {t}}=\|\Sigma_{k}\mathbf{W}_{t}\|_{2}.\end{split} \tag{3}\] _where \(\sigma_{\alpha,t}\) is the trading strategy risk, \(\sigma_{\beta}\) is the market risk and \(\mathbf{W}_{t}\in\mathcal{R}^{N\times 1}\) is the matrix of weights. The covariance matrix \(\Sigma_{k}\in\mathcal{R}^{N\times N}\) between any two assets can be calculated by the rate of daily returns of assets in the past \(k\) days._ **Definition 3**.: _(Long-term Risk) The strategy volatility is used to measure the portfolio risk in the whole trading period, which is the sample variance of daily return rate of the trading strategy._ **Definition 4**.: _(Sharpe Ratio) The Sharpe Ratio (SR) is a usual performance indicator for evaluating a portfolio with the consideration of returns \(R\), risk-free rate \(r_{f}\) and portfolio risk \(\sigma\), which is given as_ \[SR=\frac{R-r_{f}}{\sigma}. \tag{4}\] Portfolio optimization has been studied in few decades. The technical analysis methods can be concluded into four categories including Follow-the-Winner, Follow-the-Loser, Pattern Matching Approaches, and Meta-Learning Algorithms [18]. They try to capture the price momentum by using handcrafted financial indicators. Recently, more investors are attracted by DL/RL technique. Except for the regular price data, [21, 29] introduce news data to collect extra information for the portfolio management. In terms of model structures, [27, 16] present specific modules to deal with assets information independently and also capture the correlations among assets. In addition, [28] adjusts portfolios and optimizes trading time points to achieve the online trading in minute levels. Nevertheless, most of the studies on portfolio optimization cannot explicitly constrain the investment risk exposure when using RL-based approaches to explore profitable strategies. ### Barrier Function Originally inspired from Lyapunov functions, barrier function is introduced to identify safe regions and drive controllers working inside the defined safe boundaries in control theory [2, 3]. Assume that a system dynamic can be denoted as \[s_{t+1}=f(s_{t})+g(s_{t})a_{t}+d(s_{t}), \tag{5}\] where \(s_{t}\in S\) is the system state at \(t\), \(a\in A\) is the action at \(t\), \(f:S\to S\) is the nominal unactuated dynamics, \(g:S\to A\) is the nominal actuated dynamics and \(d:S\to S\) is the unknown dynamics. A safe set \[C=\{s\in S:h(s,a)\geq 0\}, \tag{6}\] can be described by the superlevel set of a barrier function \(h:S\to\mathbf{R}\) in this dynamical system, where \(h\) is a continuously differentiable function and also satisfies that \(\frac{\partial f}{\partial s}\neq 0\) when \(h(s)=0\). 
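In the portfolio setting considered here, the relevant state constraint is the short-term risk of Definition 2 staying below an acceptable level \(\sigma_{s}\). The NumPy sketch below computes the corresponding barrier value, assuming the linear barrier \(h(\sigma_{p})=\sigma_{s}-\sigma_{p}\) that is made explicit in Eq. (17) below; the function names are illustrative, and the safe set of Eq. (6) is exactly the region where this value is non-negative.

```python
import numpy as np

def strategy_risk(weights, daily_returns):
    """Definition 2: sigma_alpha = sqrt(w^T Sigma_k w), with Sigma_k the covariance of
    per-asset daily returns over the past k days (rows = days, columns = assets)."""
    sigma_k = np.cov(np.asarray(daily_returns), rowvar=False)
    return float(np.sqrt(weights @ sigma_k @ weights))

def risk_barrier(sigma_acceptable, weights, daily_returns, market_risk=0.0):
    """Barrier value h(sigma_p) = sigma_s - sigma_p for the acceptable-risk region;
    the portfolio state lies in the safe set C whenever this value is >= 0."""
    sigma_p = market_risk + strategy_risk(weights, daily_returns)
    return sigma_acceptable - sigma_p
```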
According to Nagumo's Theorem [5], the safe set \(C\) will be forward invariant if there exists \[\forall s\in C,\quad\frac{\Delta h(s_{t},a_{t})}{\Delta t}\geq 0, \tag{7}\] where \(\Delta t\) represents a time interval, and \(\Delta h(s_{t},a_{t})=h(s_{t+1})-h(s_{t})\) when considering a discrete-time barrier function. Further, considering the relaxation for safe constraints with a locally Lipschitz class \(\mathcal{K}\) function \(K\) such that \[\sup_{a_{t}\in A}\left[h(s_{t+1})-h(s_{t})+K(h(s_{t}))\right]\geq 0, \tag{8}\] If there exists a feasible action \(a_{t}\) satisfying the above BF-based constraint, then the system can be expected stay at the safe state at time \(t+1\). This can fill the gap that the explored actions generated by RL agents may not concern the status of each state due to the long-term reward expectation. ## 3 Problem Formulation Since financial markets are influenced by many factors like unpredictable black swan events and system risks, it is difficult to collect all relevant information for a perfect investment decision. Therefore, instead of directly capturing the actual market states that are the hidden states of financial markets, the trading strategies can only rely on part of observable market data. Typically, the meta observable market data are the historical prices and volumes of each asset in a portfolio. ### Partially Observable Markov Decision Process For simplifying the optimization process, it is assumed that the next actual market state \(s_{t+1}\) solely depends on the current actual market state \(s_{t}\) as: \(p(s_{t+1}|s_{t},s_{t-1},\ldots,s_{1})=p(s_{t+1}|s_{t}),s\in S\), where \(p\) is the conditional probability and \(S\) is a finite set of actual states. Besides, the set of meta observable state of \(\hat{\pi}^{\text{th}}\) asset at timestamp \(t\) can be denoted as \(o_{t,i}^{\text{meta}}=\{p_{i,i}^{o},p_{t,i}^{l},p_{i,i}^{l},p_{i,i}^{e},\text{ vol}_{t,i}\}\), where \(p_{t,i}^{o},p_{t,i}^{h},p_{t,i}^{l},p_{t,i}^{c}\) are the open/high/low/close price, and \(\text{vol}_{t,i}\) is the trading volume. Furthermore, except for the meta observable states, some extra technical indicators derived from the \(o_{t,i}^{\text{meta}}\) will be introduced to be a part of observable market states to help analyze underlying patterns of market trends. Define \(o_{t,i}^{tech}=\{k_{t,t_{1}1},k_{t,i,2},\ldots,k_{t,i,j}\}\), where \(o_{t,i}^{tech}\) is a set of technical indicators, and \(k_{t,i,j}\) is the \(j^{\text{th}}\) technical indicator of \(i^{\text{th}}\) asset at timestamp \(t\). Beyond that, the current account status can be observed and be considered to make reasonable trading signals, which can be denoted as \(o_{t}^{a}=\log\frac{C_{t}}{C_{\text{init}}}\), where \(C_{t}\) is the current capital and \(C_{\text{init}}\) is the initial capital. In general, the portfolio management process can be modeled as a POMDP that can be defined as a tuple \((S,A,T,R,\Omega,O,\gamma)\), where \(S\) denotes a finite set of actual market states, \(A\) is a finite set of actions, \(T(s_{t+1}|s_{t},a_{t})\) denotes a set of conditional transition probabilities between \(s_{t+1}\) and \(s_{t}\) under the action \(a_{t}\), \(R(s_{t+1}|s_{t},a_{t})\) presents the reward function, \(\Omega\) indicates a finite set of observable states, \(O(o_{t+1}|s_{t+1},a_{t})\) is a finite set of conditional observation probabilities between \(o_{t+1}\) and \(s_{t+1}\) under the action \(a_{t}\), and \(\gamma\in[0,1)\) is the discount factor. 
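The observable state assembled above can be concretized as a flat feature vector combining \(o^{meta}\), \(o^{tech}\), and the account status \(o^{a}=\log(C_{t}/C_{\text{init}})\). The sketch below is schematic: the array shapes, the flattening, and the function name are assumptions of ours, and the actual agent may consume these components as separate tensors.

```python
import numpy as np

def build_observation(ohlcv_window, technical_indicators, capital, initial_capital):
    """Assemble o_t = (o^meta, o^tech, o^a): a lookback window of open/high/low/close/volume
    per asset, derived technical indicators, and the account status log(C_t / C_init)."""
    o_meta = np.asarray(ohlcv_window, dtype=np.float32).ravel()
    o_tech = np.asarray(technical_indicators, dtype=np.float32).ravel()
    o_account = np.array([np.log(capital / initial_capital)], dtype=np.float32)
    return np.concatenate([o_meta, o_tech, o_account])
```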
The objective of portfolio management is to learn the decision policy \(\pi:S\to A\) that can maximize the expected total rewards at all timestamps. It can be defined as \[J(\pi^{*})=\max_{\pi\in\Pi}\;\mathbf{E}_{\tau\sim\pi}[\sum_{t=1}^{\infty}\gamma ^{t-1}R_{t}], \tag{9}\] where \(\pi^{*}\) is the optimal policy, \(\tau\sim\pi\) is a trajectory under the policy \(\pi\) and \(\Pi\) is the possible policy space. To further approximate the solutions of POMDP problems, the history of previous observations up to the current timestamp can be recognized as a pseudo-state to estimate the actual state through a mapping function \(\phi:\mathcal{H}\rightarrow\phi(\mathcal{H})\), where \(\phi\left(\mathcal{H}\right)=\phi\left(H\right)|H\in\mathcal{H}\), \(H_{t}\in\mathcal{H}\) is the observation history up to timestamp \(t\), \(\mathcal{H}\) is the space of all possible observable histories [11]. Furthermore, the POMDP problem can be reformulated as a tuple \((\hat{S},A,\hat{T},\hat{R},\gamma)\). Specifically, \(\hat{S}=\phi(\mathcal{H})\) is the estimated actual states, \(\hat{T}(\hat{s}_{t+1}|\hat{s}_{t},a_{t})\) is the estimated transition function in which \(\hat{s}_{t+1},\hat{s}_{t}\in\hat{S}\) and \(a_{t}\in A\), \(\hat{R}(\hat{s}_{t+1}|\hat{s}_{t},a_{t})\) is the estimated reward function, and the decision policy \(\pi:\;\phi\left(\mathcal{H}\right)\to A\) with \(\pi\in\Pi\). Accordingly, the bellman equation should be reformulated as \[\begin{split} V^{\pi}\left(\hat{s}_{t}\right)=& R\left(\hat{s}_{t},\;\pi\left(\hat{s}_{t} \right)\right)+\\ &\gamma\sum_{\hat{s}_{t+1}}P\left(\hat{s}_{t+1}|\hat{s}_{t},\pi \left(\hat{s}_{t}\right)\right)V^{\pi}(\hat{s}_{t+1}),\end{split} \tag{10}\] where \(\hat{s}_{t}=\phi(H_{t})\), \(V^{\pi}\left(\hat{s}_{t}\right)\) is the expected reward in \(\hat{s}_{t}\) under the policy \(\pi\). ### Observation and Action Another featured property of portfolio optimization is that the trading signals given by the agent will not significantly influence the trend of asset prices unless the trading volumes are large enough and the paper will not discuss such extreme cases. Thus, the market observation function can be reformulated as \[O(o_{t+1}^{meta},o_{t+1}^{tech}|s_{t+1},a_{t})\approx O(o_{t+1}^{meta},o_{t+1}^ {tech}|s_{t+1}), \tag{11}\] As discussed in [22], the conditional transition probability between account status \(o_{t+1}^{a}\) and \(a_{t}\) can be expressed as \(O(o_{t+1}^{a}|a_{t})\) Then the observation transition probability can be reformulated as \(O(o_{t+1}|s_{t+1},a_{t})\), where \(o_{t+1}=(o_{t+1}^{meta},o_{t+1}^{tech},o_{t+1}^{a})\) and \(o_{t+1}\in\Omega\). Related technical indicators are listed in Appendix. Since the financial markets in different countries have different regulations. To simplify the trading behaviors, only the long position is considered in this paper. The normalized weight of \(i^{\text{th}}\) asset in a portfolio is defined as \(a_{t,i}\), where \(a_{t,i}\in[0,\ 1]\), \(\sum_{i=1}^{N}a_{t,i}=1\), and \(N\) is the number of assets in the portfolio. To close the realistic trading environment, two practical factors including transaction cost \(\varsigma\) and slippage \(\xi\) are considered in each transaction. 
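Since only long positions are allowed and the weights must lie on the simplex, the raw agent output has to be mapped to valid portfolio weights before execution. A softmax projection is one natural choice, sketched below with an illustrative function name; the paper does not prescribe a specific normalization.

```python
import numpy as np

def to_portfolio_weights(raw_action):
    """Map an unconstrained agent output to long-only weights a_{t,i} in [0, 1]
    with sum_i a_{t,i} = 1, via a softmax projection onto the simplex."""
    z = np.asarray(raw_action, dtype=np.float64)
    z = z - z.max()                      # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()
```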
The reward at timestamp \(t\) can be defined as \[r_{t}=\left[-\varsigma+\sum_{i=1}^{N}a_{t,i}\left(\frac{p_{t,i}^{c}-p_{t-1,i}^{c}+\xi}{p_{t-1,i}^{c}}\right)\right]\eta, \tag{12}\] where \(\xi\sim\mathcal{U}\left(-\xi_{lower},\ \xi_{upper}\right)\), \(\xi_{lower}\) and \(\xi_{upper}\) are the lower and upper boundaries of slippage, and \(\eta\) is the scaling factor in the reward function. ## 4 Methodology ### Overall Framework An overview of the RiPO is depicted in Fig. 1. The final investment strategy in the RiPO framework comes from the RL-based trading agent and the risk management module. More specifically, the risk management module includes a BF-based risk controller, a Dynamic Contribution Mechanism (DCM), and an Adaptive Risk Strategy (ARS). Initially, based on the current market states and the learned policy, the RL-based trading agent suggests the weights of assets in a portfolio for the next trading period. However, some suggestions may ignore short-term risks as the RL-based trading agents are expected to earn long-term profits. To balance the expected long-term returns and short-term risks, the risk controller evaluates the risk exposure of the original RL-based trading strategies and dynamically adjusts the portfolio to manage the near-future risk within an acceptable range. Since the financial market always changes, the risk controller should adapt to different market states for higher returns and lower risks. Thus, two adaptive mechanisms named DCM and ARS enhance the flexibility of the risk controller to monitor the RL-based trading agents in terms of the controller's impact and the strength of risk constraints. The detailed steps of the RiPO framework are described in Algorithm 1.
```
0: RL algorithm settings, portfolio trading settings
0: The optimal RL policy \(\pi^{*}\)
1: Initialize the RL policy \(\pi_{0}\) and memory tuple \(\hat{D}\).
2: for \(k=1\) to \(Episode\) do
3:   Reset the trading environment and set the initial action \(a_{0}\).
4:   for \(t=1\) to \(T\) do
5:     Observe the current market state \(o_{t}\).
6:     Calculate the reward \(r_{t-1}\).
7:     Store tuple (\(o_{t-1}\), \(a_{t-1}\), \(o_{t}\), \(r_{t-1}\)) in \(\hat{D}\).
8:     Sample suggested action \(a_{t}^{RL}\) from the RL-based agent in terms of the current policy \(\pi\).
9:     Update acceptable risk \(\sigma_{s,t}\) by using the ARS module.
10:    Collect adjusted action \(a_{t}^{Ctrl}\) from the risk controller.
11:    Update the contribution factor of the risk controller \(\lambda_{t}\).
12:    Adjust the current portfolio with the action \(a_{t}=a_{t}^{RL}+\lambda_{t}a_{t}^{Ctrl}\).
13:   if the RL policy update condition is triggered then
14:     Update the RL policy \(\pi\) by learning the historical trading data from \(\hat{D}\).
15:   end if
16:  end for
17: end for
18: return the optimal RL policy \(\pi^{*}\)
```
**Algorithm 1** The Training Procedure of the RiPO Framework
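To make the per-step reward used in Algorithm 1 concrete, the following is a minimal sketch of Eq. (12), including the transaction cost and uniformly sampled slippage (function and variable names are illustrative and not taken from any released code):

```python
import numpy as np

def step_reward(weights, close_prev, close_now, varsigma, xi_lower, xi_upper, eta, rng=None):
    """Reward of Eq. (12): scaled portfolio return net of transaction cost and slippage.

    weights:    portfolio weights a_{t,i} (length N, long-only, summing to 1)
    close_prev: previous close prices p^c_{t-1,i};  close_now: current close prices p^c_{t,i}
    varsigma:   transaction cost;  xi_lower / xi_upper: slippage bounds;  eta: scaling factor
    """
    if rng is None:
        rng = np.random.default_rng()
    xi = rng.uniform(-xi_lower, xi_upper)   # xi ~ U(-xi_lower, xi_upper)
    asset_returns = (np.asarray(close_now) - np.asarray(close_prev) + xi) / np.asarray(close_prev)
    return (-varsigma + float(np.dot(weights, asset_returns))) * eta
```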
### Barrier Function-based Risk Management with Reinforcement Learning The RL-based portfolio optimization approach has great exploration capabilities in discovering profitable strategies, but it may give unreasonable actions when the current data distribution shifts as the financial market changes. Conversely, programming-based methods can strictly satisfy the required constraints yet lack the ability to explore the underlying patterns in raw data. Given that, by integrating the barrier function to constrain the portfolio risk exposure, a model-based risk controller is formulated to cooperate with RL-based trading agents for modeling the online portfolio optimization problem concerning both long-term returns and short-term risks. First, the system dynamics can be written as \[\begin{bmatrix}\hat{P}_{t+1}^{c}\\ \sigma_{p,t+1}\end{bmatrix}=\begin{bmatrix}\Delta P_{t+1}\\ \sigma_{\beta}\end{bmatrix}+\begin{bmatrix}0\\ \Sigma_{k,t+1}^{\frac{1}{2}}\end{bmatrix}a_{t}, \tag{13}\] where \(a_{t}=a_{t}^{RL}+a_{t}^{Ctrl}\) represents the upcoming adjusted weight of assets at time \(t\), \(\Sigma_{k,t+1}\) is the covariance matrix, and \(\sigma_{p,t}\), \(\sigma_{\beta}\) are the portfolio risk and market risk as denoted in Definition 2. In terms of the risk-aware investment intuition, the objective of the risk controller is to reduce the loss of expected profits while satisfying the risk constraint. Thus, the risk controller can be modeled as \[\begin{split}& a_{t}^{Ctrl}=\operatorname*{arg\,min}_{a_{t}^{Ctrl}}\ \sum_{i=1}^{N}-a_{t,i}^{Ctrl}\Delta p_{t+1,i}\\ \text{s.t.}\quad& h\left(\sigma_{p,t+1}\right)-h\left(\sigma_{p,t}\right)+\alpha\left(h\left(\sigma_{p,t}\right)\right)\geq 0,\\ & 0\leq a_{t,i}^{RL}+a_{t,i}^{Ctrl}\leq 1,\ \forall i\in\{1,2,\ldots,N\},\\ &\sum_{i=1}^{N}\left(a_{t,i}^{RL}+a_{t,i}^{Ctrl}\right)=1,\end{split} \tag{14}\] where \(a_{t}^{Ctrl}\), \(a_{t}^{RL}\in\mathbf{R}^{N}\), and \(\Delta p_{t+1}\) is the estimated price change from \(t\) to \(t+1\), obtained with a moving average in this paper. For the risk constraint \(\sigma_{p,t}\in[0,\sigma_{s,t}]\), there exists an acceptable set \(C\) such that \[C=\{\sigma_{p,t}:h\left(\sigma_{p,t}\right)\geq 0\}, \tag{15}\] where \(\sigma_{s,t}\) is the upper boundary of acceptable risk at \(t\). Then, the portfolio risk can be managed within an acceptable region if it satisfies \[\sup_{a_{t}\in A}\left[h\left(\sigma_{p,t+1},\ a_{t}\right)-h\left(\sigma_{p,t}\right)+K(h(\sigma_{p,t}))\right]\geq 0. \tag{16}\] Define \(h=p^{T}s+q\) (\(p\in\mathbf{R}^{n},q\in\mathbf{R}\)), and let \[c^{*}=h(\sigma_{s,t+1},\sigma_{\beta})-h\left(\sigma_{p,t}\right)+\alpha(h(\sigma_{p,t})).\] Furthermore, considering the portfolio risk constraint \(0\leq\sigma_{p,t}\leq\sigma_{s,t}\), the barrier function \(h\) can be redefined as \[h\left(\sigma_{p,t}\right)=\sigma_{s,t}-\sigma_{p,t}. \tag{17}\] The BF-based constraint can then be reformulated as \[\begin{split}\sigma_{p,t+1}&=\sqrt{a_{t}^{T}\;\Sigma_{k,t+1}a_{t}}\\ &=\|\Sigma_{k,t+1}^{\frac{1}{2}}a_{t}\|_{2}\\ &=\|\Sigma_{k,t+1}^{\frac{1}{2}}a_{t}^{Ctrl}+\Sigma_{k,t+1}^{\frac{1}{2}}a_{t}^{RL}\|_{2}\\ &\leq c^{*},\end{split} \tag{18}\] where \(\Sigma_{k,t+1}\) is calculated from the historical price series and the estimate of \(\Delta p_{t+1}\), and \(\Sigma_{k,t+1}^{\frac{1}{2}}a_{t}^{RL}\) is a constant. Thus, the risk controller can be solved as a second-order cone program. A more detailed derivation is given in the Appendix. After collecting the compensating adjustment \(a_{t}^{Ctrl}\) of the portfolio from the BF-based controller, the final trading decision \(a_{t}\) is made for the next trading period. The reward \(r_{t}\) and its action \(a_{t}\) are stored in the memory of the RL algorithm for further training, which promotes RL training efficiency toward reaching the optimal policy.
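A minimal sketch of the risk controller in Eqs. (14)-(18) as a second-order cone program is given below; it uses cvxpy as the solver interface, which is an assumption of this sketch and not something specified by the paper:

```python
import cvxpy as cp
import numpy as np

def risk_controller(a_rl, delta_p, sigma_sqrt, c_star):
    """Compute the compensating adjustment a_t^Ctrl of Eq. (14).

    a_rl:       RL-suggested weights a_t^RL (length N, summing to 1)
    delta_p:    estimated price changes Delta p_{t+1} (length N)
    sigma_sqrt: matrix square root of the covariance Sigma_{k,t+1}, shape (N, N)
    c_star:     right-hand side of the BF-based risk constraint in Eq. (18)
    """
    n = len(a_rl)
    a_ctrl = cp.Variable(n)
    objective = cp.Minimize(-(delta_p @ a_ctrl))              # minimize loss of expected profit
    constraints = [
        cp.norm(sigma_sqrt @ (a_rl + a_ctrl), 2) <= c_star,   # portfolio risk bound, Eq. (18)
        a_rl + a_ctrl >= 0,                                   # long-only weights
        a_rl + a_ctrl <= 1,
        cp.sum(a_rl + a_ctrl) == 1,                           # weights sum to one
    ]
    cp.Problem(objective, constraints).solve()
    return a_ctrl.value
```

If the constraint set is infeasible for the current \(c^{*}\), the relaxation described later (iteratively enlarging the acceptable risk) would be applied before re-solving.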
### Dynamic Contribution Mechanism Intuitively, the risk preferences of investors are not fixed over the whole trading period in actual financial markets. In particular, investors are willing to take higher investment risks to gain higher returns in uptrend markets, so more aggressive and risky investment strategies are allowed at such moments. Conversely, when a downtrend market appears, most investors prefer to strictly constrain the portfolio risk to avoid huge losses, even though they may miss some profitable opportunities. Inspired by [25], which uses a non-linear transformation to balance exploration and exploitation at different optimization stages, a dynamic contribution mechanism is introduced to adaptively regulate the impact of the risk controller at each transaction by considering the strategy risk exposure and investor preferences, which balances the exploration of the RL-based agents and the exploitation of risk management. To be more specific, according to the recent performance of the trading strategy, a scaling factor \(\lambda_{t}\in[0,1]\) is given by a non-linear transformation to update the contribution of the risk controller to the final trading signals. A trading strategy with greater losses will be subject to tighter risk constraints to avoid aggressive investment decisions from RL-based agents. \[\lambda_{t}=\begin{cases}\left(m+G\right)^{(1-G)},&R_{s}-r_{f}<0,\\ m,&\text{otherwise},\end{cases} \tag{19}\] where \(G=\min(\frac{|R_{s}-r_{f}|}{v},1)\) and \(m\in[0,1]\) denotes the minimal impact of the risk controller on the overall trading strategy. A higher \(m\) represents stricter risk management, in which \(\lambda_{t}\) will be larger for the same strategy loss. Note that the risk requirements are strictly constrained over the whole trading period when \(m=1\), such that \(\lambda_{t}=1\). \(v\in(0,1]\) represents the risk appetite of investors. Qualitatively, a lower \(v\) (\(v\to 0\)) implies less tolerance of investment risk; it means that even small short-term losses will trigger strict risk control. \(R_{s}\) is the recent performance of the trading strategy; specifically, \(R_{s}\) is given by the moving average of the daily returns of the trading strategy in this paper. Furthermore, the final trading decision at time \(t\) is revised as \[a_{t}=a_{t}^{RL}+\lambda_{t}a_{t}^{Ctrl}, \tag{20}\] where \(a_{t}^{RL}\) satisfies \(\sum_{i=1}^{N}a_{t,i}^{RL}=1\) and \(a_{t}^{Ctrl}\) satisfies \(\sum_{i=1}^{N}a_{t,i}^{Ctrl}=0\). Thus, the components of \(a_{t}\) add up to 1.
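The DCM reduces to a few lines of code; the sketch below (illustrative names, inputs assumed to be precomputed) implements Eqs. (19) and (20):

```python
import numpy as np

def contribution_factor(r_s, r_f, m, v):
    """Scaling factor lambda_t of Eq. (19).

    r_s: recent strategy performance (moving average of daily returns)
    r_f: risk-free rate
    m:   minimal impact of the risk controller, in [0, 1]
    v:   risk appetite of the investor, in (0, 1]
    """
    if r_s - r_f < 0:
        g = min(abs(r_s - r_f) / v, 1.0)
        return (m + g) ** (1.0 - g)
    return m

def final_action(a_rl, a_ctrl, lam):
    """Final trading decision of Eq. (20); a_rl sums to 1 and a_ctrl sums to 0."""
    return np.asarray(a_rl) + lam * np.asarray(a_ctrl)
```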
### Adaptive Risk Strategy As strict risk management would lead trading agents to miss some potentially profitable opportunities in uptrend markets, the strength of the risk constraint in the BF-based controller should be dynamic with respect to the risk preferences of investors and the current financial market state: investors expect lower investment risks in downtrend markets while allowing relatively high risks in uptrend markets to earn higher profits. Therefore, a simple yet efficient adaptive risk strategy is introduced to enhance the adaptability of the proposed framework to the actual financial market. Considering the balance between expected returns and acceptable risks, the adaptive risk upper boundary \(\sigma_{s,t+1}\) for the BF-based risk constraint is shown below. \[\sigma_{s,t+1}=\begin{cases}\sigma_{s,\min},&\tilde{R}_{t+1}\in(-\infty,(1-\mu)r_{f})\,,\\ M\tilde{R}_{t+1}+b,&\tilde{R}_{t+1}\in[(1-\mu)r_{f},(1+\mu)r_{f}]\,,\\ \sigma_{s,\max},&\tilde{R}_{t+1}\in((1+\mu)r_{f},+\infty)\,,\end{cases} \tag{21}\] where \(M=\frac{\sigma_{s,\max}-\sigma_{s,\min}}{2\mu r_{f}}\), \(b=\frac{(1+\mu)\sigma_{s,\min}-(1-\mu)\sigma_{s,\max}}{2\mu}\), \(\tilde{R}_{t+1}\) is the expected return, \(\sigma_{s,\min}\) and \(\sigma_{s,\max}\) are the minimum and maximum values of \(\sigma_{s,t+1}\), and \(\mu\) is a user-defined factor representing the investor's aversion to future risk. A smaller \(\mu\) makes the risk constraint more sensitive to fluctuations of trading performance, in which a stricter risk requirement is assigned to trading agents in downtrend markets. The linear transformation derivation of \(\sigma_{s}\) is described in the Appendix. Yet there may be no optimal solution satisfying the strict risk constraint under the current market situation. Thus, the risk constraint will be iteratively relaxed by a certain step size until the risk controller finds a feasible solution or a stopping criterion is met. Figure 1: The Overview of RiPO. ## 5 Experiments To carefully examine the performance of the proposed RiPO framework, stock datasets with different market styles are selected to evaluate the RiPO and the compared methods. Four research questions are examined in Section 5.2. ### Experimental Settings **Datasets**: To evaluate the performance of the methods in the real financial market, the daily OHLCV data of constituent stocks of the S&P500 index in the U.S. market is collected from _Yahoo Finance_. The top 10 stocks are selected to construct a portfolio in terms of market capitalization. The top 10 stocks, accounting for over 26% of the S&P500's market capital, reflect U.S. market trends and provide high liquidity, satisfying the turnover rate assumption. Considering that most portfolio optimization methods may fail when the market style changes due to data distribution shifts, all compared methods are tested on two market style settings. As defined in Table 1, the three subsets of MS-1 represent an uptrend financial market, aiming to compare the exploration ability of the methods in searching for profitable strategies. On the other hand, the training set of MS-2 depicts a relatively stationary market, but the validation data and test data are in a highly volatile and downtrend market due to the COVID-19 pandemic, which evaluates the risk management ability of the methods when meeting unexpected crises. **Comparative Methods**: In terms of investment principles like follow-the-winner, pattern matching, and DL/RL, nine methods are selected for comparison with the proposed framework in this paper. They are Constant Rebalanced Portfolio (CRP, Neutral) [7], Exponential Gradient (EG, Follow-the-Winner) [14], Online Moving Average Reversion (OLMAR, Follow-the-Loser) [17], Passive Aggressive Mean Reversion (PAMR, Follow-the-Loser) [20], Correlation-driven Nonparametric Learning Strategy (CORN, Pattern Matching) [19], Ensemble of Identical Independent Evaluators (EIIE, DL/RL) [16], Portfolio Policy Network (PPN, DL/RL) [30], Relation-Aware Transformer (RAT, DL/RL) [27], and the original Twin Delayed DDPG (TD3, DL/RL) [12]. **Metrics**: To evaluate the returns and risks of the compared methods, three metrics that are common in both academia and industry are applied to measure the performance of trading strategies: 1. Annual Return: \(\text{AR}=(1+\text{TR})^{\frac{252}{T}}-1\), where \(\text{TR}\) is the return over the trading period and \(T\) is the number of trading days. 2. Maximum Drawdown: \(\text{MDD}=\max\limits_{t_{1}<t_{2}}\frac{C_{t_{1}}-C_{t_{2}}}{C_{t_{1}}}\), where \(C_{t_{1}}\) and \(C_{t_{2}}\) are the portfolio values at times \(t_{1}\) and \(t_{2}\). 3. Sharpe Ratio: \(\text{SR}=\frac{\text{AR}-r_{f}}{\sigma_{v}}\), where \(\sigma_{v}\) is the strategy volatility in terms of daily returns and \(r_{f}\) is the risk-free rate. In particular, the SR is set to 0 when the annual return is negative.
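For reference, the three metrics can be computed from a realized portfolio-value series as in the following sketch (function names are illustrative; taking \(\sigma_{v}\) as the plain standard deviation of daily returns, without annualization, is an assumption of this sketch):

```python
import numpy as np

def annual_return(total_return, trading_days):
    """AR = (1 + TR)^(252 / T) - 1."""
    return (1.0 + total_return) ** (252.0 / trading_days) - 1.0

def max_drawdown(portfolio_values):
    """MDD = max over t1 < t2 of (C_t1 - C_t2) / C_t1."""
    values = np.asarray(portfolio_values, dtype=float)
    running_peak = np.maximum.accumulate(values)
    return float(np.max((running_peak - values) / running_peak))

def sharpe_ratio(ar, risk_free_rate, daily_returns):
    """SR = (AR - r_f) / sigma_v, set to 0 when the annual return is negative."""
    if ar < 0:
        return 0.0
    sigma_v = float(np.std(daily_returns))
    return (ar - risk_free_rate) / sigma_v
```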
**Implementation Details**: TD3 is one of the popular RL algorithms and is selected to train the RL-based trading agents in the RiPO framework. The implementation of the TD3 algorithm follows [24], while the other baseline algorithms use the default settings from their papers. The detailed RiPO settings of the experiments are given in the Appendix. Besides, for a fair comparison and to avoid future data leakage, the validation set is used to fine-tune the hyper-parameters while the test set is applied to compare the performance of the methods. Furthermore, all experiments are run 10 times to reduce the randomness and bias caused by random seeds. The average value and standard deviation of all methods are compared. Additionally, the Wilcoxon rank-sum test [8] is used to compare the statistical significance of the proposed framework against the compared approaches at a significance level of 0.05. ### Performance Comparison and Analysis **Q1: How does the RiPO framework perform in terms of profitability and risk exposure in uptrend and downtrend markets?** As shown in Table 2, seven approaches achieve positive returns in the MS-1 dataset, among which the returns of the RiPO framework are more than \(4\%\) higher than those of the other methods while maintaining a relatively low MDD at around \(22\%\). Although some baseline algorithms like PPN and RAT have lower MDD, their profitability is limited and some profitable strategies may be missed. The best SR of \(0.72\), achieved by the RiPO method, demonstrates its capability to balance the profits and risk exposures of trading strategies. It reveals that the RiPO framework dynamically relaxes the risk constraint within an acceptable range to pursue higher profits in uptrend markets. When testing on the MS-2 dataset, none of the baseline methods are profitable during the trading period. They may lose around \(13\%\) to \(37\%\) of the portfolio value each year. Meanwhile, those methods carry higher short-term risks, in which investors may suffer a huge loss in a short period. \begin{table} \begin{tabular}{l l l l} \hline \hline Dataset & Training & Validation & Test \\ \hline MS-1 & @2015-@2017 & @2018 & @2019 \\ MS-2 & @2015-@2019 & @2020 & @2021-10/31/22 \\ \hline \hline \multicolumn{4}{c}{@: from \(01/01\) to \(12/31\) of this year} \\ \hline \hline \end{tabular} \end{table} Table 1: Description of the Dataset Figure 2: The Portfolio Value Comparison in the MS-2 Dataset. Figure 3: The Short-term Risk Comparison in the MS-2 Dataset. Compared with the baseline methods, the RiPO framework only loses \(6.58\%\) per year while the MDD is significantly reduced to \(25.77\%\), against the MDD of the other methods which is over \(48\%\). It demonstrates the outstanding ability of the RiPO to manage the risk in downtrend markets and avoid great losses.
Especially when encountering unexpected crises, the trading agents may not suggest reasonable trading signals, so risky investments should be strictly constrained. Besides, the significance test results \((better/equal/worse)\) are \(4/5/0\) and \(9/0/0\) in MS-1 and MS-2, respectively. It is evident that the RiPO achieves remarkable performance in MS-2 when compared to the other baselines. More importantly, the RiPO framework, which integrates risk controllers and TD3 as the trading agent, outperforms the original TD3 approach in both the MS-1 and MS-2 markets in terms of AR, MDD, and SR. This clearly demonstrates the effectiveness of the proposed risk controller in managing the potential investment risks caused by the aggressive trading strategies generated by RL-based agents. **Q2: Can the RiPO framework effectively reduce downside risk?** Downside risk is one of the biggest concerns for investors managing portfolios. Fig. 2 and Fig. 3 show the portfolio value and short-term risk of the compared methods over the test period. As highlighted in the red rectangular boxes, although the S&P500 (black dashed line) has a significant decline, the risky investments of the RiPO framework (red line) are strictly constrained to avoid huge losses, while the portfolio values of the other frameworks suffer great losses. Besides, the short-term risks of the RiPO are carefully managed at a much lower level than those of the other methods during the whole downtrend period (see the red rectangle in Fig. 3), which further proves the outstanding capability of the RiPO to manage investment risks. **Q3: How do the hyper-parameters that reflect the risk appetite of investors affect the RiPO framework?** As described in the previous sections, there are three key hyper-parameters of the adaptive mechanisms DCM and ARS reflecting investors' preferences to balance the strength of risk management and the exploration of trading strategies. As shown in Table 3, a higher \(m\) yields a better AR (from \(-9.43\%\) to \(-6.58\%\)) and a lower MDD (from \(29.47\%\) to \(25.56\%\)) by enhancing the impact of the risk controller on the RL-based agents in downtrend markets. However, some profitable opportunities may be missed when \(m\) is set to a high value (i.e., \(m=1\)); the AR decreases to \(-6.69\%\) as most risky investments are restricted. Similarly, a lower \(v\) represents higher risk aversion, which reduces the MDD from \(32.63\%\) to \(25.77\%\) and avoids half of the loss when \(v=0.005\) by tightening risk exposures. Moreover, using the appropriate scaling factor \(\mu\) at \(3\) in MS-2 encourages the RiPO framework to balance the expected returns and short-term risks by dynamically adjusting the strength of the risk constraints. **Q4: How do the adaptive mechanisms DCM and ARS impact the RiPO framework?** To enhance the flexibility of risk management in different financial markets, the DCM and ARS are introduced to adjust the impact of the risk controller and the strength of the risk constraints. Table 4 shows that the ARS significantly reduces the losses from \(23.32\%\) to \(6.69\%\) when the DCM is not used and from \(18.76\%\) to \(6.58\%\) when the DCM is integrated. Meanwhile, the risk exposure decreases to half of that of the setting without the ARS. On the other hand, the RiPO framework with the DCM captures more potentially profitable opportunities and avoids greater losses, while the short-term risks are managed at lower or similar levels by considering the tradeoff between long-term returns and short-term risks.
The experimental results clearly reveal that the two adaptive mechanisms encourage the RiPO model to dynamically adjust risk constraints for both managing risk exposures and exploring profitable strategies. \begin{table} \begin{tabular}{c c c} \hline \hline Setting & **AR\((\%)\uparrow\)** & **MDD\((\%)\downarrow\)** \\ \hline w/o ARS \& w/o DCM & -23.32 (\(\pm 0.0550\)) & 48.85 (\(\pm 0.0727\)) \\ w/o ARS & -18.76 (\(\pm 0.0387\)) & 44.38 (\(\pm 0.0584\)) \\ w/o DCM & -6.69 (\(\pm 0.0054\)) & 25.56 (\(\pm 0.0081\)) \\ **RiPO** & -6.58 (\(\pm 0.0055\)) & 25.77 (\(\pm 0.0049\)) \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation Studies on the Contribution of DCM and ARS Modules \begin{table} \begin{tabular}{c c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**MS-1 (Uptrend Market)**} & \multicolumn{3}{c}{**MS-2 (Downtrend Market)**} \\ **Method** & **AR\((\%)\uparrow\)** & **MDD\((\%)\downarrow\)** & **SR\(\uparrow\)** & **AR\((\%)\uparrow\)** & **MDD\((\%)\downarrow\)** & **SR\(\uparrow\)** \\ \hline CRP & 12.08 (\(\pm 0.0032\)) & 17.11 (\(\pm 0.0010\)) & 0.56 (\(\pm 0.0170\)) & -24.57 (\(\pm 0.0008\)) & 50.89 (\(\pm 0.0007\)) & 0 \\ EG & 11.75 (\(\pm 0.0027\)) & 17.10 (\(\pm 0.0010\)) & 0.55 (\(\pm 0.0147\)) & -24.63 (\(\pm 0.0019\)) & 51.00 (\(\pm 0.0014\)) & 0 \\ OLMAR & -16.24 (\(\pm 0.0030\)) & 55.08 (\(\pm 0.0018\)) & 0 & -13.88 (\(\pm 0.0055\)) & 56.74 (\(\pm 0.0035\)) & 0 \\ PAMR & -30.86 (\(\pm 0.0053\)) & 50.20 (\(\pm 0.0038\)) & 0 & -17.79 (\(\pm 0.0043\)) & 51.75 (\(\pm 0.0030\)) & 0 \\ CORN & -9.93 (\(\pm 0.0058\)) & 21.64 (\(\pm 0.0042\)) & 0 & -37.83 (\(\pm 0.0024\)) & 68.01 (\(\pm 0.0018\)) & 0 \\ EIIE & 9.87 (\(\pm 0.0155\)) & 15.86 (\(\pm 0.0024\)) & 0.48 (\(\pm 0.0848\)) & -26.33 (\(\pm 0.0081\)) & 50.24 (\(\pm 0.0068\)) & 0 \\ PPN & 12.48 (\(\pm 0.0188\)) & **15.14** (\(\pm 0.0081\)) & 0.59 (\(\pm 0.1010\)) & -23.98 (\(\pm 0.0038\)) & 48.76 (\(\pm 0.0064\)) & 0 \\ RAT & 12.26 (\(\pm 0.0315\)) & 15.56 (\(\pm 0.0124\)) & 0.60 (\(\pm 0.1651\)) & -27.15 (\(\pm 0.0144\)) & 51.41 (\(\pm 0.0192\)) & 0 \\ TD3 & 15.69 (\(\pm 0.1925\)) & 27.65 (\(\pm 0.0902\)) & 0.62 (\(\pm 0.3386\)) & -24.26 (\(\pm 0.0660\)) & 53.39 (\(\pm 0.0853\)) & 0 \\ **RiPO** & **20.15** (\(\pm 0.1398\)) & 22.58 (\(\pm 0.0416\)) & **0.72** (\(\pm 0.5138\)) & **-6.58** (\(\pm 0.0055\)) & **25.77** (\(\pm 0.0049\)) & 0 \\ \hline \hline \end{tabular} \(\bullet\) Average value (\(\pm\)Standard deviation) \end{table} Table 2: Performance Comparison in Two Market Styles \begin{table} \begin{tabular}{c c c c} \hline \hline Parameter & Setting & **AR\((\%)\uparrow\)** & **MDD\((\%)\downarrow\)** \\ \hline & 0 & -9.43 (\(\pm 0.0174\)) & 29.47 (\(\pm 0.0270\)) \\ & 0.2 & -8.51 (\(\pm 0.0148\)) & 28.14 (\(\pm 0.0197\)) \\ \(m\) & 0.5 & -8.19 (\(\pm 0.0129\)) & 28.07 (\(\pm 0.0138\)) \\ & 0.8 & -6.58 (\(\pm 0.0055\)) & 25.77 (\(\pm 0.0049\)) \\ & 1.0 & -6.69 (\(\pm 0.0054\)) & 25.56 (\(\pm 0.0081\)) \\ \hline & 0.005 & -6.58 (\(\pm 0.0055\)) & 25.77 (\(\pm 0.0049\)) \\ \(v\) & 0.010 & -6.89 (\(\pm 0.0070\)) & 26.02 (\(\pm 0.0111\)) \\ & 0.100 & -7.11 (\(\pm 0.0160\)) & 25.93 (\(\pm 0.0211\)) \\ & 0.500 & -11.66 (\(\pm 0.0177\)) & 32.63 (\(\pm 0.0278\)) \\ \hline & 1 & -6.58 (\(\pm 0.0055\)) & 25.77 (\(\pm 0.0049\)) \\ \(\mu\) & 2 & -7.35 (\(\pm 0.0051\)) & 26.56 (\(\pm 0.0048\)) \\ & 3 & & \\ \hline \hline \end{tabular} \end{table} Table 3: Analysis of the Hyper-parameters \(m\), \(v\), and \(\mu\) in the MS-2 Dataset ## 6 Conclusion In this paper, a novel risk-manageable portfolio optimization framework named RiPO is proposed to explicitly manage short-term risks while pursuing long-term profits in different market styles. With the cooperation of RL approaches and barrier function-based risk controllers, RiPO shows strong exploration ability to optimize trading strategies under acceptable risk constraints. Besides, two dynamic modules are introduced to construct a flexible risk controller that adapts to financial markets and investors' risk appetite. The experimental results indicate that RiPO can gain higher profits in uptrend markets and manage downside risks in downtrend markets. In the future, the flexibility of the risk controller can be further enhanced to adapt to different financial markets and handle more realistic market constraints.
2304.11008
Violation of the Landau-Yang theorem from Infrared Lorentz Symmetry Breaking
Lorentz symmetry forbids decays of massive spin-1 particle like the $Z^0$ into two massless photons, a result known as the Landau-Yang theorem. But it is known that infrared effects can break Lorentz invariance. Employing the construction of Mund et. al. \cite{MRS} which incorporated this Lorentz violation, we propose an interaction leading to the decay $Z^0 \rightarrow 2 \gamma$ and study the dependence of the decay on the parameter of this Lorentz violation.
M. Asorey, A. P. Balachandran, Arshad Momen, B. Qureshi
2023-04-20T04:08:57Z
http://arxiv.org/abs/2304.11008v2
# \(Z^{0}\to 2\gamma\) decay from Infrared Lorentz Symmetry Violation ###### Abstract Lorentz symmetry forbids decays of a massive spin-1 particle like the \(Z^{0}\) into two massless photons, a result known as the Landau-Yang theorem. But it is known that infrared effects can break Lorentz invariance. Employing the construction of Mund et al. [1] which incorporated this Lorentz violation, we propose an interaction leading to the decay \(Z^{0}\to 2\gamma\) and study the dependence of the decay on the parameter of this Lorentz violation. ## 1 Introduction In Quantum Field Theories (QFTs) with a mass gap, it is assumed that the Poincare group can be implemented by a unitary group leaving the vacuum invariant. It is thus a 'symmetry' of the theory acting on the local observables in a bounded spacetime region. Causality can also be formulated by requiring the local observables in spacelike complements to commute. But there are physically important theories with no mass gap, QED being a prime example. In QED, because of infrared effects, there are uncountably many superselection sectors, and Lorentz transformations map one such sector into another. Hence Lorentz symmetry is spontaneously broken [10], just like the U(1) gauge group which is spontaneously broken in superconductivity or ferromagnetism. There is an important theorem of Landau and Yang [2], [3] for Lorentz invariant theories: the decay of a massive spin 1 particle such as \(Z^{0}\) to two photons is forbidden. This theorem is remarkable: its proof does not rely on principles of quantum field theory (QFT) such as causality. The result is due to the fact that the Clebsch-Gordan series for the tensor product of two irreducible Poincare group representations of photons, when Bose symmetrised, does not contain a massive spin one representation, as shown in Balachandran, Jo and Marmo [4]. There are generalisations of this result to other decays, as shown in [5]. For example, \(Z^{0}\) cannot decay into two massless neutrinos. Thus, the Landau-Yang and related theorems use only Poincare invariance and the statistics of identical particles and no other principles of quantum field theory. There are experiments in non-linear optics which put limits on the rate of such processes [6, 7, 8]. The observation of any one of these decays will thus profoundly affect QFT. On the other hand, it has been known for a long time that infrared effects in QED create uncountably many superselection sectors and that the action of Lorentz transformations interchanges these sectors. Hence by definition, Lorentz symmetry is spontaneously broken. In a recent paper, Mund, Rehren and Schroer [1] have reviewed these results and developed a theory of 'infrafields' which can serve as order parameters for this symmetry breaking. Given these results and developments, it is natural to ask if a model for, say, \(Z^{0}\to 2\gamma\) decay can be formulated using the infrafields. In this paper, we argue that this can be done and an explicit decay rate can be derived. In the subsequent sections, we review the construction of infrafields and their response to gauge transformations. Then we formulate an interaction for \(Z^{0}\to 2\gamma\) which is invariant under "small" or Gauss law-generated gauge transformations but is not Lorentz invariant, and compute the rate for the above decay. ## 2 The Infrafield Such fields first acquired a prominent role in the quantum field theory of massless 'continuous spin' particles which occur in the work of Wigner [11].
Their momenta are lightlike and future pointing so that no known physical principle forbids their existence. But Yngvason [9] first proved that local quantum fields do not exist for such particles. It was later shown that 'fields localised on cones' do exist for these particles. These fields were later adapted to QED for its formulation entirely on a Hilbert space, avoiding indefinite metric constructions altogether. They relied on the axial gauge and Dirac's construction of gauge invariant fields. The Wilson line from \(x\) to \(\infty\) in the spacelike direction \(\eta\) gives the infrafield \(\phi\), the above Wilson line for charge \(q\) being \(e^{iq\phi(x,\eta)}\). Then, under a \(U(1)\) gauge transformation \(e^{i\chi}\), the Wilson line \[W(x,\eta)=e^{iq\int_{0}^{\infty}A_{\mu}(x+\tau\eta)\eta^{\mu}d\tau}\equiv e^{iq \phi(x,\eta)}\] transforms as \[W(x,\eta)\to e^{iq\chi(\eta_{\infty})}W(x,\eta)e^{-iq\chi(x)},\qquad\eta_{ \infty}:=\lim_{\tau\to\infty}\tau\eta.\] This transformation suggests the interpretation that \(W\) carries a charge, say \(-q\) at \(x\) and \(q\) at \(\eta_{\infty}\). If \(\psi\) is a charge \(q\) field, then following Dirac[12] and Mandelstam[13], we see that \(W(x,\eta)\psi(x)\) is small gauge invariant, but still has a charge \(q\) blip at \(\eta_{\infty}\). If \(\psi\) is a field of charge \(q\), then as Dirac observed, the field \(e^{iq\phi(x,\eta)}\psi(x)\) is invariant under local or small gauge transformations. It can be smeared in \(x\) with test functions localised in a finite region containing \(x\) without spoiling small gauge invariance. But acting on the vacuum, it produces only a charge \(q\) blip at infinity. So it produces surface excitations at infinity. That is the case even if \(x\) is smeared. We want to choose \(\eta\) so that \(A_{\mu}\eta^{\mu}\) does not produce negative norm states acting on the vacuum. That requires that this field has only spacelike components in \(\mu\). Hence \(\eta^{\mu}\) is chosen to be along a spacelike direction. In the mostly minus metric convention, it follows that \(\eta.\eta=-\kappa^{2}\), with \(\kappa\) real. But unlike in the axial gauge, we do not fix the direction of \(\eta\). For fixed \(\kappa\), this is a deSitter space with a spherical boundary for finite \(\eta\). But \(W(x,\eta)\) is invariant under the scalings \(\eta\to c\eta,c>0\). So we can fix \(\kappa\) so that a compactification with a spherical boundary is a preferred choice of spacelike boundaries. Under standard Lorentz transformations \(\Lambda\) of \(A_{\mu}\), not only the argument, but also the index \(\mu\) of \(A_{\mu}\) gets transformed, so that \[\Lambda:\phi(x,\eta)\rightarrow\phi(\Lambda x,\Lambda\eta).\] Hence, \(\eta\) as well gets transformed in the "escort" field \(\phi(x,\eta)\). ## 3 A Model Interaction for \(Z^{0}\to 2\gamma\) We need the interaction to have the following properties : 1. It should be gauge invariant under small gauge transformations. 2. The net charge at infinity should add up to zero so that charge conservation is maintained. So we want to preserve global \(U(1)\) invariance. For the escort field of charge \(q\), the operator \[:e^{iq\phi(x,\eta)}:\ \ :e^{-iq\phi(x,\eta^{\prime})}:\] is invariant under small gauge transformations and has zero net charge. Also \(Z^{0}_{\mu}(x)\) is neutral under this \(U(1)\). 
Hence if \(f^{\mu}\) is a vector-valued test function, a Gauss law-invariant interaction with zero net charge is \[d\ f^{\mu}(x)Z^{0}_{\mu}(x):e^{iq\phi(x,\eta)}:\ \ :e^{-iq\phi(x,\eta^{\prime})}: \tag{1}\] with an interaction strength \(d\). A natural choice for \(f^{\mu}(x)\) is \(h^{\mu}\) where \(h^{\mu}\) is a chosen polarisation vector for \(Z^{\mu}\) in its rest system. This choice is suggested as we will calculate the decay in the \(Z^{\mu}\) rest system. We can also smear just \(Z^{0}_{\mu}\) with a scalar function \(f(x)\) and use a fixed vector \(h^{\mu}\) for this polarization vector. But our preferred choice in this paper is (1). Finally we have also the option of smearing the variables \(\eta\) and \(\eta^{\prime}\), but we will avoid it for now. The process which lets us avoid the Landau-Yang theorem by means of the escort fields is summarized in Figure 1. ## 4 Calculation of the Amplitude using the Mode Expansion of the Escort Field We can obtain the mode expansion of \(e^{i\phi}\) from that of \(A_{\mu}\) (following Mund et al. [1]): \[A_{\mu}(x)=\int\tilde{d}k\left[a_{\mu}(k)e^{-ik\cdot x}+a^{\dagger}_{\mu}(k)e^{ik\cdot x}\right]\] with the Lorentz invariant phase space (LIPS) measure \[\tilde{d}k=\frac{d^{3}k}{(2\pi)^{3}\ 2|{\bf k}|}.\] Although the operator \(A_{\mu}(x)\) is defined only on an indefinite metric (Krein) space, that is not the case for \(A_{\mu}\eta^{\mu}\) as \(\eta^{\mu}\) is spacelike. One then defines \[\phi(x,\eta)=\int_{0}^{\infty}d\tau\,A_{\mu}(x+\eta\tau)\eta^{\mu}.\] Expanding \(\phi(x)\) (using the (+,-,-,-) metric), one gets, using the "\(i\epsilon\)" prescription, \[\phi(x,\eta) = \int\tilde{d}k\,\eta^{\mu}\left[a_{\mu}(k)e^{-ik\cdot x}\int_{0}^{\infty}d\tau\ e^{-i[(k.\eta)-i\epsilon]\tau}\right.\] \[+ \left.a_{\mu}^{\dagger}(k)e^{ik\cdot x}\int_{0}^{\infty}d\tau\ e^{i[(k.\eta)+i\epsilon]\tau}\right]\] \[= i\int\tilde{d}k\ \eta^{\mu}\left[\frac{a_{\mu}(k)}{\eta\cdot k-i\epsilon}e^{-ik\cdot x}-\frac{a_{\mu}^{\dagger}(k)}{\eta\cdot k+i\epsilon}e^{ik\cdot x}\right]\] \[= i\int\tilde{d}k\left[\chi_{k}(\eta)e^{-ik\cdot x}-\chi_{k}^{\dagger}(\eta)e^{ik\cdot x}\right]\] where \[\chi_{k}(\eta)\equiv\frac{a(k)\cdot\eta}{k\cdot\eta-i\epsilon}.\] Figure 1: Diagram associated to the decay of \(Z^{0}\) into two \(\gamma\)s in the presence of escort fields. Therefore \[\phi(x,\eta) = i\left[\int\tilde{d}k\left(\chi_{k}(\eta)e^{-ik\cdot x}-\chi_{k}^{\dagger}(\eta)e^{ik\cdot x}\right)\right]\] Now \[:e^{iq\phi(x,\eta)}:=e^{q\int\tilde{d}k\chi_{k}^{\dagger}e^{ik\cdot x}}e^{-q\int\tilde{d}k\chi_{k}e^{-ik\cdot x}}\] Accordingly \[:e^{iq\phi(x,\eta)} : : e^{-iq\phi(x,\eta^{\prime})}:=\] \[e^{q\int\tilde{d}k\chi_{k}^{\dagger}e^{ik\cdot x}}e^{-q\int\tilde{d}k\chi_{k}e^{-ik\cdot x}}e^{-q\int d\tilde{k}^{\prime}\chi_{k^{\prime}}^{\dagger}e^{ik^{\prime}\cdot x}}e^{q\int d\tilde{k}^{\prime}\chi_{k^{\prime}}e^{-ik^{\prime}\cdot x}}\] As the Baker-Campbell-Hausdorff formula gives us \[e^{M}e^{N}=e^{M+N+\frac{1}{2}[M,N]+\cdots}\Rightarrow e^{M}e^{N}=e^{N}e^{M}e^{[M,N]}\] where in the last step we have assumed \([M,N]\) is a \(c\)-number, \[:e^{iq\phi(x,\eta)}::e^{-iq\phi(x,\eta^{\prime})}:=\] \[e^{q\int\tilde{d}k\chi_{k}^{\dagger}e^{ik\cdot x}}e^{-q\int d\tilde{k}^{\prime}\chi_{k^{\prime}}^{\dagger}e^{ik^{\prime}\cdot x}}e^{-q\int\tilde{d}k\chi_{k}e^{-ik\cdot x}}e^{q\int d\tilde{k}^{\prime}\chi_{k^{\prime}}e^{-ik^{\prime}\cdot x}}e^{\Delta(\eta,\eta^{\prime},x)}. \tag{2}\]
Now \[\Delta(\eta,\eta^{\prime},x)=q^{2}\int\int\tilde{d}k\,d\tilde{k}^{\prime}[\chi_{k},\chi_{k^{\prime}}^{\dagger}]e^{-i(k-k^{\prime})\cdot x} \tag{3}\] But \[[\chi_{k},\chi_{k^{\prime}}^{\dagger}]=[\chi_{k}(\eta),\chi_{k^{\prime}}^{\dagger}(\eta^{\prime})]=(\eta\cdot\eta^{\prime})\frac{(2\pi)^{3}2E_{k}\delta^{3}(\vec{k}-\vec{k}^{\prime})}{(k\cdot\eta+i\epsilon)(k^{\prime}\cdot\eta^{\prime}-i\epsilon)} \tag{4}\] Hence, \[\Delta(\eta,\eta^{\prime},x)= q^{2} (\eta\cdot\eta^{\prime})\times\left\{\int\int\tilde{d}k\,\tilde{d}k^{\prime}(2\pi)^{3}2E_{k}\delta^{3}(\vec{k}-\vec{k}^{\prime})\frac{e^{-i(k-k^{\prime})\cdot x}}{(k\cdot\eta+i\epsilon)(k^{\prime}\cdot\eta^{\prime}-i\epsilon)}\right\}\] \[=q^{2}(\eta\cdot\eta^{\prime})\int\tilde{d}k\frac{1}{(k\cdot\eta+i\epsilon)(k\cdot\eta^{\prime}-i\epsilon)}\equiv q^{2}Q(\eta,\eta^{\prime})\] ends up being independent of \(x\). Note that in the last step, we have carried out the \(k^{\prime}\) integration. The integral \(Q(\eta,\eta^{\prime})\) can be rewritten (using the Feynman-Schwinger parameterization) as \[Q(\eta,\eta^{\prime})=\int\tilde{d}k\int_{0}^{1}d\lambda\frac{(\eta\cdot\eta^{\prime})}{[\lambda(k\cdot\eta+i\epsilon)+(1-\lambda)(k\cdot\eta^{\prime}-i\epsilon)]^{2}}\] \[=\int\tilde{d}k\int_{0}^{1}d\lambda\frac{(\eta\cdot\eta^{\prime})}{[k\cdot\eta^{\prime}+\lambda k\cdot(\eta-\eta^{\prime})-i\epsilon(1-2\lambda)]^{2}}\] \[=\frac{(\eta\cdot\eta^{\prime})}{2(2\pi)^{3}}\int_{0}^{\infty}k\,dk\ \int_{0}^{1}d\lambda\int d\Omega_{k}\frac{1}{[k\cdot(\eta^{\prime}+\lambda(\eta-\eta^{\prime}))+i\epsilon(1-2\lambda)]^{2}}.\] Using the identity \[\int d\Omega_{\hat{n}}\frac{1}{(\hat{n}\cdot\vec{a}+b)^{2}}=\frac{4\pi}{(b^{2}-\vec{a}\cdot\vec{a})},\] we get \[Q(\eta,\eta^{\prime})=\frac{(\eta\cdot\eta^{\prime})}{(2\pi)^{2}}\int_{0}^{\infty}\frac{1}{k}dk\ \int_{0}^{1}\frac{d\lambda}{[(2\lambda^{2}-2\lambda+1)+2\lambda(1-\lambda)\eta\cdot\eta^{\prime}]^{2}}\] where the limit \(\epsilon\to 0_{+}\) has been employed. The \(k\) integral is logarithmically divergent and needs both an ultraviolet and an infrared momentum cutoff, \(M_{UV}\) and \(m_{IR}\) respectively: \[Q(\eta,\eta^{\prime}) = \frac{(\eta\cdot\eta^{\prime})}{(2\pi)^{2}}\ln\left(\frac{M_{UV}}{m_{IR}}\right)\int_{0}^{1}\frac{d\lambda}{\left[(2\lambda^{2}-2\lambda+1)+2\lambda(1-\lambda)\eta\cdot\eta^{\prime}\right]^{2}}\] \[\equiv \frac{\ln\left(\frac{M_{UV}}{m_{IR}}\right)}{(2\pi)^{2}}S(\eta\cdot\eta^{\prime})\] Note that the UV part of the logarithmic divergence of \(\int\frac{1}{k}dk\) can alternatively be absorbed by the renormalization of the electric charge: \[\Delta=\frac{3}{2}\left(1-\frac{q^{2}}{q_{R}^{2}}\right)S(\eta\cdot\eta^{\prime})=\frac{3}{2}\left(1-\frac{4\pi}{137q_{R}^{2}}\right)S(\eta\cdot\eta^{\prime})\] Thus the \(c\)-number factor appearing in (2) takes the form \[e^{\Delta(\eta,\eta^{\prime})}=e^{\frac{3}{2}\left(1-\frac{4\pi}{137q_{R}^{2}}\right)S(\eta\cdot\eta^{\prime})} \tag{5}\] where we have dropped the \(x\) argument in \(\Delta(\eta,\eta^{\prime},x)\) as it is independent of \(x\). A plot of the function \(e^{S(\eta\cdot\eta^{\prime})}\) is shown in Figure 2. Note that when \(\eta=\eta^{\prime}\) so that \(\eta\cdot\eta^{\prime}=-1\), the factor \(e^{\Delta}\) vanishes. That is, as long as we remain in the same "superselection" sector, the Landau-Yang theorem is validated. But as long as \(\eta\neq\eta^{\prime}\) there is a non-zero probability for the decay of \(Z^{0}\) into two photons.
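As a quick numerical illustration of the last statement (a sketch that assumes the normalization \(\eta\cdot\eta=\eta^{\prime}\cdot\eta^{\prime}=-1\) and ignores the positive prefactor multiplying \(S\) in \(\Delta\)), the \(\lambda\)-integral can be evaluated directly to show that \(S\to-\infty\), and hence \(e^{S}\to 0\), as \(\eta\cdot\eta^{\prime}\to-1\):

```python
import numpy as np
from scipy.integrate import quad

def S(A):
    """S(eta.eta') = (eta.eta') * integral_0^1 dlambda / [(2l^2 - 2l + 1) + 2l(1-l)A]^2."""
    integrand = lambda l: 1.0 / ((2*l*l - 2*l + 1) + 2*l*(1 - l)*A) ** 2
    val, _ = quad(integrand, 0.0, 1.0, points=[0.5])
    return A * val

for A in (-0.9, -0.99, -0.999):
    print(A, S(A), np.exp(S(A)))   # S grows large and negative, exp(S) tends to 0
```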
Let us recall that the decay width of a particle of mass \(m_{A}\) at rest into two identical massless particles is given by the formula \[\Gamma=\frac{1}{32\pi m_{A}^{2}}|{\cal A}|^{2}. \tag{6}\] Now we are interested in the product of matrix elements: \[\langle k_{1},\varepsilon_{1};k_{2},\varepsilon_{2}|:e^{iq\phi(x,\eta)}::e^{-iq\phi(x,\eta^{\prime})}:|0\rangle_{\gamma}\langle 0|f\cdot Z|Z,h\rangle_{Z}\] \[= -q^{2}e^{\Delta(\eta,\eta^{\prime})}\int\int\tilde{d}k\,d\tilde{k^{\prime}}\langle k_{1},\varepsilon_{1};k_{2},\varepsilon_{2}|\chi_{k}^{\dagger}\chi_{k^{\prime}}^{\dagger}|0\rangle_{\gamma}\times\langle 0|f\cdot Z|Z,h\rangle\int d^{4}xe^{ik\cdot x}e^{ik^{\prime}\cdot x}e^{-iPx}\] \[= -q^{2}e^{\Delta(\eta,\eta^{\prime})}\left[\frac{(\eta\cdot\varepsilon_{1})}{(\eta\cdot k_{1}+i\epsilon)}\frac{(\eta^{\prime}\cdot\varepsilon_{2})}{(\eta^{\prime}\cdot k_{2}+i\epsilon)}+\frac{(\eta\cdot\varepsilon_{2})}{(\eta\cdot k_{2}+i\epsilon)}\frac{(\eta^{\prime}\cdot\varepsilon_{1})}{(\eta^{\prime}\cdot k_{1}+i\epsilon)}\right](f\cdot h)\] where \(\varepsilon_{1,2}\) are the polarization vectors of the photons with momenta \(k_{1},k_{2}\) respectively, while \(h\) is the polarization/spin vector associated with the \(Z_{0}\) boson. The above result confirms the claim that the escort field can allow decays which are forbidden in the Standard Model. In particular, the decay of \(Z_{0}\) into two photons is now possible for any value of \(\eta\cdot\eta^{\prime}\) except \(\eta\cdot\eta^{\prime}=-1\), where we recover the standard Landau-Yang result; the reason is that in this case the two branches of the escort field cancel out. Violation of the Landau-Yang theorem by the inclusion of the escort field allows for the study of other interesting physical phenomena. Not only does one have the possibility of studying novel and rare decays in atomic physics, but also their impact on cosmology, for instance in the mapping of the 21cm hydrogen line. We hope to address some of these phenomena using this framework in the future. ## Appendix We give a closed expression for the integral encountered in the text: \[F(\eta\cdot\eta^{\prime}):=\int_{0}^{1}\frac{1}{\left(2(\eta\cdot\eta^{\prime})(1-\lambda)\lambda+(2\lambda^{2}-2\lambda+1)\right)^{2}}\,d\lambda\] This integration can be done using the identity \[\int_{0}^{1}\frac{d\lambda}{[(2\lambda^{2}-2\lambda+1)+2\lambda(1-\lambda)\eta\cdot\eta^{\prime}]^{2}}=\frac{1}{1+\eta\cdot\eta^{\prime}}+\frac{2\tan^{-1}\left(\sqrt{\frac{1-\eta\cdot\eta^{\prime}}{1+\eta\cdot\eta^{\prime}}}\right)}{(1+\eta\cdot\eta^{\prime})\sqrt{1-(\eta\cdot\eta^{\prime})^{2}}}\] Parameterizing \(A=\eta\cdot\eta^{\prime}=-\cos\Theta\) (as both of them are spacelike vectors), one gets the simplified expression \[F(\Theta)=\frac{1+\frac{\Theta}{\sin\Theta}}{1-\cos\Theta}\] ## Acknowledgements M.A. is partially supported by Spanish MINECO/FEDER Grant PGC2022-126078NB-C21 funded by MCIN/AEI/10.13039/501100011033, DGA-FSE grant 2020-E21-17R and Plan de Recuperacion, Transformacion y Resiliencia - supported European Union - NextGenerationEU. We wish to express our thanks to Amilcar Queiroz for discussions on the early stages of this investigation. --------------------------------------------------------------------------
2310.13958
Joseph Carter Corbin: Arkansas's "Profound Mathematician''
This is a historical article on J. C. Corbin, a nineteenth century mathematician and the founding president of the Historically Black University of Arkansas at Pine Bluff. This version omits the figures that appeared in the published edition. It also includes a lengthier bibliography and a new section ("Added after publication") which addresses some historical points. Updated the section "Added after publication" in response to feedback from Dr. Gladys Turner Finney.
Jesse Leo Kass
2023-10-21T09:42:14Z
http://arxiv.org/abs/2310.13958v2
# Joseph Carter Corbin: Arkansas's "Profound Mathematician" ###### Abstract While Joseph Carter Corbin (1833-1911) is a celebrated figure remembered for his impact in higher education and politics within the state of Arkansas,1 he is absent from many modern historical accounts of American mathematicians.2 However, during his lifetime Corbin was very much part of the mathematical community. He was described as a "Profound Mathematician" in _Men of Mark_, an influential anthology of accomplished Black men [27, pp. 829-832]. He regularly contributed to mathematical periodicals like the _American Mathematical Monthly_. Upon his death, the _Monthly_ commemorated him by publishing his obituary [12]. Corbin's omission from historical accounts of American mathematicians is neither surprising nor unusual. Most mathematicians of his generation are omitted because accounts tend to focus on research activities. Only towards the end of Corbin's life did significant numbers of American math professors begin to do research. Certainly Corbin is worthy of inclusion in the history of American mathematics. His life was remarkable. The son of freed slaves, Corbin was the founding head of the University of Arkansas at Pine Bluff, Arkansas's public Historically Black University. While working there, for over three decades, he regularly published in mathematical periodicals. With the goal of introducing Corbin to modern mathematicians, we survey his life in this article. We focus on his experiences in higher education and mathematics and, in particular, give an overview of his mathematical publications. ## Early Life Corbin was born in Ohio in 1833. His parents had been enslaved in Virginia but had moved to Ohio about a decade before Corbin's birth. Little is known about their move, but it was a common one: most Black Ohioans3 had come from the bordering slave states of Virginia and Kentucky. Corbin's mother, Susan, had moved after being emancipated by her enslaver. It is unknown whether the father, William, was also emancipated or if he had fled enslavement. Even though Corbin never personally experienced enslavement, his life was significantly constrained by state laws and social practice. For example, Black Ohioans were not allowed to attend public schools, and many private schools were racially segregated. Despite the obstacles facing the family, Corbin's parents were able to achieve considerable personal success; both were literate, and William worked as a "sewer of clothes" (a skilled profession).4 ## Education When Corbin was growing up, his family lived in Chillicothe, a mid-sized town of a few thousand people in southern Ohio which was a regional center for Black life (Black residents made up 10% of Chillicothe's population but only 1% of the state's). Corbin attended school in Chillicothe until he was fifteen years old (around 1848). He then moved to Louisville, Kentucky to take advantage of the greater educational opportunities afforded by a larger city. He returned to Ohio in 1850 to attend Ohio University in Athens. Black students had studied at OU since the 1820s, and Corbin was the third Black student to enroll. While the presence of Black students on college campuses was sometimes a source of tension, Corbin did not seem to have attracted any special attention. It is unclear whether Corbin was even regarded as Black by faculty and students.
For example, an 1853 newspaper article on the annual commencement exercises at Ohio University mentions Corbin by name but makes no reference to his race, a standard journalistic practice at this time [53]. Similarly, an 1885 list of university alumni [15, p. 86] describes Corbin simply as a teacher in Louisville5 but describes another alumnus (John Newton Templeton) as "the only Alumnus.. of African descent" [15, p. 78]. Footnote 5: The description of Corbin as a teacher in Louisville suggests that his entry in the list was based on old information collected shortly after Corbin graduated (during the 1850s or 1860s, when he was in Louisville). At the time the list was published, Corbin had been living in Arkansas for over a decade. When Corbin arrived at OU, the university was functioning as a standard 19th century U.S. university. It offered a B.A. degree for students who completed a 4-year sequence of college courses. The university also maintained a Preparatory Department which offered a 2-year sequence designed to prepare students for the college courses.6 At the start of Corbin's first semester, enrollment stood at 64 students, divided more or less evenly between college and preparatory students. They were taught by five professors, each being responsible for all coursework in a given subject. The graduation requirements differed from those of a modern American university in that students did not select majors. Instead, all college students took the same fixed sequence of courses. Footnote 6: While uncommon today, many 19th century universities maintained similar departments. Indeed, doing so was often necessary as schooling along the lines of modern K–12 education was often only available on a limited basis. The required coursework heavily emphasized the study of Greek and Roman literature, but it also included a mathematics curriculum that covered algebra and elementary geometry in the first year; trigonometry, analytic geometry, and differential calculus in the second year; and integral calculus in the third year.7 Footnote 7: Corbin’s analytic geometry and calculus classes were taught using Albert E. Church’s books _Elements of the Differential and Integral Calculus_ and _Elements of Analytical Geometry_, while the algebra and geometry classes were taught using textbooks by Charles Davies [16, 17]. Algebra was taught from _Elements of Algebra: Translated from the French of M. Bourdon_. It is not entirely clear which geometry textbook were used as Davies wrote several. The assigned text is listed as “Davies’ Legendre” in the 1851-50 university catalogue and as “Davies” in 1852-53 university catalogue. The first reference is to _Elements of Geometry and Trigonometry from the works of A. M. Legendre_, but the second reference could either be to that book or to another one such as _Elements of Descriptive Geometry_. At OU, Corbin was taught by two different math professors, William J. Hoge and Addison Ballard. As was common for American math professors at the time, neither Hoge nor Ballard had any specialized training in math.8 Corbin's first math classes were taught by Hoge, an alumnus of the university who had graduated with an undergraduate degree in 1843 [15, p. 84]. Hoge left the university while Corbin was still a student (in 1851), and he was replaced by Ballard. Ballard had received a bachelor's degree from Williams College and had been serving as OU's Latin Professor since 1848 [15, pp. 213-214]. 
### Career When Corbin graduated from Ohio University in 1853, his formal education came to an end,9 and he spent the next two decades working as a teacher, bank clerk, and newspaper editor in Louisville and Cincinnati. Footnote 9: Corbin would later receive two A.M. degrees from OU and a Ph.D. from an unknown Baptist college, but these were honorary degrees. In 1872, Corbin moved to the former slave state of Arkansas. This was not an uncommon move for talented Black men like Corbin, as the political changes brought about by the Civil War created unprecedented opportunities for change. After working briefly as a newspaper reporter and a post office clerk, Corbin was elected to oversee Arkansas's public education system as superintendent of public instruction.10 His election made Corbin the head of the board of trustees for the state's newly founded public university -- Arkansas Industrial University (now the University of Arkansas).11 Footnote 10: Corbin defeated the incumbent, Thomas Smith, a white Union army veteran from Pennsylvania. Both Smith and Corbin were Republicans, but they were aligned with rival groups within the party [12]. Footnote 11: Prior to the Civil War, Arkansas did not maintain a public university, although there were a few private universities. During the war, the U.S. Congress encouraged state governments to create public universities by passing the Morrill Land-Grant Acts, which provided states with funding for public universities through land sales. Arkansas Industrial University was created to fulfill the terms of the act so that the state could receive funding. The university was technically "open to all, without regard to race, sex, or sect,"12 but Black students were not allowed to attend classes with white students. Instead, they received private tutoring.13 To address the issue of college education for Black Arkansans, the board of trustees, including Corbin, petitioned the state legislature to create a school for educating Black teachers. The legislature responded favorably to the petition and created the Branch Normal College (now the University of Arkansas at Pine Bluff).14 Footnote 13: It is unclear exactly how many Black students attended, although the numbers were certainly small. A 1910 history of the university states that only one Black student applied for admission [13, p. 97], but a 1922 letter by the interim president's wife states that two or three applied [14, p. 3]. At least one Black student, James McGahee, attended the university from 1872-73. Corbin's term in office was cut short when Conservative Democrats, who were largely hostile towards Black Arkansans, gained control over the state government and vacated many political offices, including Corbin's. Corbin left the state and moved to Jefferson City, Missouri, where he taught at the newly founded Lincoln Institute (now University).15 However, after a brief time there, he returned to Arkansas to serve as the first principal of the Branch Normal College. Footnote 15: At the time, Lincoln was a small school of about 50 students, mostly the children of freedpersons [74, 75, 76]. Since the 1873 legislative act, no real progress had been made in establishing the college. Under state law, the Branch Normal College was supposed to provide the same education as the Normal Department at Arkansas Industrial University.
A normal department or school was a common 19th-century institution that offered postsecondary education for school teachers which typically involved a truncated college curriculum along with specialized coursework in pedagogy. However, achieving parity with Arkansas Industrial University was an unrealistic goal for the Branch Normal College as many of the college's students were facing rural poverty and political disenfranchisement. To further complicate the situation, Corbin was given few resources -- the college had no permanent facilities and no faculty aside from Corbin, who was even responsible for menial tasks like cleaning classrooms. Nevertheless, Corbin was able to make major improvements. By the 1880s, a college building and dormitories had been constructed, and the faculty had increased to three. By 1883, student enrollment had reached over 200 students. The Branch Normal College offered an academic education through three different departments: (a) the Preparatory Department which offered remedial classes to prepare students for college work, (b) the Normal Department which awarded a Licentiate of Instruction (a teaching certificate), and (c) the Collegiate Department which awarded a Bachelors of Arts. The majority of students were enrolled in the Preparatory Department. The preparatory curriculum consisted of three years of courses on subjects such as grammar, geography, drawing, and math. The math education consisted of six terms of elementary arithmetic and one of algebra.16 Footnote 16: According to the 1882 college catalogue [111, p. 91-92], arithmetic was taught using books from Robinson’s Series of Mathematics, a popular series of textbooks. It is unclear precisely which books were used as they are only described as “Arithmetic, Robinson’s Shorter Course” and “Arithmetic, Robinson’s.” Algebra was taught from William G. Peck’s _Manual of Algebra_. Students in the Normal Department completed two years of remedial work followed by two years of college courses. In addition to classes on Latin, English grammar, and pedagogy, students studied algebra and geometry in their third year and then plane trigonometry in their fourth.17 Students in the Collegiate Department completed the courses taken by the normal students and then two additional years of college classes in subjects like foreign languages, English literature, and mathematics. The math classes offered in the last two years were a class on analytic geometry and an optional course on calculus.18 Footnote 17: The 1882 college catalogue [111, p. 91-92] states that geometry was taught from _Wentworth’s Solid Geometry_ by George Wentworth, and trigonometry from Edward Olney’s _Elements of Trigonometry, Plane and Spherical_. The algebra textbook is described as “Thompson’s Algebra,” but the present author has been unable to identify this book. The curriculum that Corbin promoted at the Branch Normal College came into conflict with an increasingly influential educational philosophy that focused on industrial education. This philosophy endorsed only a very limited academic education for Black students and instead emphasized the value of manual labor as a means to impart values like self-discipline and self-reliance. Many of those who promoted industrial education scorned Black students who saw academics as a way of obtaining personal advancement or fulfillment. Booker T. Washington, the most prominent promotor of industrial education, expressed the views of many in his autobiography. 
In an account of teaching in Alabama, he wrote that "The students who came [to his school] first seemed to be fond of memorizing long and complicated 'rules' in grammar and mathematics, but had little thought or knowledge of applying these rules to the everyday affairs of their life" [111, p. 122]. To illustrate his point, he offered the example of a student whose fondness for math led him to study methods for computing cube roots. Washington set students like him to activities that he deemed more appropriate: farm work and learning how to properly set a dinner-table. Seeking to implement industrial education in Arkansas, state legislators in the 1890s created a Department of Mechanical Arts at the Branch Normal and hired as department head William S. Harris, a white man from Virginia. He was effectively made head of the college by the board of trustees when they transferred key responsibilities19 from Corbin to Harris. Footnote 19: For example, Harris was made responsible for admitting students and collecting fees. Corbin nominally remained the principal of the Branch Normal for approximately another decade. In 1902, the board of trustees replaced him by hiring Isaac Fisher, a recent graduate of Booker T. Washington's Tuskegee Institute. Fisher's hire was part of a plan to expand industrial education at the Branch Normal, but Fisher's efforts to transform the college faltered in the face of mixed support from trustees and strong opposition by the Black community. While Corbin's removal was a major setback to efforts to run the Branch Normal as an academic institution, the college continued to offer advanced courses taught by Corbin's former student James C. Smith. After his removal from the college, Corbin remained in Pine Bluff20 and served as principal of Merrill Public School (the local Black high school) until his death in 1911, at the age of seventy-seven. ### Corbin's mathematical work Corbin first published in a mathematical periodical in 1882, when he was almost fifty years old. At this point in his life, Corbin had achieved remarkable success in politics and education, but this publication is the first record of his intellectual engagement with mathematics. This was a point of pride for Corbin, but it is unclear if these publications were the expression of a life-long interest or one developed late in life. His publication record shows a level of sophistication and a depth of knowledge that extended beyond both what he'd been taught at university and what he was teaching at the Branch Normal College. For example, the calculus textbook he used as a student contained essentially no proofs and even omitted basic definitions like the definition of the definite integral as a limit of Riemann sums. Only the geometry courses he took went beyond mechanical work and included detailed proofs as part of a treatment of plane geometry. The periodicals that Corbin contributed to were similar to, and in fact included, the _American Mathematical Monthly_ which is currently published by the Mathematical Association of America.21 A typical periodical included a problems and solutions section together with book reviews and expository articles. Compared to today's research journals, these periodicals served a very different audience and performed a different function. They were not intended as long-term records of original mathematical research.
Instead, they were published to stimulate activity among people who otherwise had few outlets for their mathematical interests. Most readers were individuals studying math in relative isolation, with only limited access to mathematical literature and other resources. They included both amateur mathematicians and math professors, who were often the sole mathematicians at their universities.22 In an era when mathematicians were only beginning to organize themselves into professional societies, these periodicals performed a similar function in fostering a mathematical community.23 Footnote 21: The MAA did not exist during Corbin's lifetime, and the _Monthly_ was privately published by the math professor Benjamin Finkel. Footnote 22: For example, in addition to university professors, the contributors to the first volume of the _Monthly_ included a counselor at law (John Doman) and an examiner for the U.S. Civil Service Commission (Theodore L. DeLand). Footnote 23: While the MAA did not exist during Corbin's lifetime, the American Mathematical Society was formed six years after Corbin's first publication, in 1888. It was originally a New York-based society named the New York Mathematical Society, but it was given its current name and reorganized as a national society in 1894. Corbin was never an AMS member, and in general, the AMS was run as an elite and exclusive organization during his lifetime. Admission to the society required receiving a majority of ballots by members in an election after being proposed by two members and recommended by the AMS Council. In contrast, the periodicals Corbin published in were run in a democratic spirit. For example, in the introduction to the inaugural issue of the _Mathematical Visitor_, the editor wrote that he aimed to produce an "unpretending periodical" that inspires a love of mathematics, and he invited contributions from "professors, teachers, students, and all lovers of the 'bewitching science'" [17]. Corbin's first published problem is the following one: Separate the fraction \(a/b\) into two parts, \(c/d\) and \(e/f\) such that \(c+e=d+f\). This problem appeared in an 1882 issue of _The Mathematical Magazine_[12]. The subsequent issue contained two methods for finding the solution \(c/d=(a-b-1)/(b-1)\) and \(e/f=(b^{2}+b-a)/(b^{2}-b)\).24 Close consideration of the solutions suggests that caution needs to be taken when assigning authorship. Each solution was attributed to two different men, but these men (four in total) probably did not write the published text describing the methods. Consider the first method.25 Although it is attributed to P. M. McKay and William E. Heal, it is unlikely that they jointly wrote the text (one lived in Arkansas, the other in Indiana; there is also no other record indicating collaboration). It is more likely the case that the editor simply summarized two submissions that he received. Footnote 25: The method is to solve for \(e/f\) as \(a/b-c/d=(ad-bc)/(bd)\), substitute this expression into the equation \(c+e=d+f\), and then solve for \(c/d\) [12, p. 65]. Even when the authorship of a work was clear, the work did not always represent original ideas. It was not uncommon to submit routine problems taken from textbooks. In the context of the intellectual environment of the 1890s, this is unsurprising. Textbook problems were interesting to readers as many had little or no access to the books themselves.
For example, the following problem, which Corbin published in a 1898 issue of the _Monthly_[12], appears to have been taken from the textbook _Treatise of Ordinary and Partial Differential Equations26_ by William Woolsey Johnson: Footnote 26: Compare the text of Corbin’s problem with Example IX.7 on page 104 in the chapter “Linear Equations: Constant Coefficients” of Johnson’s book. This first observation was made by Walter Hugh Drane in his published solution [13]. Form the differential equation of the third order, of which \(y=c_{1}e^{2x}+c_{2}e^{-3x}+c_{3}e^{x}\) is the complete primitive. The differential equation is \(y^{(3)}-7y^{\prime}+6y=0\), and it can be solved using the techniques from Johnson's book -- standard techniques still taught to undergraduate students. Despite its routine nature, this problem appears to have interested readers (a number of solutions were submitted [13, 14]). Among those whose solutions were published was Princeton University professor Edgar Odell Lovett, who gave a complete solution even though he remarked that it was "a familiar one to students of differential equations." Lovett's solution is particularly notable as it demonstrates that, despite their elementary nature, Corbin's publications were even reaching American research mathematicians with an international reputation. Lovett had completed a doctorate in Germany, published in research journals, and was a member of several European professional societies. Corbin's differential equation problem must have been the product of significant self-study as the subject was not taught at Ohio University when Corbin was a student. Other problems drew on the plane geometry that Corbin had been taught at Ohio University. One such example is the following problem [12] which Corbin published in a 1907 issue of the _Monthly_: In triangle \(ABC\), the triangle \(DEF\) is formed by joining the feet of the medians and four \([\)_sic_\(]\) parallelograms are also formed, viz., \(AEFD\), \(BFED\), and \(CEDF\). Let \(a\), \(b\), \(c\); \(d\), \(e\), \(f\) represent the three medians of \(ABC\), and the three sides of \(DEF\). Then the sum of the squares of the six diagonals equals the sum of the squares of the twelve sides of the parallelograms, which are equal in sets of four. That is, \(a^{2}+b^{2}+c^{2}+d^{2}+e^{2}+f^{2}=4(d^{2}+e^{2}+f^{2})\) or \(a^{2}+b^{2}+c^{2}=3(d^{2}+e^{2}+f^{2})=3/4(AB^{2}+BC^{2}+CA^{2})\). The configuration of triangles is displayed in Figure 5. The problem is solved using the fact that, in a parallelogram, the sum of the squares of the lengths of the two diagonals equals twice the squares of the two side lengths. One of the most advanced topics treated in Corbin's published work is the theory of matrices and determinants, a topic that was not usually taught at American universities in the 19th century. This topic is the subject of Corbin's only expository article, "Note on Elimination." This 1896 _Monthly_ publication [12] is a short one-page note in which Corbin explains a method for solving a system of two linear equations in two variables, \(x\) and \(y\). The method is essentially Gaussian elimination, i.e. solve for one variable by eliminating the other. He summarizes the rule by: "The difference (sum) of the products containing \(x\) (\(y\)) is equal to the difference (sum) of the numerical products." 
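To see what such a rule amounts to in practice, consider a generic two-variable system (an illustrative computation, not one taken from Corbin's note). Given \[a_{1}x+b_{1}y=c_{1},\qquad a_{2}x+b_{2}y=c_{2},\] multiplying the first equation by \(b_{2}\), the second by \(b_{1}\), and subtracting eliminates \(y\): \[(a_{1}b_{2}-a_{2}b_{1})\,x=c_{1}b_{2}-c_{2}b_{1},\] while eliminating \(x\) in the same way gives \[(a_{2}b_{1}-a_{1}b_{2})\,y=c_{1}a_{2}-c_{2}a_{1}.\] These are the same values that the "determinant method" (Cramer's rule) produces, but obtained without writing down any determinants.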
While this method was certainly not original to Corbin, it appears that it was not well known to American mathematicians as he presents it as an alternative to the "determinant method" (perhaps a version of Cramer's rule). Corbin also published two problems on determinants. The first appeared in an 1895 issue [11] of the _Monthly_:27 Footnote 27: The expression we give corrects a typo in [11]. In the original printed version, the bottom right-most entry in the first matrix is \(s-a_{n}^{2}\), and we have changed this to \((s-a_{n})^{2}\). To see that the entry \(s-a_{n}^{2}\) is a typo, see the published solution in the April 1896 issue of the _Monthly_. Find the quotient of \[\begin{pmatrix}(s-a_{1})^{2}&a_{1}^{2}&\dots&a_{1}^{2}\\ a_{2}^{2}&(s-a_{2})^{2}&\dots&a_{2}^{2}\\ a_{3}^{2}&a_{3}^{2}&\dots&a_{3}^{2}\\ \vdots&\vdots&\vdots&\vdots\\ a_{n}^{2}&a_{n}^{2}&\dots&(s-a_{n})^{2}\end{pmatrix}\] \[\div\begin{pmatrix}s-a_{1}&a_{1}&a_{1}&\dots&a_{1}\\ a_{2}&s-a_{2}&a_{2}&\dots&a_{2}\\ a_{3}&a_{3}&s-a_{3}&\dots&a_{3}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ a_{n}&a_{n}&a_{n}&\dots&s-a_{n}\end{pmatrix}\] The desired expression for the quotient is \[s^{n-1}\cdot\left(s+\sum\frac{a_{i}^{2}}{s-2a_{i}}\right)\cdot\frac{1}{1+\sum\frac{a_{i}}{s-2a_{i}}}.\] The solution is obtained by applying matrix manipulations that preserve the determinant. First, observe that the given ratio of determinants equals: \[\begin{pmatrix}1&0&0&\dots&0\\ 1&(s-a_{1})^{2}&a_{2}^{2}&\dots&a_{n}^{2}\\ 1&a_{1}^{2}&(s-a_{2})^{2}&\dots&a_{n}^{2}\\ 1&a_{1}^{2}&a_{2}^{2}&\dots&a_{n}^{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 1&a_{1}^{2}&a_{2}^{2}&\dots&(s-a_{n})^{2}\end{pmatrix}\] \[\div\begin{pmatrix}1&0&0&0&\dots&0\\ 1&s-a_{1}&a_{2}&a_{3}&\dots&a_{n}\\ 1&a_{1}&s-a_{2}&a_{3}&\dots&a_{n}\\ 1&a_{1}&a_{2}&s-a_{3}&\dots&a_{n}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 1&a_{1}&a_{2}&a_{3}&\dots&s-a_{n}\end{pmatrix}\] Each column now has the property that almost all entries are the same, and thus it can be simplified by subtracting a suitable multiple of the first column. This yields: \[\begin{pmatrix}1&-a_{1}^{2}&-a_{2}^{2}&\dots&-a_{n}^{2}\\ 1&s(s-2a_{1})&0&\dots&0\\ 1&0&s(s-2a_{2})&\dots&0\\ 1&0&0&\dots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 1&0&0&\dots&s(s-2a_{n})\end{pmatrix}\] \[\div\begin{pmatrix}1&-a_{1}&-a_{2}&-a_{3}&\dots&-a_{n}\\ 1&s-2a_{1}&0&0&\dots&0\\ 1&0&s-2a_{2}&0&\dots&0\\ 1&0&0&s-2a_{3}&\dots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 1&0&0&0&\dots&s-2a_{n}\end{pmatrix}\] Computing each determinant as the Laplace expansion along the first row, we get that the first determinant is \[s^{n}\prod(s-2a_{i})+a_{1}^{2}s^{n-1}\prod_{i\neq 1}(s-2a_{i})+\dots\] \[+a_{j}^{2}s^{n-1}\prod_{i\neq j}(s-2a_{i})+\dots+a_{n}^{2}s^{n-1}\prod_{i\neq n}(s-2a_{i}),\] and the second is \[\prod(s-2a_{i})+a_{1}\prod_{i\neq 1}(s-2a_{i})+a_{2}\prod_{i\neq 2}(s-2a_{i})\] \[\quad+\dots+a_{j}\prod_{i\neq j}(s-2a_{i})+\dots+a_{n}\prod_{i\neq n}(s-2a_{i}).\] Factoring out \(\prod(s-2a_{i})\), we get the desired solution. Corbin's second published problem on determinants was his last publication. In [11], Corbin asked: Muir gives the following problem: Prove: \[\begin{vmatrix}1&a&a&a^{2}\\ 1&b&b&b^{2}\\ 1&c&c^{\prime}&cc^{\prime}\\ 1&d&d^{\prime}&dd^{\prime}\end{vmatrix}=(a-b)\begin{vmatrix}1&ab&a+b\\ 1&cd^{\prime}&c+d^{\prime}\\ 1&c^{\prime}d&c^{\prime}+d\end{vmatrix}\] which, of course, can be solved by finding the terms of both determinants.
Is there any method of changing from one form to the other which is direct? The reference to "Muir" is a reference to Thomas Muir's book _A Treatise on the Theory of Determinants_. The problem appears as one of the exercises in the chapter "Determinants in general" on the general properties of the determinant.28 The solution is similar to the one for the previous problem. The first matrix is manipulated using operations that preserve the determinant until its determinant is visibly equal to the determinant of the second matrix. Footnote 28: Specifically, it appears on page 66 of Muir's book as Exercise 27 in Exercise Set IX in Chapter II. ### Corbin's Legacy American mathematical culture underwent major changes around the time Corbin's career came to an end. Four years after his death, the Mathematical Association of America was formed, in large part to provide a permanent institution to support the _American Mathematical Monthly_. The _Monthly_ became an official journal of the MAA, and the loosely knit readership of math enthusiasts, which Corbin had been a part of, began to transform into an organized group of professionals. Black mathematicians continued to be members of this community. The inaugural charter members of the MAA included James T. Cater, a graduate of the HBCU Atlanta University and a math professor at Straight University (now part of Dillard University) [16]. At the Branch Normal College, events like Corbin's removal were major setbacks to efforts to run the college as an institute of higher education. However, they did not end those efforts. Faculty continued to participate in national mathematical culture. In 1938, faculty at the college (then renamed the Arkansas Agricultural, Mechanical & Normal College) joined the ranks of the American Mathematical Society when college professor William Louis Fields was elected to membership [19].29 Fields joined the MAA three years later [41, p. 60].30 Footnote 29: Other Branch Normal faculty who were later elected to AMS membership include Morris Edward Mosley [18] and Willie E. Clark [19]. In 1903, a year after he was removed from the Branch Normal College, Corbin delivered an address at the annual meeting of the Arkansas Negro State Teachers' Association. The meeting was attended by over 100 school teachers and was reported in the press [03], so it provided Corbin an important forum for voicing his views on education. His speech reads as a rebuke of the educational philosophy represented by the decision to make Isaac Fisher principal of the Branch Normal College. Corbin said [A] need of the negro is a supply of men with the necessary equipment and capacity to carry on his vast national enterprises. It may be a surprising thing to many that I claim that the negro has any enterprises of this nature; but the claim is easily verified. As evidence for his claim, Corbin proceeded to give examples of achievements in professions like law, education, and farming. Were Corbin to give the speech today, one imagines that he would also include mathematics. ### Added after publication This article was originally published as [14]. The present version includes a more extensive bibliography. We also provide additional information on two different issues.
First, in footnote 20, we observed that an obituary for Corbin states that he served as president of "the Ouachita Baptist College" in Camden, Arkansas, but we remarked that this appears to be an error as the college (now renamed Ouachita Baptist University) was a whites-only institute located in Arkadelphia, not Camden. It is likely that there was some confusion over the name of the institute Corbin taught at. Camden was home to a different school called the "Ouachita Academy" or "Ouachita Industrial Academy" that was maintained by the Black Baptist church. This is likely the school that Corbin worked at. See [10, p. 135] or [13, p. 159] for basic details about the academy. Second, we stated at the beginning of the section "Corbin's mathematical work" that Corbin's first mathematical publication appeared in an 1882 issue of the _Mathematical Magazine_. In fact, his name appears in an 1879 issue of the periodical _Educational Notes and Queries_[10]. There he is credited with submitting a solution to a problem [11] that had been posed earlier that year by Franklin Pierce Matz, a professor living in King's Mountain, North Carolina. Matz asked the following, "Given \[m^{2}/x^{2}+a= m^{2}/y^{2}+b\] \[= \sqrt{1-m^{2}(1/x^{2}+1/y^{2})+m^{4}/x^{2}y^{2}}\] to find the values of \(x\) and \(y\) on the plan of quadratics." In the published solution, the answer is obtained by eliminating \(y\) and then solving for \(x\). This yields the answer \[x= \,\pm\,m\sqrt{(2+a+b)/(1+b-a-a^{2})},\] \[y= \,\pm\,m\sqrt{(2+a+b)/(1+a-b-b^{2})}.\]
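Because the radicals invite slips, a quick numerical spot-check of this published answer can be reassuring. The sketch below (with arbitrarily chosen values of \(a\), \(b\), and \(m\)) simply confirms that the three expressions in Matz's problem agree when \(x\) and \(y\) take the published values.

```python
from math import sqrt, isclose

# Arbitrary parameters; any a, b, m keeping the radicands positive will do.
a, b, m = 0.1, 0.2, 1.0

# The published answer for x and y (positive branch).
x = m * sqrt((2 + a + b) / (1 + b - a - a**2))
y = m * sqrt((2 + a + b) / (1 + a - b - b**2))

# The three expressions that Matz's problem requires to be equal.
first = m**2 / x**2 + a
second = m**2 / y**2 + b
third = sqrt(1 - m**2 * (1 / x**2 + 1 / y**2) + m**4 / (x**2 * y**2))

assert isclose(first, second) and isclose(first, third)
print(first, second, third)  # all three agree
```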
2307.10646
On Enhancing Reliability in B5G NTNs with Packet Duplication via Multi-Connectivity
Non-Terrestrial Networks (NTNs) can be used to provide ubiquitous 5G and beyond services to un(der)served areas. To ensure reliable communication in such networks, packet duplication (PD) through multi-connectivity is a promising solution. However, the existing PD schemes developed for terrestrial environments may not be reactive enough for the NTN environment where propagation delays are significantly longer. This paper proposes a dynamic PD activation scheme for NTNs based on hybrid automatic repeat request feedback. The scheme aims to reduce the number of duplicated packets while maintaining high reliability. To evaluate the proposed scheme, simulations are conducted in a scenario with two transparent payload low-earth orbit satellites. The results show a significant reduction of 87.2% in the number of duplicated packets compared to blind duplication, with only marginal compromise in reliability.
Mikko Majamaa, Henrik Martikainen, Jani Puttonen, Timo Hämälainen
2023-07-20T07:16:05Z
http://arxiv.org/abs/2307.10646v2
# On Enhancing Reliability in B5G NTNs with Packet Duplication via Multi-Connectivity ###### Abstract Non-Terrestrial Networks (NTNs) can be used to provide ubiquitous 5G and beyond services to un(der)served areas. To ensure reliable communication in such networks, packet duplication (PD) through multi-connectivity is a promising solution. However, the existing PD schemes developed for terrestrial environments may not be reactive enough for the NTN environment where propagation delays are significantly longer. This paper proposes a dynamic PD activation scheme for NTNs based on hybrid automatic repeat request feedback. The scheme aims to reduce the number of duplicated packets while maintaining high reliability. To evaluate the proposed scheme, simulations are conducted in a scenario with two transparent payload low-earth orbit satellites. The results show a significant reduction of 87.2% in the number of duplicated packets compared to blind duplication, with only marginal compromise in reliability. 5G, beyond 5G, Low Earth Orbit (LEO) satellite, non-terrestrial networks, satellite network simulator ## I Introduction Non-Terrestrial Networks (NTNs) can be used to provide ubiquitous 5G and beyond services to un(der)served areas. 3GPP Release-17 (Rel-17) included basic functionalities to provide New Radio (NR), the air interface of 5G, access via NTNs. Rel-18 will enhance the functionalities by improving coverage for handheld devices, considering deployment to higher frequencies, and addressing mobility aspects [1]. A recent report by ITU [2] defines the requirements for satellite radio interfaces of IMT-2020. Whereas the terrestrial 5G has the Ultra-Reliable Low Latency Communications (URLLC) service class with a latency requirement of 1 ms and 99.999% reliability, the satellite equivalent service class is called High-Reliability Communications via satellite (HRC-s) with a reliability requirement of 99.9%. To achieve higher reliability, Packet Duplication (PD) is a promising solution. PD can be achieved through Multi-Connectivity (MC). In MC, a User Equipment (UE) can be connected to multiple base stations simultaneously. New Radio Dual Connectivity (NR-DC) [3] is a form of MC in which MC is achieved at the Packet Data Convergence Protocol (PDCP) layer. In NR-DC, a UE is connected to a Master Node (MN) and Secondary Node (SN) which both provide NR access. MC for TNs has been standardized by 3GPP. This is not the case for NTNs, but MC is one of the Rel-19 candidates. In the related literature, PD is typically performed either blindly (i.e., all the packets are duplicated) to all the users who have established a secondary connection, or the trigger to activate PD is based on a measure that is not reactive enough. The first issue is problematic because of the increased use of network resources. The latter may not be a problem in the TN environment where the propagation paths are relatively short. To this end, there is a need to research methods to quickly and dynamically react to the need to activate PD in the NTN environment. In this paper, a dynamic PD scheme is introduced in which PD is started as a precaution based on negative feedback from Hybrid Automatic Repeat reQuest (HARQ). The paper is organized as follows. In the next section, related work is reviewed. In Section III, PD and the proposed PD scheme are discussed. The proposed scheme is evaluated through system simulations in Section IV. The paper is concluded in Section V. 
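Before the detailed treatment in Section III, the proposed activation rule can be summarized as a per-UE duplication timer that is (re)started whenever a NACK is received as HARQ feedback on the primary connection; packets reaching the master node's PDCP are duplicated towards the secondary node only while the timer runs. The following minimal sketch illustrates only this timer logic (class and parameter names are illustrative and are not taken from any 3GPP specification or simulator API):

```python
class NackTriggeredDuplication:
    """Illustrative sketch: activate packet duplication for a fixed window
    after every HARQ NACK observed on a UE's primary connection."""

    def __init__(self, window_s=0.050):          # e.g. 50 ms, as in Section IV
        self.window_s = window_s
        self.active_until = {}                    # per-UE timer expiry (seconds)

    def on_harq_feedback(self, ue_id, is_nack, now_s):
        # Each NACK resets and restarts the per-UE duplication timer.
        if is_nack:
            self.active_until[ue_id] = now_s + self.window_s

    def should_duplicate(self, ue_id, now_s):
        # Duplicate a PDCP packet towards the SN only while the timer runs.
        return now_s < self.active_until.get(ue_id, float("-inf"))
```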
## II Related Work Reference [4] highlights MC's role as an enabler for reliable low latency communications, concluding that PD, along with load balancing and packet splitting, is one of the fundamental scheduling categories for meeting the requirements of such communications. The authors in [5] study Dual Connectivity (DC) for reliability enhancement through PD in TNs. For the SN addition, the A3 event is used, that is, the neighbor cell Reference Signal Received Power (RSRP) becomes a threshold value less than the MN RSRP. Further, a DC range parameter is introduced that corresponds to a negative threshold for SN addition. Through numerical simulations in small and macro cells with different DC range values and Block Error Rate (BLER) targets, it is concluded that DC is a promising technique for reliability enhancement, particularly in scenarios with relaxed latency requirements. In [6], DC is investigated for improved reliability by dynamically duplicating packets when link quality drops below a threshold and signaling UE to use two links in the uplink direction, resulting in more efficient resource utilization compared to blind duplication in scenarios with both 4G and 5G access. Reference [7] presents a detailed overview of MC architecture and introduces a dynamic algorithm that activates PD based on Signal-to-Noise Ratios (SNRs), transport block sizes, and reliability targets, leading to reduced resource usage, though with marginal savings in high SNR regimes and small packet sizes. In [8], two methods to enhance the performance of PD in TNs are proposed. The first involves the UE notifying the SN when a packet is successfully received, reducing the need for duplicate packet transmissions. However, this approach may not be sufficiently reactive in NTN environments due to long propagation delays. The second method is based on HARQ feedback, where the MN performs PD upon receiving a NACK as HARQ feedback. This method is similar to the one introduced in this paper, but instead of duplicating a single packet after a NACK, we propose starting a timer during which duplication is performed. This allows to better capture consecutive NACKs and adapt to the long propagation delays. It is worth noting that MC with PD for reliability enhancement has not been extensively researched in NTNs. In [9], PD was experimentally tested utilizing 5G TN and a connection to a Starlink Low Earth Orbit (LEO) satellite. The receiver used was a special type of antenna. Neither has research on MC in NTNs been extensively performed. The authors have previously investigated MC for throughput enhancement, for example, in [10] in which algorithms for MC activation and traffic splitting are introduced. Further research is needed to explore the potential of MC with PD for reliability enhancement in NTN scenarios where the existing solutions for TNs are not sufficient due to the longer propagation delays. ## III Packet Duplication ### _General_ In MC, a UE can be connected to multiple base stations simultaneously. NR-DC is a form of MC in which MC is achieved at the PDCP layer. In NR-DC, a UE is connected to a MN and SN which both provide NR access. The SN addition process is initiated by the current serving Next Generation Node B (gNB) of the UE. Based on some trigger, typically RSRP measurement received from the UE, it sends a request to a candidate SN. The MN and SN can exchange user and control plane data through the Xn interface. 
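For the RSRP-based trigger mentioned above, the check reduces to an A3-style comparison of a candidate cell against the serving cell. A rough sketch is given below; the function and parameter names are ours, and the 10 dB default mirrors the offset used later in the simulation scenario:

```python
def sn_addition_triggered(serving_rsrp_dbm, neighbor_rsrp_dbm, offset_db=10.0):
    """A3-style secondary-node addition check: the neighbor-cell RSRP plus an
    offset reaches the serving-cell RSRP. A negative offset corresponds to the
    'DC range' idea discussed in the related work."""
    return neighbor_rsrp_dbm + offset_db >= serving_rsrp_dbm
```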
When a packet arrives at the PDCP layer of the MN, it can duplicate the packet, and send a copy of it to the SN. In this way, the data is transmitted through both the MN and SN to increase the probability of reception of the data at the UE side. PD is illustrated in Fig. 1. The UE hosts separate protocol stacks for MN and SN transmissions and combines them at the PDCP layer. The PDCP layer also detects and drops packets already received. ### _Dynamic Packet Duplication_ After activating MC to a UE, the MN can start sending duplicate packets for the SN to send to the UE. This can be done blindly, that is, all the packets are duplicated. However, this significantly increases the utilization of network resources, thus necessitating the exploration of more intelligent methods to activate PD. In this work, a solution to achieve this based on HARQ feedback is proposed. In HARQ, the UE sends feedback based on the data it receives (ACK) or doesn't (NACK). The UE knows to expect data from control channel signals. Commonly, the control channel is more robust than the data channel, so the control messages are less likely to get lost. In this work, PD is activated for a parametrizable period after a NACK is received as HARQ feedback in the primary connection. When a NACK is received, the gNB (i.e., the MN) starts a duplication timer for the corresponding UE. If the timer is running when a packet arrives at the MN's PDCP, it is duplicated. Each NACK resets and restarts the timer. Note that, after receiving a NACK as HARQ feedback, the gNB can retransmit data to the UE, which may be successfully received. However, PD activation serves as a precaution against possible successive failures due to factors like Non-Line-of-Sight (NLOS) conditions. Additionally, data already forwarded from the PDCP layer may be lost, but this could be mitigated by maintaining a buffer of packet copies at the PDCP that can be forwarded to the SN in case a NACK is received as HARQ feedback. ### _Delivery Options and Reordering in PDCP_ The receiving PDCP performs duplicate detection, deciphering, and integrity verification of received packets [11]. If any of these operations fail, the packet is dropped, which is indicated to the upper layer. If the packet is not dropped, PDCP chooses between two delivery options: i) out-of-order delivery and ii) in-order delivery with reordering. The first one is the simpler mode in which the received packets are delivered to the upper layer without reordering. In the second option, PDCP tries to deliver packets in-order to the upper layer based on the sequence numbers of the packets. When out-of-order packets are received, a timer is started. If the missing packets are received when the timer is running, the packets are delivered and the timer is stopped. If the timer expires, the packets with sequence numbers below the one that caused the timer to start are delivered. In addition, the consecutively associated packets (according to their sequence numbers) starting from the one that caused the timer to start are delivered. If all the packets in the buffer are still not forwarded, the timer is reset and started again. In-order delivery at PDCP is illustrated in Fig 2. Fig. 1: PD through MC illustrated. The MN and SN are connected through the Xn interface. The MN is connected to the core network through the Ng interface. Both the gNBs provide NR access to the UE through the Nr-U interface. In the figure, first, packet 0 is received from the lower layer and delivered up. 
After that, packet 3 is received from the lower layer, that is, packets 1 and 2 are missing (illustrated as grey color in the figure). This event triggers the reordering timer. While the timer is running, packets 2, 4, and 6 are received, but not the still-missing packet 1. Eventually, the reordering timer expires, and packets 2, 3, and 4 are delivered to the upper layer (if packet 1 is received from now on, it will be discarded because it is not expected anymore). Packet 6 cannot be forwarded, since packet 5 is missing so the reordering timer is reset. Then packet 5 arrives, packets 5 and 6 are delivered to the upper layer and the timer is stopped. The reordering window's length is configured by upper layers [12] and can vary from 0 ms (out-of-order delivery) to 3 s, or even set to infinity for forced in-order delivery. In NTN environments, the propagation delay differences between paths in MC can be significantly higher than in TN environments. However, the currently defined values for the reordering window's length should still be able to handle propagation delay differences, even in Geostationary Orbit (GEO) satellite cases. Although, an increase in UE's buffering requirement is expected in MC involving NTN compared to TN MC when in-order delivery is required. Further, if the propagation delays between the paths differ, the packets from the faster path require extra buffering in the receiver's PDCP before delivering up, thus increasing the packet delay of these packets as well. This needs to be considered in delay-sensitive applications. ## IV Evaluation ### _5G Non-Terrestrial Network Simulator_ The simulator used for the assessment is a 5G NTN System-Level Simulator (SLS) [13], which is built on top of Network Simulator 3 (ns-3) [14] and its 5G LENA module [15]. ns-3 is typically used for educational and research purposes, and users can add new modules to the simulator. The 5G LENA module is designed to simulate 5G networks, but it cannot simulate NTNs. Therefore, the necessary components to simulate NTNs were implemented in the 5G NTN SLS using 5G LENA as a starting point. The 5G LENA module implements NR Physical (PHY) and Medium Access Control (MAC) features, while the upper layers of the UE/gNB stack are reused from the ns-3 Long Term Evolution (LTE) module [16]. The ns-3 core provides the higher layers such as transport and network layers, while the link layer is abstracted with Link-to-System (L2S) mapper and Modulation and Coding (MODCOD)-specific SINR to BLER mapping curves. SINR is computed for each packet, and using the mapper, BLER is deduced. The simulator has been calibrated in the past using system-level calibration scenarios from 3GPP Technical Report (TR) 38.821 [17]. The calibration scenarios provide different assumptions, such as bands (S-band/Ka-band), terminal types (VSAT, handheld), and frequency reuse patterns (reuse 1, 3, 2+2), and they can be adjusted as needed. Additionally, hybrid TN-NTN scenarios can be studied. Channel and antenna/beam modeling from TR 38.811 [18] have also been implemented in the simulator. Channel modeling is elaborated on in the following subsection. The MC feature in the simulator was implemented following the specifications outlined in 3GPP Technical Specification (TS) 37.340 [3]. ### _Channel Model_ The considered NTN channel model is based on TR 38.811. In the NTN environment, several attenuation factors affect the propagation of the signal. 
The total path loss (in dBs) is defined as \[PL=PL_{b}+PL_{g}+PL_{s}, \tag{1}\] where \(PL_{b}\) is the basic path loss, \(PL_{g}\) is the attenuation due to atmospheric gasses, and \(PL_{s}\) is the attenuation due to scintillation. The basic path loss in dB units is modeled as \[PL_{b}=FSPL(d,f_{c})+SF+CL(\alpha,f_{c}), \tag{2}\] where \(FSPL(d,f_{c})\) is the Free Space Path Loss, \(d\) is the slant range between the satellite and UE, \(f_{c}\) is the carrier frequency, \(SF\) (\(\sim\)\(N(0,\sigma_{SF}^{2})\)) is the Shadow Fading loss, \(CL(\alpha,f_{c})\) is the Clutter Loss, and \(\alpha\) is the elevation angle. CL refers to the reduction in signal power due to the presence of nearby buildings and objects on the ground, which attenuates the signal. CL is negligible when the UE is in Line-of-Sight (LOS) conditions. The values of \(CL\) and \(\sigma_{SF}^{2}\) can be found in Tables 6.6.2-1 to 6.6.2-3 in TR 38.811 for different scenarios and elevation angles. The reduction in signal strength caused by the absorption of atmospheric gases is primarily influenced by frequency, elevation angle, altitude above sea level, and water vapor density (absolute humidity). Typically, at frequencies below 10 GHz, this attenuation can be disregarded. However, it is advisable to consider this calculation for frequencies above 1 GHz when dealing with elevation angles below 10 degrees. Further, rain and cloud attenuation are considered negligible for frequencies below 6 GHz. Now, the received signal power of a user can be computed as \[C=EIRP+G_{\text{Rx}}-PL, \tag{3}\] Fig. 2: Illustration of in-order delivery at PDCP. where EIRP is the Effective Isotropic Radiated Power from the satellite toward the user and \(G_{\text{Rx}}\) is the receiver antenna gain. For more detailed link budget analysis, refer to [19]. ### _Scenario and Assumptions_ In the considered scenario (see Fig. 3), there are two transparent payload satellites, that is, the gNBs are on the ground and the satellites repeat the Nr-U signal. The Xn delay is considered 2 ms. Out-of-order delivery in PDCP is considered. Each satellite has its center beam directed to the same target point, operating on separate frequencies to avoid interference. Only the center beams are considered for statistics collection. There are two additional layers of wraparound beams for each of the satellites that are used to introduce interference. There is one full buffer UE in each of the wraparound beams. Frequency Reuse Factor (FRF) 3 is considered. The satellites fly from east to west in longitudinal orbits around 7.56 km/s at 600 km orbit [17]. Ten UEs are uniformly placed around the target point of the center beams. These UEs use cell selection to connect to the strongest cell and require reliable transmission with low tolerance for errors. The UEs require Constant Bit Rate (CBR) traffic. Only user link is considered. MC is activated for a UE when the RSRP of a neighbor cell plus an offset reaches the level of the serving cell, that is, A3 event (different MC activation schemes can be utilized based on specific requirements and needs). With the chosen offset (10 dBm), all the UEs requiring high reliability get MC activated. The UEs have Doppler mobility, simulating speed without actual movement, which is used for correlating channel updates, such as shadowing. Actual user mobility has not been implemented. The channel condition from LOS to NLOS and vice versa changes dynamically based on the position of the serving satellite in relation to a UE. 
If the change in position exceeds a cube with side lengths of 3.5 km (chosen experimentally to account for changes to the LOS/NLOS conditions), a new LOS condition for the channel is randomly chosen based on [18, Table VI.6.1-1]. For example, in a rural scenario with an elevation angle of 60\({}^{\circ}\), the new LOS probability is 94%. While more complex models may be useful in the future, the current model is sufficient to evaluate the benefits of PD, and the conclusions hold with more complicated models. Simulations are run with three configurations: PD off, blind PD, and PD based on HARQ feedback. Each configuration is run 80 times with different RNG seeds, leading to variability in UE positions. Cumulative Distribution Function (CDF) statistics are combined from different RNG runs, while scalar statistics are averaged. The simulation time is 10 s with a warmup time of 0.5 s during which statistics are not collected. For PD based on HARQ feedback, the duplication duration after receiving NACK as HARQ feedback is 50 ms. In general, it should be set such that it captures consecutive NACKs, that is, at least the roundtrip time and processing delays. The simulation parameters are listed in Table I. ### _Results_ Figure 4 shows the CDF of the packet success rate for the users. For convenience, the statistics are summarized in Table II. When PD is not used, the mean success rate is 98.38%, whereas when PD is used it is 99.86% and 99.82% respectively for blind and HARQ-based duplications. Both PD schemes enhance the success percentage by around 1.5 percentage points. Further, the HARQ-based duplication scheme performs marginally worse than the blind duplication scheme. Figure 5 shows the total number of PDCP packets duplicated (i.e., for all the users, averaged over the RNG runs) for the different duplication schemes. The number of PDCP duplicates for the HARQ-based scheme is 12.78% of the duplicates in the blind scheme. Both PD schemes significantly improve reliability, with the blind scheme performing slightly better than the HARQ-based scheme. However, the blind scheme results in more wasted network resources due to the duplication of all PDCP packets and the subsequent dropping of duplicates at the receiving PDCP. \begin{table} \begin{tabular}{l|l} \hline **Parameter** & **Value** \\ \hline Simulation time & 10.0 s \\ Warmup time & 0.5 s \\ Satellite payload & Transparent \\ Satellite mobility & Moving \\ UE mobility & Doppler (3 km/h) \\ Beam deployment & Quasi-Earth Fixed \\ Satellite starting positions & Lat: 62.38\({}^{\circ}\)/61.38\({}^{\circ}\), Lon: 20.00/20.00\({}^{\circ}\) \\ Center beam target points & Lat: 62.25\({}^{\circ}\), Lon: 25.74\({}^{\circ}\) \\ NTN channel condition & Dynamic \\ UEs per center beam & 10 \\ Wraparound & Two layers of beams with one full \\ & buffer UE in each \\ NTN scenario & Rural \\ Bandwidth per NTN beam & 10 MHz \\ FRF & 3 \\ NTN carrier frequency & 2 GHz (S-band) \\ Satellite orbit & LEO 600 km \\ Satellite parameter set & Set 1 [17, Table VI.6.1.1-1] \\ UE antenna type & Omnidirectional \\ Traffic & CBR with UDP \\ UDP Packet Size & 32 B \\ UDP Packet Interval per UE & 20 ms \\ Offset for SN addition & 10 dBm \\ Duplication time after NACK & 50 ms \\ received as HARQ feedback & \\ Xn delay & 2 ms \\ HARQ & Enabled with one retransmission \\ Scheduler & Round Robin \\ RNG Runs & 80 \\ \hline \end{tabular} \end{table} TABLE I: Simulation parameters. Fig. 3: Simulation scenario at the beginning of a simulation.
The lines depict the relations of the satellites toward the target points of the beams. ## V Conclusion In this paper, reliability through PD in NTNs was researched. Through PD, reliability can be increased by sending the same data over different paths. To decrease the number of excess duplicates, a dynamic PD scheme was proposed. By system-level simulations, the proposed scheme was evaluated in a scenario with two transparent payload LEO satellites. The number of duplicated PDCP packets was reduced by 87.22% compared to blind duplication while only marginally affecting the reliability. In the future, PD activation could be enhanced by considering multiple HARQ feedbacks or error rates over a period of time. Currently, packets below the PDCP layer cannot be duplicated after PD activation, but this limitation could be overcome through cross-layer designs, such as buffering packets at the PDCP layer until feedback is received. Additionally, while the 99.9% reliability requirement of HRC-s was nearly met, further work is needed to closely address constellation and scenario design to fully achieve this requirement. ## Acknowledgment This work has been partially funded by the European Union Horizon-2020 Project DYNASAT (Dynamic Spectrum Sharing and Bandwidth-Efficient Techniques for High-Throughput MIMO Satellite Systems) under Grant Agreement 101004145. The views expressed are those of the authors and do not necessarily represent the project. The Commission is not liable for any use that may be made of any of the information contained therein.
2303.13860
Generalized Sparse Regression Codes for Short Block Lengths
Sparse regression codes (SPARC) connect the sparse signal recovery framework of compressive sensing with error control coding techniques. SPARC encoding produces codewords which are \emph{sparse} linear combinations of columns of a dictionary matrix. SPARC decoding is accomplished using sparse signal recovery algorithms. We construct dictionary matrices using Gold codes and mutually unbiased bases and develop suitable generalizations of SPARC (GSPARC). We develop a greedy decoder, referred as match and decode (MAD) algorithm and provide its analytical noiseless recovery guarantees. We propose a parallel greedy search technique, referred as parallel MAD (PMAD), to improve the performance. We describe the applicability of GSPARC with PMAD decoder for multi-user channels, providing a non-orthogonal multiple access scheme. We present numerical results comparing the block error rate (BLER) performance of the proposed algorithms for GSPARC in AWGN channels, in the short block length regime. The PMAD decoder gives better BLER than the approximate message passing decoder for SPARC. GSPARC with PMAD gives comparable and competitive BLER performance, when compared to other existing codes. In multi-user channels, GSPARC with PMAD decoder outperforms the sphere packing lower bounds of an orthogonal multiple access scheme, which has the same spectral efficiency.
Madhusudan Kumar Sinha, Arun Pachai Kannu
2023-03-24T08:48:59Z
http://arxiv.org/abs/2303.13860v1
# Generalized Sparse Regression Codes ###### Abstract Sparse regression codes (SPARC) connect the sparse signal recovery framework of compressive sensing with error control coding techniques. SPARC encoding produces codewords which are _sparse_ linear combinations of columns of a dictionary matrix. SPARC decoding is accomplished using sparse signal recovery algorithms. We construct dictionary matrices using Gold codes and mutually unbiased bases and develop suitable generalizations of SPARC (GSPARC). We develop a greedy decoder, referred as match and decode (MAD) algorithm and provide its analytical noiseless recovery guarantees. We propose a parallel greedy search technique, referred as parallel MAD (PMAD), to improve the performance. We describe the applicability of GSPARC with PMAD decoder for multi-user channels, providing a non-orthogonal multiple access scheme. We present numerical results comparing the block error rate (BLER) performance of the proposed algorithms for GSPARC in AWGN channels, in the short block length regime. The PMAD decoder gives better BLER than the approximate message passing decoder for SPARC. GSPARC with PMAD gives comparable and competitive BLER performance, when compared to other existing codes. In multi-user channels, GSPARC with PMAD decoder outperforms the sphere packing lower bounds of an orthogonal multiple access scheme, which has the same spectral efficiency. sparse signal recovery, error control coding, greedy algorithm, parallel search, multi-user channels, non-orthogonal multiple access ## I Introduction Shannon's seminal paper on information theory established the existence of information encoding and decoding techniques that guarantee almost error-free communications across noisy channels [1]. Extensive work has been carried out to develop such efficient error control coding techniques for additive white Gaussian noise (AWGN) channels [2]. Error correcting codes such as turbo codes and LDPC codes are widely used in various communication systems today, which have very small block error rates for large block lengths. In our work, we consider error control coding in the short block length regime. In many communication systems, the control channel information is typically sent over short block lengths. Several use cases in the fifth generation (5G) of mobile networks such as ultra-reliable low-latency communication (URLLC) and massive machine type communications in IoT applications require short block lengths. For instance, in URLLC scenarios such as industrial automation, autonomous vehicles and augmented/virtual reality, short length codes are required to meet the low latency requirements. A comprehensive study on the performance of existing codes in the short block length regime has been done in [3, 4]. Sparse regression codes (SPARC) [5, 6] connect the sparse signal recovery framework of compressive sensing [7] with error control coding techniques. In SPARC, a dictionary matrix (design matrix) \(\mathbf{A}\) of size \(N\times L\) with \(L>>N\) is partitioned into \(K\) equal sub-blocks (sections), with each sub-block having \(\frac{L}{K}\) columns. Based on the information bits, one column is chosen from each block and the codeword for transmission is obtained as the sum of chosen columns. We can represent the codeword as \(\mathbf{s}=\mathbf{A}\mathbf{x}\), where \(\mathbf{x}\) is a \(K\)-sparse signal with exactly \(K\) non-zero entries. 
The non-zero entries of \(\mathbf{x}\) are fixed and known in the standard SPARC [8] and they are chosen from a PSK constellation (based on information bits) in the modulated SPARC [9]. In [5, 6], the SPARC codes using Gaussian dictionary matrices were proven to achieve channel capacity for AWGN channels, as the block lengths approach infinity. Several power allocation (across the sub-blocks) and spatial coupling techniques have been developed for SPARC [6, 8, 9, 10, 11, 12, 13, 14] to improve the empirical performance of SPARC codes. In [8, 9] approximate message passing (AMP) decoders have been developed for standard and modulated SPARC, which guarantee that sub-block error rate goes to zero for all rates below capacity in AWGN channels. It has been shown empirically that the AMP decoder derived for Gaussian dictionary matrices work well with other dictionary matrices like the ones based on Hadamard matrices [8, 13] and fast Fourier transform (FFT) matrices [9]. Clipping and generalized AMP are discussed in [15], in order to improve finite block length performance of SPARC at low to medium code rates (bits per channel use). Iterative power allocation techniques are given in [11], which improve the performance of SPARC in high code rates. In our work, we consider SPARC for short block lengths, with \(N\leq 128\) and code rate \(\approx 0.5\) bits per channel use (bpcu), a regime which has gained sufficient interest in the recent years [3, 4] and where the methods to improve finite length performance of SPARC doesn't work [11, 15]. We construct good deterministic dictionary matrices, utilizing the existing literature on generating a large set of sequences with good correlation properties. Specifically, we use Gold codes [16] from the CDMA literature and mutually unbiased bases (MUB) [17] from the quantum information theory. With these constructions, the number of columns in \(\mathbf{A}\) is \(L\approx N^{2}\), where \(N\) is the length of each column and the maximum (normalized) cross correlations among the columns (usually referred as mutual coherence in compressive sensing) is approximately \(\frac{1}{\sqrt{N}}\). We choose the sparsity level \(K\) close to \(\sqrt{N}\), which typically ensures that the bpcu falls in the regime of interest when \(N\leq 128\). The contributions of our work are summarized below. We construct dictionary matrices using Gold codes and mutually unbiased bases, which have not been previously used in SPARC. We generalize the SPARC by possibly allowing sub-blocks of different sizes, which we refer as sub-block structure encoding (SSE) scheme. We also allow modulation of the \(K\) selected columns using information symbols from finite alphabet constellations. We give an algorithm to partition the given dictionary matrix with a total of \(L\) columns into \(K\) sub-blocks so that size of each sub-block is power of 2 and the number of information bits carried by SSE is maximized. We also generalize SPARC by entirely eliminating the sub-block structure, which we refer as sub-block free encoding (SFE) scheme. In SFE, we allow choosing _any_\(K\) columns from a total of \(L\) columns from the dictionary matrix. We give an iterative procedure which uniquely maps the information bits to one of the \(\binom{L}{K}\) combinations. Our iterative procedure is quite efficient and eliminates the need for any look-up tables. 
We develop a simple greedy algorithm for the generalized SPARC (GSPARC), which we refer as match and decode (MAD) algorithm, to recover the sparse signal (and subsequently the information bits) from the noisy observation of the codeword. Our MAD algorithm inherently exploits the finite alphabet nature of the modulation symbols and performs better than the conventional orthogonal matching pursuit (OMP) algorithm in AWGN channels. We give analytical recovery guarantees of the MAD decoder, in terms of the coherence of the dictionary matrix and the coherence parameter of the modulating constellation symbols. We improve the MAD algorithm by introducing a parallel search mechanism, which we refer as parallel MAD (PMAD) algorithm. Our MAD and PMAD decoders do not require the knowledge of the channel noise variance. Using numerical simulations, we show that our PMAD algorithm performs better than the AMP algorithm [9] in AWGN channels, for short block lengths. We also show that our PMAD with GSPARC provides competing block error rate performance in the short block lengths, when compared with several existing codes [3, 4]. In addition, we also show that SSE can be used in multi-user channels, such as multiple-access, broadcast and interference channels. For some combinations of code rates, block lengths and number of users, we show that SSE with PMAD decoder outperforms the sphere packing lower bounds of an orthogonal multiple access scheme. The paper is organized as follows: In Section II, we provide the details of the encoding techniques for GSPARC. In Section III, we give the details of dictionary matrix construction. In Section IV, we describe the decoding algorithms and their analytical performance guarantees. In Section V, we discuss on how SSE and PMAD can be employed in multi-user communication channels. In Section VI, we present block error rate performance comparison in AWGN channels. In Section VII, we present conclusions and give directions for future work. ## II Generalized SPARC Encoding Procedure ### _Sub-block Structure Encoding_ Consider a dictionary matrix \(\mathbf{A}\) of size \(N\times L\), with unit norm columns and \(L\geq N\). The codewords for messages are obtained using _sparse_ linear combinations of columns of the matrix \(\mathbf{A}\). In SSE, we fix the _sparsity_ level as \(K\) with \(K\leq N\) and partition the dictionary matrix \(\mathbf{A}\) into \(K\) subblocks (also referred as sections) such that \(\mathbf{A}=[\mathbf{A}_{1}\cdots\mathbf{A}_{K}]\) with \(k^{th}\) sub-block \(\mathbf{A}_{k}\) having a size of \(N\times L_{k}\) and \(\sum\limits_{k=1}^{K}L_{k}=L\). We assume that the number of columns in each sub-block is a power of 2 and the sub-blocks can possibly have unequal sizes. Based on the information bits, one column from each sub-block is selected and transmit codeword is obtained as a linear combination \[\mathbf{s}=\sum\limits_{k=1}^{K}\beta_{k}\mathbf{a}_{\alpha_{k}}, \tag{1}\] where \(\mathbf{a}_{\alpha_{k}}\) is a column from sub-block \(\mathbf{A}_{k}\) and the _modulation symbol_\(\beta_{k}\) is chosen from an \(M\)-ary constellation \(\mathcal{M}\). Now, the codeword in (1) can be represented as \[\mathbf{s}=\mathbf{A}\mathbf{x}, \tag{2}\] where \(\mathbf{x}\) of size \(L\times 1\) is a sparse signal with only \(K\) non-zero entries from the constellation \(\mathcal{M}\). We also allow the special case of \(M=1\), for which \(\beta_{k}=+1,\forall k\). Let \(\mathcal{S}\) denote the support of \(\mathbf{x}\). 
Since the symbol \(\beta_{k}\) carries \(\log M\) bits (assuming \(M\) is a power of 2) and the columns of sub-block \(\mathbf{A}_{k}\) are indexed using \(\log L_{k}\) bits, the total number of information bits encoded in the codeword in (1) is \[N_{b}=K\log M+\sum\limits_{k=1}^{K}\log L_{k}. \tag{3}\] We have used base 2 for \(\log\) throughout the paper. We define the _code rate_ of the encoding scheme in units of bits per real channel use (bpcu) as the number of bits transmitted per real dimension utilized. If \(\mathbf{A}\) is a real matrix and the constellation \(\mathcal{M}\) is real (such as PAM, BPSK), the code rate is \(\frac{N_{b}}{N}\) bpcu. On the other hand, if \(\mathbf{A}\) is a complex matrix and/or the constellation symbols are complex, the code rate is \(\frac{N_{b}}{2N}\) bpcu. SPARC encoding in [8, 9] mandates all the sub-blocks to be of equal sizes. Since we allow for unequal sub-block sizes, SSE scheme (1) is a generalization of the SPARC encoder. ### _Sub-block Partitioning Algorithm_ Deterministic construction of sequences with good correlation properties exist in the literature of CDMA [16, 18, 19] and quantum information theory [17, 20]. Dictionary matrices based on these deterministic constructions are good candidates due to their small coherence values. However, these constructions exist only for certain values of \(N\) and \(L\). In these cases, we need to have a proper sub-block partitioning algorithm such that the number of information bits (3) conveyed through the codeword (1) is maximized for a given \(K\). Given the total number of columns \(L\) in the dictionary matrix and the required number of partitions \(K\), we want to optimize \(\sum_{k=1}^{K}\log(L_{k})\) with the constrain that each \(L_{k}\) is a power of \(2\) and \(\sum_{k=1}^{K}L_{k}\leq L\). If we allow \(L_{k}\) to take any real value, the solution for the above optimization problem is readily obtained as \(L_{1}=L_{2}=\cdots=L_{K}=\frac{L}{K}\), that is, all the sub-blocks should be of equal size. To meet the power of 2 constraint, we set the size of the smallest sub-block as \(L_{1}=2^{\lfloor\log\frac{L}{K}\rfloor}\), the largest power of 2 number which is less than or equal to \(\frac{L}{K}\). After this step, the problem reduces to divide \(L-L_{1}\) columns into \(K-1\) sub-blocks. Proceeding in the same manner iteratively, the optimal partitioning sizes are obtained as \[L_{k}=2^{\lfloor\log\frac{L-\sum_{k=1}^{k-1}L_{m}}{K-k+1}\rfloor},\ \ k=1,\cdots,K. \tag{4}\] For example, if \(L=23\) and \(K=3\), the optimal partition sizes are \(L_{1}=4\), \(L_{2}=L_{3}=8\) and the remaining \(3\) columns are unused/discarded. From the above procedure, it also follows that the size of the largest sub-block can be at most twice the size of the smallest sub-block, that is, \(L_{K}\leq 2L_{1}\). ### _Sub-block Free Encoding_ In SFE, we eliminate the sub-block structure and choose _any_ subset of \(K\) columns from a total of \(L\) columns and modulate the chosen columns using symbols from an \(M\)-ary constellation. SFE scheme is another generalization of the SPARC encoding scheme from [8, 9]. The number of bits encoded by SFE scheme will be \[N_{b}=K\log M+\lfloor\log\binom{L}{K}\rfloor, \tag{5}\] which will be larger than or equal to that of the SSE scheme (3). Unlike SSE scheme, mapping bits into a subset of \(K\) columns is not straightforward. We provide an iterative scheme to achieve this feat without using any look-up table. 
Specifically, we provide a one-to-one mapping between non-negative integers and combinations of \(K\) objects out of \(L\) objects. In our SFE-GSPARC, a sequence of \(N_{b}\) bits (representing a non-negative integer) is mapped to a unique combination of \(K\) columns out of the total \(L\). We use lexicographic ordering within each combination to form a unique ordered set (word) representing a combination. We then use lexicographic ordering over all possible words to list all possible combinations. This ordering allows us to get a one-to-one mapping between bits and object combinations. We provide a numerically efficient method to map bits to combinations and vice versa, without generating and storing the actual list. Let \(\mathcal{B}=\{0,1,2,3,...,L-1\}\) be a set of \(L\) distinct objects, where we use the first \(L\) non-negative integers as an abstraction for a set of \(L\) different objects. The total number of combinations of \(K\) objects out of \(L\) objects is given by \(\binom{L}{K}\). Let a combination be represented uniquely by the ordered set \(\mathbf{b}=(b_{0},...,b_{K-1})\) such that \(0\leq b_{0}<b_{1}<...<b_{K-1}\leq L-1\). The lexicographic ordering on the ordered set representation of the combinations allows us to list the combinations against non-negative integers. For example, with \(L=5\) and \(K=3\), there are \(\binom{5}{3}=10\) unique combinations. The lexicographic listing of these combinations against non-negative integers is given in Table I. We note that, out of \(\binom{L}{K}\) combinations, \(\binom{L-i}{K}-\binom{L-(i+1)}{K}=\binom{L-(i+1)}{K-1}\) combinations start with object \(i\) where \(i\in\{0,1,...,L-K\}\). The decimal indices of combinations starting with object \(i\) start at \(\binom{L}{K}-\binom{L-i}{K}\) and end at \(\binom{L}{K}-\binom{L-(i+1)}{K}-1\). For the decimal index \(d\in\{0,\cdots,\binom{L}{K}-1\}\) represented by the combination \(\mathbf{b}=(b_{0},b_{1},...,b_{K-1})\), we will have \(b_{0}=i\), if \[\binom{L}{K}-\binom{L-i}{K}\leq d<\binom{L}{K}-\binom{L-(i+1)}{K}. \tag{6}\] In Appendix A, we show that the non-negative integer \(i\) satisfying the above constraint is upper bounded as \[i\leq\left\lfloor\left(L-\frac{(K-1)}{2}\right)\left\{1-\left(1-\frac{d}{\binom{L}{K}}\right)^{\frac{1}{K}}\right\}\right\rfloor\ \ :=\widetilde{i}(L,K). \tag{7}\] Given \(L\) and \(K\), we first find \(\widetilde{i}(L,K)\) from (7) and check if \(\widetilde{i}\) satisfies (6) for the given \(d\). If not, we keep decrementing \(\widetilde{i}\) by one, until we find the integer \(i\) satisfying the constraint (6). Once the first object \(b_{0}\) is chosen, the problem is reduced to choosing \(K-1\) objects out of the remaining \(L-b_{0}-1\) objects, corresponding to the decimal index \(d-\left[\binom{L}{K}-\binom{L-b_{0}}{K}\right]\). Hence, the same procedure can be recursively applied until the last object \(b_{K-1}\) is chosen. In our simulations, we find that either \(\widetilde{i}\) in (7) or \(\widetilde{i}-1\) always satisfies the requirement in (6).
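The counting argument above translates directly into code. The sketch below is a plain linear-scan version of the index-to-combination map and its inverse (it does not use the accelerated starting point \(\widetilde{i}(L,K)\) from (7), and the function names are ours, not the paper's):

```python
from math import comb

def index_to_combination(d, L, K):
    """Return the d-th K-subset of {0,...,L-1} in lexicographic order,
    using the counting argument around (6): C(L-c-1, K-1) combinations
    start with object c."""
    chosen, c = [], 0
    for k in range(K):
        # Skip over all combinations whose next object is smaller than c's successor.
        while d >= comb(L - c - 1, K - k - 1):
            d -= comb(L - c - 1, K - k - 1)
            c += 1
        chosen.append(c)
        c += 1
    return chosen

def combination_to_index(b, L, K):
    """Inverse map: lexicographic index of an increasing K-subset b of {0,...,L-1}."""
    d, prev = 0, -1
    for k, bk in enumerate(b):
        for c in range(prev + 1, bk):
            d += comb(L - c - 1, K - k - 1)
        prev = bk
    return d

# Reproduces the L = 5, K = 3 listing of Table I, e.g. index 6 -> (1, 2, 3).
assert index_to_combination(6, 5, 3) == [1, 2, 3]
assert combination_to_index([1, 2, 3], 5, 3) == 6
```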
Using the same counting argument, given the set of \(K\) objects \(\{b_{0},\cdots,b_{K-1}\}\) (from a total of \(L\)), we can find the decimal index corresponding to the lexicographic ordering as \[d =\binom{L}{K}-\sum_{k=0}^{K-2}\left[\binom{L-b_{k}}{K-k}-\binom{L -b_{k}-1}{K-k-1}\right]-\binom{L-b_{K-1}}{1},\] \[=\binom{L}{K}-\left[\sum_{k=0}^{K-2}\binom{L-b_{k}-1}{K-k} \right]-\binom{L-b_{K-1}}{1}.\] ## III Dictionary Matrix Construction The choice of the dictionary matrix \(\mathbf{A}\) plays a vital role in the block error performance. It is desirable that the dictionary matrix has a large number of columns (as the number of information bits increases with \(L\), for fixed \(N\) and \(K\)) with small correlation among the columns (for good sparse signal recovery performance), which is characterized by the the mutual coherence of the dictionary matrix \(\mathbf{A}\), defined as, \[\mu(\mathbf{A})=\max_{p\neq q}\frac{|\langle\mathbf{a}_{p},\mathbf{a}_{q}\rangle|}{\|\mathbf{a} _{p}\|\|\mathbf{a}_{q}\|}. \tag{8}\] In this paper, we consider dictionary matrix constructions using Gold code sequences from CDMA literature and mutually unbiased bases from quantum information theory, which have small coherence values. ### _Gold Codes_ Gold codes are binary sequences with alphabets \(\{\pm 1\}\). Considering lengths of the form \(N=2^{n}-1\), where \(n\) is any positive integer, there are \(2^{n}+1\) Gold sequences. By considering all the circular shifts of these sequences, we get \(2^{2n}-1\) sequences. When dictionary matrix columns are constructed with these \(2^{2n}-1\) sequences normalized to unit norm, the resulting cross-correlation between any two columns of the dictionary matrix matrix takes only three possible values given as \(\frac{-1}{N}\), \(\frac{-t(n)}{N}\) and \(\frac{t(n)-2}{N}\) where \(t(n)\) is given by [16], \[t(n)=\begin{cases}1+2^{\frac{n+1}{n}},&n\text{ is odd,}\\ 1+2^{\frac{n+2}{2}},&n\text{ is even.}\end{cases} \tag{9}\] The mutual coherence of the gold code dictionary matrix is thus given by \[\mu=\frac{t(n)}{N} \tag{10}\] and we note that odd value of \(n\) leads to smaller values of mutual coherence. We can add any column of the identity matrix to the Gold code dictionary matrix, to get a total of \(L=2^{2n}\) columns (which is a power of 2), with the mutual coherence same as (10). Storing such a dictionary matrix will require \(N(N+1)^{2}\) bits. ### _Mutually Unbiased Bases_ Two orthonormal bases \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) of the \(N\)-dimensional inner product space \(\mathbb{C}^{N}\) are called mutually unbiased if and only if \(|\langle\mathbf{x},\mathbf{y}\rangle|=\frac{1}{\sqrt{N}}\) for any \(\mathbf{x}\in\mathcal{U}_{1}\) and \(\mathbf{y}\in\mathcal{U}_{2}\). A collection of orthonormal bases of \(\mathbb{C}^{N}\) is called mutually unbiased if all bases in the collection are pairwise mutually unbiased. Let \(Q(N)\) denote the maximum number of orthonormal bases of \(\mathbb{C}^{N}\), which are pairwise mutually unbiased. In [17], it has been shown that \(Q(N)\leq N\) (excluding the standard basis), with equality if \(N\) is a prime power. Explicit constructions are also given in [17] for getting \(N\) MUB in \(N\)-dimensional complex vector space \(\mathbb{C}^{N}\), if \(N\) is a prime power. When \(N=2^{n}\), with \(n\geq 2\), the \(N\) MUB unitary matrices \(\{\mathbf{U}_{1},\cdots,\mathbf{U}_{N}\}\) have the following properties. 
* The entries in all the \(N\) unitary matrices belong to the set \(\{\frac{1}{\sqrt{N}},\frac{-1}{\sqrt{N}},\frac{+j}{\sqrt{N}},\frac{-j}{\sqrt{ N}}\}\). This follows from the construction of MUB given in [17]. Storing all these \(N\) unitary matrices will require \(2N^{3}\) bits. * For \(N\) up to 512, we find that the inner products \(\langle\mathbf{x},\mathbf{y}\rangle\) between \(\mathbf{x}\in\mathbf{U}_{i}\) and \(\mathbf{y}\in\mathbf{U}_{m}\) for \(i\neq m\) is given by \[\langle\mathbf{x},\mathbf{y}\rangle\in\begin{cases}\{\frac{1}{\sqrt{N}}\,\frac{e^{i \frac{-2\pi}{4}}}{N}:m=0,...,7\},&n\text{ is odd,}\\ \{\frac{1}{\sqrt{N}}e^{\frac{-2\pi}{4}}:m=0,...,3\},&n\text{ is even.}\end{cases}\] (11) We conjecture that this property holds true when \(N\) is any higher power of 2. We construct dictionary matrix using \(N\) MUB as \(\mathbf{A}=[\mathbf{U}_{1}\cdots\mathbf{U}_{N}]\). In this case, \(L=N^{2}\) and the corresponding mutual coherence \(\mu\) is \(\frac{1}{\sqrt{N}}\). In addition, when \(N\) is a power of \(2\), we can always partition \(\mathbf{A}\) into \(K\) sub-blocks with size of each sub-block \(L_{k}\) being a power of \(2\) and each \(L_{k}\geq\frac{N^{2}}{2K}\). ## IV Decoding Algorithms ### _Match and Decode Algorithm_ The received signal is modeled as \[\mathbf{y} = \mathbf{s}+\mathbf{v}, \tag{12}\] \[= \mathbf{A}\mathbf{x}+\mathbf{v},\] where \(\mathbf{v}\) is additive noise. Information bits can be retrieved by recovering the sparse signal \(\mathbf{x}\) from the observation \(\mathbf{y}\). Conventional sparse signal recovery can be done using greedy techniques [21, 22, 23] or convex programming based techniques [24] or iterative message passing techniques [25, 26]. However, with our SPARC encoding, \(\mathbf{x}\) has special structure. The non-zero entries of \(\mathbf{x}\) are from a finite alphabet constellation. In addition, for the SSE scheme, there is exactly one non-zero entry corresponding to each sub-block. Such structures need to be utilized in order to provide good error performance. Approximate message passing decoders which exploit the structure of the SPARC signal \(\mathbf{x}\) are developed in [8, 9], and their sub-block error rate asymptotically (as \(N\) and \(L\) grow to \(\infty\)) converges to zero for AWGN channels, for all rates below channel capacity. In this paper, we develop a simple greedy decoder, referred as match and decode algorithm, which utilizes the structure in the SPARC signal \(\mathbf{x}\). MAD algorithm for SSE and SFE is described in Algorithm 1. We would like to emphasize that our MAD algorithm does not need to know any noise statistics, such as its variance. MAD algorithm takes the dictionary matrix \(\mathbf{A}\), the observation \(\mathbf{y}\), sparsity level \(K\) as inputs and produce an estimate \(\hat{\mathbf{x}}^{(K)}\) of the sparse signal \(\mathbf{x}\). It is ensured that the estimate \(\hat{\mathbf{x}}^{(K)}\) (of size \(L\)) has exactly \(K\) non-zero entries from the constellation set \(\mathcal{M}\). Any sparse signal \(\hat{\mathbf{x}}\) (of size \(L\)) with at most \(K\) non-zero entries from the set \(\mathcal{M}\) can also be given as partial information to the MAD algorithm. If no partial information is available, \(\hat{\mathbf{x}}\) is set as \(\mathbf{0}\). Main computationally intensive step in MAD involves computing the correlation between the observation (or residual) and the columns of the dictionary matrix in (13), which amounts to computing the matrix multiplication \(\mathbf{A}^{*}\mathbf{y}\). 
When \(\mathbf{A}\) is constructed using Gold codes (with scaled entries \(\{\pm 1\}\), or power of 2 MUB matrices (with scaled entries \(\{\pm 1,\pm j\}\)), the matrix multiplication \(\mathbf{A}^{*}\mathbf{y}\) can be simply computed using only additions (and subtractions). We also note that, the correlation of residual with the columns of dictionary matrix in (13) needs to be computed only for the first iteration. For the subsequent iterations, from (15), we have the recursion, \(\langle\mathbf{r}^{(t+1)},\mathbf{a}_{i}\rangle=\langle\mathbf{r}^{(t)},\mathbf{a}_{i}\rangle-b _{\hat{m}}\langle\mathbf{a}_{\hat{i}},\mathbf{a}_{i}\rangle\), where \(b_{\hat{m}}\) and \(\hat{i}\) denote the symbol and the active column detected in the previous iteration. We can store the symmetric gram matrix \(\mathbf{A}^{*}\mathbf{A}\), to get the values of \(\langle\mathbf{a}_{\hat{i}},\mathbf{a}_{i}\rangle\) needed in the recursion. For power of 2 MUB matrices, based on the conjecture in Section III, the entries of the gram matrix will be \(0\) or \(1\) or from the set given in (11). ### _Performance Guarantees of MAD algorithm_ Now, we establish some properties of MAD algorithm for SPARC codes. **Theorem 1**.: _For the AWGN channel, MAD algorithm coincides with the maximum likelihood decoder of SPARC codes, when sparsity \(K=1\)._ Proof.: Maximum likelihood (ML) detector for AWGN finds the codeword which is the closest to the given observation, among all the possible codewords [27]. For \(K=1\) SPARC code, the ML detector outputs the column index \(\hat{i}\) and the modulation symbol \(\hat{b}\) as \[(\hat{i},\hat{b}) =\arg\min_{b\in\mathcal{M},1\leq i\leq L}\|\mathbf{y}-b\mathbf{a}_{i}\|^{ 2},\] \[=\arg\min_{b\in\mathcal{M},1\leq i\leq L}\|\mathbf{y}\|^{2}-2\mathfrak{ Rel}\{\langle\mathbf{y},b\mathbf{a}_{i}\rangle\}+\|b\mathbf{a}_{i}\|^{2},\] \[=\arg\max_{b\in\mathcal{M},1\leq i\leq L}\mathfrak{Rel}\{b^{*} \langle\mathbf{y},\mathbf{a}_{i}\rangle\}-\frac{|b|^{2}}{2},\] since \(\|\mathbf{a}_{i}\|^{2}=1,\forall i\). Clearly, this ML output coincides with the output of the MAD decoder (without any partial information, that is, \(\hat{\mathbf{x}}=\mathbf{0}\)). Now, we consider the recovery guarantee of the MAD decoder, in the absence of noise. Towards that, we restrict our attention to PSK constellations, \(\mathcal{M}=\{b_{1},\cdots,b_{M}\}\) with \(|b_{m}|=1,\forall m\). We define the _coherence_ of the PSK constellation as \[\gamma=\max_{i\neq m}\mathfrak{Rel}\{b_{i}^{*}b_{m}\}. \tag{16}\] Based on the above definition, the coherence \(\gamma\) for a constellation can be negative as well. Also, the coherence is not affected when a constant phase \(e^{j\theta}\) is multiplied to all the symbols of the constellation. Note that, coherence \(\gamma=-1\) for the BPSK constellation \(\{1,-1\}\) and coherence \(\gamma=0\) for the QPSK constellation \(\{1,-1,j,-j\}\) (or any other rotation of the QPSK constellation). It easily follows that, the minimum distance of the PSK constellation \(d_{\min}=\min_{i\neq m}|b_{i}-b_{m}|\) can be written in terms of its coherence as \(d_{\min}=\sqrt{2-2\gamma}\). **Theorem 2**.: _For SPARC codes with dictionary matrix having mutual coherence \(\mu\) and modulation symbols chosen from a PSK constellation having coherence \(\gamma\), the MAD decoder recovers the support and modulation symbols perfectly from the noiseless observation, if the following condition is met,_ \[K<\min\left\{\frac{1+\mu}{2\mu},\frac{1+2\mu-\gamma}{2\mu}\right\}. 
\tag{17}\] Proof.: Note that, in the first iteration, the correlation values \(p_{i,m}\) computed in (14) are identical to \(p_{i,m}=\mathfrak{Rel}\{\langle\mathbf{y},b_{m}\mathbf{a}_{i}\rangle=\mathfrak{Rel} \{b_{m}^{*}\mathbf{a}_{i}^{*}\mathbf{y}\}\). When the above conditions in (17) are met, we want to show that the metric corresponding to the the correct constellation symbol and the correct column (which participated in the linear combination to generate the given codeword as in (1)) will be higher than that of all the incorrect cases (wrong constellation symbol and/or wrong column). Without loss of generality (WLOG), let the codeword be generated using the first \(K\) columns \(\{\mathbf{a}_{1},\cdots,\mathbf{a}_{K}\}\) columns of the dictionary matrix. For some specific column \(\mathbf{a}_{\ell}\) with \(1\leq\ell\leq K\), WLOG, let the modulation symbol be \(b_{1}\in\mathcal{M}\), such that, the noiseless observation is \[\mathbf{y}=b_{1}\mathbf{a}_{\ell}+\sum_{1\leq k\leq K,k\neq\ell}\beta_{k}\mathbf{a}_{k},\] where \(\beta_{k}\)'s are arbitrary modulation symbols from \(\mathcal{M}\). The metric \(p_{\ell,1}\) corresponding to an active column with correct modulation symbol is bounded as \[p_{\ell,1} =\mathfrak{Rel}(b_{1}\mathbf{a}_{\ell}+\sum_{1\leq k\leq K,k\neq\ell} \beta_{k}\mathbf{a}_{k},b_{1}\mathbf{a}_{\ell}),\] \[=\langle b_{1}\mathbf{a}_{\ell},b_{1}\mathbf{a}_{\ell}\rangle+\mathfrak{ Ptcal}(\sum_{1\leq k\leq K,k\neq\ell}\beta_{k}\mathbf{a}_{k},b_{1}\mathbf{a}_{\ell}),\] \[=1+\mathfrak{Rel}\sum_{1\leq k\leq K,k\neq\ell}b_{1}^{*}\beta_{k }\mathbf{a}_{\ell}^{*}\mathbf{a}_{k},\] \[\geq 1-(K-1)\mu, \tag{18}\] since \(|\mathbf{a}_{\ell}^{*}\mathbf{a}_{k}|\leq\mu\) and \(|b_{1}|=|\beta_{k}|=1\). Now, the correlation corresponding to the correct column but wrong modulation symbol \(p_{\ell,m}\) with \(m\neq 1\) can be bounded as, \[p_{\ell,m} =\mathfrak{Rel}(b_{m}\mathbf{a}_{\ell}+\sum_{1\leq k\leq K,k\neq\ell }\beta_{k}\mathbf{a}_{k},b_{1}\mathbf{a}_{\ell}),\] \[=\mathfrak{Rel}\{b_{1}^{*}b_{m}\}+\mathfrak{Ptcal}\sum_{1\leq k \leq K,k\neq\ell}b_{1}^{*}\beta_{k}\mathbf{a}_{\ell}^{*}\mathbf{a}_{k},\] \[\leq\gamma+(K-1)\mu, \tag{19}\] since \(\mathfrak{Rel}\{b_{1}^{*}b_{m}\}\leq\gamma\). Similarly, the correlation corresponding to the wrong column \(p_{i,m}\) with \(i>K\) can be bounded as \[p_{i,m}\leq K\mu,\forall i>K,\forall m. \tag{20}\] Since \(1\leq\ell\leq K\) is an arbitrary active column, when (17) is met, metric of an active column with correct symbol in (18) will be higher than the metrics of all the incorrect cases (19) and (20). Hence MAD will find the correct column and symbol in the first iteration. After the cancellation of detected column, the problem boils down to detecting \(K-1\) active columns and the corresponding symbols. Since the number of active columns has decreased (\(K-1\) will also be less than the right hand side of the condition in (17)), the subsequent iterations will also be successful. For BPSK and QPSK constellations for which \(\gamma\leq 0\), condition in (17) simplifies as \(K<\frac{1+\mu}{2\mu}\). Interestingly, this recovery condition coincides with that of the orthogonal matching pursuit for \(K\)-sparse signals [23]. With MUB dictionary matrices, this recovery condition becomes \(K<1+\frac{\sqrt{N}}{2}\). Hence, when the sparsity level is of the order of \(\sqrt{N}\), greedy algorithms can give good recovery performance. 
### _Parallel MAD algorithm_ Intuitively, the first iteration of the MAD algorithm is the most error prone, since it faces the _interference_ from all the undetected columns. To improve on MAD performance, we consider a variation, referred as parallel MAD. In the first iteration, we choose \(T\) candidates for the active column, by taking the top \(T\) metrics (14), and perform MAD decoding for each of these \(T\) candidates, resulting in \(T\) different estimates for the sparse signal. Among these \(T\) estimates, we select the one with the smallest Euclidean distance to the observation, inspired by the ML decoder for white Gaussian noise. The mathematical details are described in Algorithm 2 for completeness. The PMAD decoder is similar to parallel greedy search given in [28] with the notable difference that the alphabet size is discrete and the exact sparsity level is known in the current work. ``` 1: Given the dictionary matrix \(\mathbf{A}\) and the observation vector \(\mathbf{y}\), compute \(c_{i}=\langle\mathbf{y},\mathbf{a}_{i}\rangle,\ i=1,\cdots,L\) and \(p_{i,m}=\mathfrak{Rel}\{c_{i}b_{m}^{*}\}-\frac{|b_{m}|^{2}}{2},\ b_{m}\in \mathcal{M}\). 2: Initialize parallel path index \(n=1\); Initialize \(\mathcal{D}=\emptyset\). 3: Choose a candidate for active column and the corresponding non-zero entry: \((\hat{i}_{n},\hat{m}_{n})=\arg\max_{(i\notin\mathcal{D},m)}p_{i,m}\). 4: Run MAD algorithm with inputs \((\mathbf{A},\mathbf{y},K)\) and prior information on sparse signal \(\hat{\mathbf{x}}=b_{\hat{m}_{n},c_{\hat{m}_{i}}}\). Denote the recovered sparse signal output of MAD as \(\hat{\mathbf{x}}_{n}\). 5: Update \(\mathcal{D}=\mathcal{D}\cup\hat{i}_{n}\) and \(n=n+1\); If \(n\leq T\), go back to Step 3. 6: Final output \(\hat{\mathbf{x}}=\arg\min_{\hat{\mathbf{x}}_{n};1\leq n\leq T}\|\mathbf{y}-\mathbf{A}\hat{\bm {x}}_{n}\|\). ``` **Algorithm 2** Parallel Match and Decode Algorithm ## V Application to Multi-User Channels In this Section, we discuss how the SSE and PMAD can be used in multi-user scenarios. Using the SSE scheme with sparsity level \(K\) described in Section II-A, we can support \(P\)-user multiple access channel, or \(P\)-user broadcast channel or \(P\)-user interference channel [29, 30], for any \(P\leq K\). First, we illustrate how the SSE scheme can be employed to generate the codeword of each user based on the user's information bits. As before, we partition the dictionary matrix \(\mathbf{A}\) into \(K\) sub-blocks, with sub-block \(\mathbf{A}_{k}\) having \(L_{k}\) number of columns. These \(K\) sub-blocks are divided among \(P\) users, with \(\mathcal{A}_{i}=\{\mathbf{A}_{i,1},\cdots,\mathbf{A}_{i,K_{i}}\}\subset\{\mathbf{A}_{1}, \cdots,\mathbf{A}_{K}\}\) denoting the ordered set of \(K_{i}\) sub-blocks assigned to user-\(i\). Note that \(\mathcal{A}_{i}\cap\mathcal{A}_{j}=\emptyset\) if \(i\neq j\) and \(\sum_{i=1}^{K}K_{i}=K\). The codeword for user-\(i\) is obtained as \[\mathbf{s}_{i}=\sum_{k=1}^{K_{i}}\beta_{i,k}\mathbf{a}_{i,k} \tag{21}\] where symbols \(\{\beta_{i,1},\cdots,\beta_{i,K_{i}}\}\) are chosen from \(M_{i}\)-ary constellation and the column \(\mathbf{a}_{i,k}\) is chosen from the sub-block \(\mathbf{A}_{i,k}\), for \(1\leq k\leq K_{i}\). Denoting the number of columns in \(\mathbf{A}_{i,k}\) as \(L_{i,k}\), the total number of bits that can be conveyed for user-\(i\) is \[N_{b_{i}}=K_{i}\log M_{i}+\sum_{i=1}^{K_{i}}\log L_{i,k}. 
\tag{22}\] In the multiple access channel (MAC), which is equivalent to an uplink scenario in a cellular network, the encoding is done independently by each user, which coincides with the SSE based procedure in (21). The observation at the receiver is \[\mathbf{y} = \sum_{i=1}^{P}\mathbf{s}_{i}+\mathbf{v}\ =\ \sum_{i=1}^{P}\sum_{k=1}^{K_{i}} \beta_{i,k}\mathbf{a}_{i,k}+\mathbf{v}. \tag{23}\] The decoding is done jointly at the receiver, which can be done using the MAD or PMAD algorithm, which recovers the support of the active columns \(\{\mathbf{a}_{i,k}\}\) and the corresponding modulation symbols \(\beta_{i,k}\), for each user. In the broadcast channel, which is similar to the downlink scenario in a cellular network, encoding is done jointly at the base station and the decoding is done by each user separately. The transmitter sends the sum of all the users' codewords as \(\mathbf{s}=\sum_{i=1}^{P}\mathbf{s}_{i}\). Received signal at the user-\(i\) is given by \(\mathbf{y}_{i}=\mathbf{s}+\mathbf{n}_{i}\), where \(\mathbf{n}_{i}\) is the noise at the user-\(i\). MAD or PMAD decoding can be employed by each user, which recovers the active columns present in \(\mathbf{s}\) and the corresponding modulation symbols. Hence, in this approach, users recover the information sent to the other users, in addition to their own information. If the users are grouped such that their noise statistics are similar, then their error performance will be similar. We also note that SSE can be applied in two-way relay channel, which has a multiple access phase followed by a broadcast phase. In the interference channel, there are \(P\) transmitters and \(P\) receivers. Each transmitter sends information to a corresponding intended receiver. With \(i^{th}\) transmitter generating codeword as in (21), the received signal at the \(i^{th}\) receiver is given by \[\mathbf{y}_{i}=\mathbf{s}_{i}+\sum_{j\neq i}h_{i,j}\mathbf{s}_{j}+\mathbf{n}_{i}, \tag{24}\] where \(h_{i,j}\) denotes the channel gain from \(j^{th}\) transmitter to the \(i^{th}\) receiver. Without loss of generality, we have taken \(h_{i,i}=1\). MAD or PMAD decoding employed at the \(i^{th}\) receiver recovers the codewords of all the transmitters. If \(|h_{i,j}|=1,\forall i,j\), and the noise statistics are identical across all the receivers, then the decoding performance (successful recovery of all the codewords) of all the receivers will coincide with the corresponding single user case. ## VI Simulation Results We study the performance in terms of the block error rate (BLER), also referred as codeword error rate, for the proposed encoding and decoding schemes in additive white Gaussian noise channels. For the complex MUB dictionary matrix, the non-zero entries of the sparse signal are chosen from the QPSK constellation. For real Gold code dictionary matrix, we consider BPSK constellation. When the non-zero entries in the \(K\)-sparse signal \(\mathbf{x}\) are uncorrelated, it easily follows that the expected energy of the codeword \(\mathbf{s}=\mathbf{A}\mathbf{x}\) is \(E_{s}=K\), with the columns of dictionary matrix being unit norm. Energy per bit \(E_{b}\) is obtained by dividing \(E_{s}\) by the total number of bits \(N_{b}\) conveyed by the sparse signal \(\mathbf{x}\). With \(\frac{N_{b}}{2}\) denoting the variance of the Gaussian noise per real dimension, we study the BLER versus \(E_{b}/N_{0}\) (in dB) of the proposed schemes. 
An error control code conveying \(N_{b}\) bits using \(N\) real channel uses is represented by the pair \((N,N_{b})\), with the code rate of \(\frac{N_{b}}{N}\) bits per real channel use. A complex code of length \(N\) can be represented by a real code of length \(2N\) by concatenating real and imaginary part of the code.In this paper, a complex code of length \(N\) supporting \(N_{b}\) bits of information is equivalent to a \((2N,N_{b})\) real code, with code rate of \(\frac{N_{b}}{2N}\) bits per real channel use. #### Vi-1 MAD vs. OMP Orthogonal Matching Pursuit (OMP) decoder is a well studied greedy decoder [23] for sparse signal recovery, which is similar in computational cost to that of MAD decoder. In Figure 1, we compare the BLER performance of both decoders for complex MUB dictionary of size \(64\times 4096\) and sparsity \(K=6\) giving rise to a \((128,68)\) SSE-GSPARC code. We run the OMP algorithm for \(K\) iterations and quantize the non-zero entries of the recovered \(K\)-sparse signal (obtained using least squares method) to the nearest nearest constellation points. On the other hand, MAD algorithm utilizes the finite alphabet size of the non-zero entries in every iteration, by jointly decoding the active column and the corresponding constellation point. In addition, when OMP projects the residuals onto the orthogonal complement of the detected columns, there will be a reduction of signal components from the yet-to-be detected active columns. On the other hand, MAD simply subtracts out the detected columns without affecting the yet-to-be detected active columns. Due to these reasons, the proposed MAD decoder provides better BLER performance than the OMP algorithm. In addition to the standard QPSK constellation \(\{+1,-1,+j,-j\}\), we also consider offset QPSK constellations. Specifically, the modulating symbol for \(k^{th}\) sub-block for \(k\in\{1,...,K\}\) is chosen from a rotated QPSK constellation, obtained by counter-clockwise rotation of the standard constellation by \(\frac{(k-1)\pi}{2K}\) radians. The motivation for introducing phase offset to different sub-blocks is based on the following reasoning. With \(i\) being an index of one of the active columns from the sparse signal support set \(\mathcal{S}\), consider the inner product \(\langle\mathbf{y},\mathbf{a}_{i}\rangle=\beta_{i}+\sum_{k\in\mathcal{S},k\neq i}\beta_ {k}\langle\mathbf{a}_{k},\mathbf{a}_{i}\rangle+\langle\mathbf{v},\mathbf{a}_{i}\rangle\). MAD decoder is prone to error when the net interference from other active columns has high magnitude. For the complex MUB dictionary matrix with \(N=64\), from (11), the inner product between any two non-orthogonal columns belong to the set \(\{\frac{+1}{\sqrt{N}},\frac{-1}{\sqrt{N}},\frac{+j}{\sqrt{N}},\frac{-j}{\sqrt{ N}}\}\). Due to this property, there are many possible support sets \(\mathcal{S}\), for which the interference terms can add coherently to result in a high magnitude. To mitigate this constructive addition of interfering terms, we introduce a phase offset to each sub-block of the dictionary matrix. From the results in Fig. 1, we see that MAD algorithm performs better with offset QPSK constellations. In all the remaining plots for SSE schemes with complex MUB dictionary matrix, we have used offset QPSK as the default modulation scheme. #### Vi-2 MAD vs. PMAD The performance of MAD decoder can be improved by running multiple MAD decoders in parallel and selecting the best solution based on minimum distance decoding rule. 
In Figure 2, we compare MAD decoder with PMAD decoder, for different number of parallel paths. The simulation parameters are the same as in Fig. 1, with offset QPSK constellations. We denote PMAD with \(T\) parallel paths as \(T-\)PMAD. We observe that the PMAD improves the BLER performance of MAD decoder significantly. 16-PMAD and 100-PMAD has roughly 4.5 dB and 5 dB gain respectively over MAD decoder for BLER of \(10^{-4}\). This shows that the gains from parallel search starts to diminish as we increase the number of parallel paths. This allows us to use a small number of parallel paths for our PMAD decoders. #### Iv-B3 PMAD vs. AMP Approximate Message Passing decoders have been developed for standard SPARC and modulated SPARC for random Gaussian dictionary matrices [9] and have been shown empirically to work with other dictionary matrices. In Figure 3, we compare the BLER performances of PMAD decoder with online AMP decoder [9]. We use equal power allocation for all the sub-blocks, because the power allocation techniques to improve the performance of AMP decoders do not work in the small code length \((N\leq 128)\) and low code rate \((R\approx 0.5)\) regime [11, 15]. We compare PMAD algorithm with \(T\) parallel paths with the AMP with \(T\) iterations, which is referred as \(T-\)AMP in the plot. The AMP algorithm [9] computes non-linear MMSE estimate of each entry of the sparse signal \(\boldsymbol{x}\) in each iteration. We note that the \(T-\)PMAD requires significantly less computations than \(T-\)AMP. We consider two scenarios, one with equal sub-block sizes and the other with unequal size sub-blocks. A complex MUB dictionary matrix of size \(64\times 512\) with \(K=8\) has equal size sub-blocks, resulting in a \((128,64)\) code. With complex MUB dictionary matrix of size \(64\times 4096\), running the sub-block partitioning algorithm from Section II-B with \(K=6\), we get a \((128,68)\) code with unequal sub-block sizes. Since AMP in [9] is designed for equal size sub-blocks, we use a generalization of the AMP to accommodate unequal sub-block sizes. From Fig. 3, we find that \(T\)-PMAD performs better than \(T\)-AMP, in the short block length regime. For the unequal size sub-blocks, AMP performs poorly for large values of \(E_{b}/N_{0}\). However, PMAD algorithm works well for both equal and unequal sub-block sizes. We also note that, the lower sparsity case \(K=6\) with code rate \(\frac{68}{128}\) performs better than the higher sparsity case \(K=8\) with code rate \(0.5\), emphasizing that sparsity is a key parameter for SPARC. #### Iv-B4 SSE vs. SFE In Figure 4, we study the BLER performance of the encoding schemes with and without sub-block structure, for a complex MUB dictionary matrix of size \(64\times 4096\), using 100-PMAD decoder. SSE with \(K=5\) and \(K=6\) gives \((128,58)\) and \((128,68)\) codes respectively, while SFE with \(K=5\) gives \((128,63)\) code. Since SFE transmits more bits than SSE for the same sparsity \(K\), the noise level in SFE will be smaller than that of the SSE scheme. On the other hand, the search space for each iteration of the greedy decoder for SFE will be larger than that of the SSE scheme. Due to these counteracting effects, SFE has nearly same BLER performance as SSE, at high \(E_{b}/N_{0}\) values, while achieving higher code rate. #### Iv-B5 Very short length codes In Figure 5, we GSPARC using MAD/PMAD decoding for very short lengths, with \((20,11)\) and \((20,8)\) Golay codes considered for 5G-NR [3], using ML decoding. 
Complex MUB dictionary matrix of size \(8\times 64\) Fig. 1: Comparison of OMP and MAD decoder for SSE schemes. Fig. 4: Comparison of BLER performance of SSE and SFE schemes. Fig. 3: Comparison of PMAD decoder with online-AMP decoder. Fig. 2: Effects of number of parallel paths on the BLER of PMAD decoder. with \(K=1\) gives \((16,8)\) code, for which MAD decoder is used. Complex MUB dictionary matrix of size \(16\times 257\) with SFE \(K=2\) scheme gives \((32,19)\) code, for which 16-PMAD decoder is used. We find that GSPARC codes give comparable performance to Golay codes of very short lengths. #### Vi-B6 Short length codes In Figure 6, we compare our \((127,63)\) SSE scheme (Gold code dictionary matrix of size \(127\times 128^{2}\) with \(K=5\)) with some of the existing \((128,64)\) error control codes [4]: binary LDPC codes used in the CCSDS standard, LDPC codes (base graph 2) considered for 5G-NR standard and Turbo code with 16 states. More details about these existing codes are given in [4]. SSE with PMAD decoder performs better than LDPC code from the CCSDS standard. We also note that some codes like tail-biting convolutional code with constraint length 14 [4] and polarization adjusted convolutional codes [31] perform very close to sphere packing bounds (shown in Fig. 6 with legend 'SPB') for the given code length and code rate. #### Vi-B7 Multi-user channels As explained in Section V, SSE with sparsity \(K\) can support up to \(K\) users in multi-user channels. For illustration, we consider a multiple-access channel (23). An SSE scheme with sparsity level \(K\) resulting in a \((N_{1},N_{b})\) code utilizes \(N_{1}\) real channel uses and communicates either \(\lfloor N_{b}/K\rfloor\) or \(\lceil N_{b}/K\rceil\) bits from each user, based on the optimal sub-block partitioning algorithm from Section II-B. For comparison, we consider a \(K\)-user orthogonal multiple access scheme, where each user is assigned a dedicated time/frequency resource and each user employs a single user Golay code with approximate parameters \((N_{1}/K,N_{b}/K)\). We also find the lower bound for the \(K\)-user orthogonal multiple access using sphere packing bounds for code parameters \((N_{1}/K,N_{b}/K)\). In Figure 7, we study the probability that at least one user is decoded in error. For \(K=6\) users, using SSE with Gold code dictionary matrix resulting in \((127,74)\) code (communicating 12 or 13 bits for each user) outperforms the sphere packing bounds of the orthogonal multiple access scheme using \((23,12)\) codes, and also provides higher spectral efficiency. Similar results hold true for MUB dictionary matrices as well. We see that, SSE with PMAD provides a multi-user error control coding scheme, offering significant gains over orthogonal multiple access schemes, for short block lengths. The gains can be understood from the fact that SSE encodes the information from the users over block length of \(N_{1}\), while orthogonal multiple access schemes use codes of smaller block lengths \(N_{1}/K\). SSE provides a neat way of pooling the resources of users together such that the overall error performance of all the users is improved. ## VII Conclusions In this paper, we developed two generalizations of SPARC, an SSE scheme, which allows unequal sub-block sizes and an SFE scheme, which eliminates the sub-block structure altogether. 
For both SSE and SFE schemes, we developed a greedy approach based decoder, referred as MAD algorithm and introduced a parallel greedy search mechanism to improve its performance. Using Gold codes and mutually unbiased bases to construct the dictionary matrices, we study block error rate performance in AWGN channels, for short block lengths. We showed that our proposed PMAD outperforms the AMP decoder for SPARC and performs comparably and competitively with widely used codes. We also described that SSE scheme can be used in various multi-user channel settings. In multiple access channels, we showed that SSE with PMAD decoder outperforms the sphere packing lower bounds of an orthogonal multiple access scheme, having the same spectral efficiency. Developing greedy decoders for GSPARC which work well for moderate to large block lengths can be explored Fig. 5: Comparison of BLER performance of very short length codes. Fig. 6: Comparison with existing \((128,64)\) codes. Fig. 7: Comparison of BLER performance in the multiple access channel. in a future work. Studying GSPARC in multi-user channels with asymmetric power and rate conditions can be explored in the future. ### _Indexing the ordered set of \(K\) objects out of \(L\) objects_ Given \(L\), \(K\) and \(d\), our goal is to find a good estimate for \(i\), which satisfies the condition (6). The condition can be rewritten as \[\begin{pmatrix}L\\ K\end{pmatrix}\left(1-\frac{\binom{L-i}{K}}{\binom{L}{K}}\right)\leq d<\binom{L} {K}\left(1-\frac{\binom{L-(i+1)}{K}}{\binom{L}{K}}\right),\\ \implies\frac{\binom{L-(i+1)}{K}}{\binom{L}{K}}<1-\frac{d}{ \binom{L}{K}}\leq\frac{\binom{L-i}{K}}{\binom{L}{K}}.\end{pmatrix}\] First, noting that \[\frac{L-m-p}{L-p}=1-\frac{m}{L-p}\ \ \leq 1-\frac{m}{L}\ \ =\frac{L-m}{L}, \tag{25}\] for \(p\in\{0,1,...,K-1\}\), we get the following bounds \[\begin{split}\left(\frac{L-(m+1)-(K-1)}{L-(K-1)}\right)^{K}& \leq\frac{\binom{L-(m+1)}{K}}{\binom{L}{K}},\\ \frac{\binom{L-m}{K}}{\binom{L}{K}}&\leq\left(\frac{L-m}{L }\right)^{K}\end{split} \tag{26}\] Setting \(\bar{L}=L-\frac{K-1}{2}\), we have \[\frac{\binom{L-m}{K}}{\binom{L}{K}} =\frac{(\bar{L}+\frac{K-1}{2}-m)\cdots(\bar{L}+\frac{K-1}{2}-m-( K-1))}{(\bar{L}+\frac{K-1}{2})\cdots(\bar{L}+\frac{K-1}{2}-(K-1))}\] \[=\left\{\begin{array}{ll}\frac{L-m}{L}\times\prod_{p=1}^{\bar{ K}-1}\frac{(L-m)^{2}-p^{2}}{L-p^{2}}&\text{for odd }K,\\ \prod_{p=1}^{\bar{K}}\frac{(L-m)^{2}-\left(\frac{2-m^{2}}{L-p^{2}}\right)^{2} }{L^{2}-\left(\frac{2m^{2}}{L-p^{2}}\right)^{2}}&\text{for even }K.\end{array}\right.\] For both odd and even \(K\), we have \[\left(\frac{(\bar{L}-m)^{2}-\left(\frac{K-1}{2}\right)^{2}}{\bar{L}^{2}-\left( \frac{K-1}{2}\right)^{2}}\right)^{\frac{K}{2}}\leq\frac{\binom{L-m}{K}}{\binom {L}{K}}\leq\left(\frac{\bar{L}-m}{\bar{L}}\right)^{K} \tag{27}\] Following the arguments of equation 25, it is easy to show the following inequality: \[\left(\frac{L-m-(L-1)}{L-(K-1)}\right)^{K}\leq\left(\frac{(\bar{L }-m)^{2}-\left(\frac{K-1}{2}\right)^{2}}{\bar{L}^{2}-\left(\frac{K-1}{2} \right)^{2}}\right)^{\frac{K}{2}}\] \[\leq\frac{\binom{L-m}{K}}{\binom{K}{K}}\leq\left(\frac{\bar{L}-m} {\bar{L}}\right)^{K}\leq\left(\frac{L-m}{L}\right)^{K}. \tag{28}\] Combining inequalities from (26) and (28), we get \[\left(\frac{(\bar{L}-(i+1))^{2}-\left(\frac{K-1}{2}\right)^{2}}{\bar{L}^{2}- \left(\frac{K-1}{2}\right)^{2}}\right)^{\frac{K}{2}}<1-\frac{d}{\binom{L}{K}} \leq\left(\frac{\bar{L}-i}{\bar{L}}\right)^{K},\] from which, we get the upper bound for \(i\) in (7).
2307.00455
Connecting the Dots: A Comprehensive Literature Review on Low and Medium-Voltage Cables, Fault Types, and Digital Signal Processing Techniques for Fault Location
The review begins with an exploration of acceptable cable types guided by local standards. It then investigates typical cable faults, including insulation degradation, conductor faults, and ground faults, providing insights into their characteristics, causes, and detection methods. Furthermore, the manuscript surveys the latest publications and standards on DSP techniques in fault location spanning various algorithms used. This review provides a comprehensive understanding of low and medium-voltage cables, fault types, and DSP techniques. The findings contribute to improved fault diagnosis and localization methods, facilitating more accurate and efficient cable fault management strategies
Shankar Ramharack, Sanjay Bahadoorsingh
2023-07-02T02:30:57Z
http://arxiv.org/abs/2307.00455v1
Connecting the Dots: A Comprehensive Literature Review on Low and Medium-Voltage Cables, Fault Types, and Digital Signal Processing Techniques for Fault Location ###### Abstract The review begins with an exploration of acceptable cable types guided by local standards. It then investigates typical cable faults, including insulation degradation, conductor faults, and ground faults, providing insights into their characteristics, causes, and detection methods. Furthermore, the manuscript surveys the latest publications and standards on DSP techniques in fault location spanning various algorithms used. This review provides a comprehensive understanding of low and medium-voltage cables, fault types, and DSP techniques. The findings contribute to improved fault diagnosis and localization methods, facilitating more accurate and efficient cable fault management strategies Keywords:distribution, fault location, reflectometry ## 1 Background Cable fault diagnostics play a crucial role in ensuring the safe and reliable operation of electrical power systems[1]. Low and medium-voltage cables are vital components of power distribution networks, supplying electricity to residential, commercial, and industrial consumers. However, over time, these cables can experience various types of faults, such as insulation degradation, conductor faults, and ground faults. These faults can disrupt power supply, lead to equipment failures, and pose safety risks. Identifying and understanding acceptable cable types enables engineers, contractors, and installers to make informed decisions during cable selection and installation processes. Similarly, conducting a literature review on technical publications and standards concerning typical cable faults on low and medium-voltage cables is of great significance as it is often neglected in fault location work[2]. By reviewing technical publications and standards, researchers and practitioners gain access to collective knowledge and experiences in cable fault diagnostics. This knowledge aids in developing effective maintenance strategies, reducing downtime, and improving the overall reliability of power networks. Furthermore, the literature review on modern digital signal processing (DSP) techniques used in reflectometry and cable fault location addresses the need for advanced and accurate fault detection methodologies. DSP techniques, such as time-domain reflectometry (TDR), offer powerful tools for analyzing cable faults and determining their locations[3]. By reviewing the literature on these techniques, researchers can identify the latest advancements, algorithms, and methodologies employed in fault detection and localization. This knowledge contributes to the development of more precise and efficient cable fault location systems, minimizing repair time, reducing costs, and enhancing the overall reliability of power distribution networks. ### Objectives The objectives of this work are as follows 1. To perform a literature review on types of low and medium-voltage cables that are acceptable for installations as guided by the local and international standards. 2. To perform a literature review of technical publications and standards on typical cable faults on low and medium-voltage cables. 3. To perform a literature review of modern digital signal processing techniques used in reflectometry and cable fault location. 
## 2 Low and Medium-Voltage Cables: Acceptable Types as Guided by International and Local Standards The most widely used standards guiding LV and MV Cable Installations in North America and the Caribbean are those issued by: 1. The Aluminum Association (AA) 2. American National Standards Institute (ANSI) 3. American Society for Testing and Materials (ASTM) 4. Canadian Standards Association (CSA) 5. Insulated Cable Engineers Association (ICEA) 6. National Electrical Manufacturers Association (NEMA) 7. Association of Edison Illuminating Companies (AEIC) 8. Rural Utilities Service (RUS) 9. Underwriter's Laboratories (UL) 10. National Electrical Code (NEC) Aerial cable is used occasionally for primary conductors in special situations where clearances are too close for open-wire construction or where adequate tree trimming is not practical. The type of construction more frequently used consists of covered conductors (nonshielded) supported from the messenger by insulating spacers of plastic or ceramic material[4]. The conductor insulation, usually a solid dielectric such as polyethylene, has a thickness of about 150 mils for a 15-kV class circuit and is capable of supporting momentary contacts with tree branches, birds, and animals without puncturing[5]. The conductor sizes most commonly used in underground primary distribution vary from No. 4 AWG to 1000 kcmil[6]. Four-wire main feeders may employ 3- or 4-conductor cables, but single conductor concentric-neutral cables are more popular for this purpose. The latter usually employ crosslinked polyethylene insulation, and often have a concentric neutral of one-half or one-third of the main conductor cross-sectional area. The smaller-sized cables used in lateral circuits of Underground Residential Distributions(URD) systems are nearly always single-conductor, concentric-neutral, crosslinked polyethylene-insulated, and usually directly buried in the earth. Insulation thickness is on the order of 175 mils for 15-kV-class cables and 345 mils for 35-kV class with 100% insulation level[6]. Stranded or solid aluminum conductors have virtually supplanted copper for new construction, except where existing duct sizes are restrictive. With the solid-dielectric construction, to limit voltage gradient at the surface of the conductor within acceptable limits, a minimum conductor size of No. 2 AWG is common for 15-kV-class cables, and No. 1/0 AWG for 35-kV class. Primary voltage circuits(5-35kV) use paper-insulated, lead-covered (PILC) three-conductor cables extensively. Single-conductor secondary cables with rubber insulation and neoprene jacket are common. More recently, single-conductor polyethylene-insulated(PE) cables are being used for both primary and secondary[7]. Copper conductors predominated in the past, but aluminum has nearly displaced copper in new installations, except where existing duct space is limiting. In residential and suburban areas, new underground distribution systems to serve commercial loads often employ direct-buried cables[7]; conduits may be provided in locations where subsequent excavation would be excessively expensive or inconvenient. Aluminum conductors are almost universal. For primary cables, solid dielectric insulation is used almost exclusively, with cross-linked polyethylene(XLPE) and ethylene-propylene rubber(EPR) insulations [5]. Concentric-neutral wires are common. Secondary cables in these systems generally have aluminum conductors and solid-dielectric insulation, with cross-linked polyethylene being the most common. 
The secondary neutral is usually an insulated conductor, although there is some use of bare copper neutrals. Electric supply cables are insulated with a range of materials depending on voltage ratings, type of service, installation conditions etc. The following are commonly used: 1. Rubber and rubber-like for 0 to 35kV 2. Varnished cambric for 0 to 28kV 3. Impregnated paper of the solid type for voltages up to 69kV and with pressurized gas or oil up to 345kV or higher For most distribution circuits in the 5-kV class or higher, the cables employ a shielded construction[8]. Shielding is used on the outer surface of the cable insulation or directly over the main conductor, or both. Outside shielding, often in the form of metallic tapes, metallic sheaths, or concentric wires, must be effectively grounded. The aforementioned insulation systems usually require a sheath or suitable jacket to prevent infiltration of moisture, loss of oil, gas, or impregnate and to provide protection against corrosion and electrolysis. In some cases, an armor overlay is used to provide mechanical protection. Single conductor cables are used in single-phase primary systems and frequently used in 3-phase direct buried primary systems. The 3 conductor primary cables are often used in duct systems. At the present time,solid-dielectric insulating materials such as tree retardant, cross-linked polyethylene and EPR are receiving the widest application in Underground Distribution Systems(UDS)[9, 5], both direct buried and duct systems. From the Electric distribution handbook[5], cables used in underground systems may either be concentric neutral cables or power cables(for utilities). The jacket is usually made of Linear low-density polyethylene (LLDPE), PE or Semiconductors. The insulation most used in the industry is PE, XLPE, PILC, TR-XLPE & EPR. For URD applications, aluminum is the choice of conductor. Caribbean countries and areas outside US and Europe follow similar installation practices and cable selections such as in Trinidad. During a consultation with the Trinidad and Tobago Electrical Commission (T&TEC), Shielded Polyvinyl Chloride (PVC) and shielded Cross Linked Polyethene (XLPE) cables are most used in public transmission systems, however, in private distributions, shielded EPR has been recently adopted[8]. This is supported by [10] who performed a survey of the cables used in LV and MV installations. Furthermore, the cables used by LV and MV installations usually follow the guidelines of the British Standards Institution. The standards recommend PVC Jacket, Aluminium-Armoured XLPE insulated cables with stranded copper cores. Other standards such as the IEC utilize similar cable configurations with slight differences in the installation environment guidelines, thermal requirements, and conductor sizing (British Standards Institute 2007). The same shielding and sheathing practices that are done internationally are done locally in Trinidad and Tobago per the TTS standards[11, 12] which build upon the NEC Standards. For URD applications, copper is the choice of conductor while aluminum is used for the shield. ## 3 Typical Cable Faults in Low and Medium-Voltage Cables Numerous studies have shed light on the types of faults typically found in low and medium-voltage cables, as well as their underlying causes. The authors of [13] identified insulation breakdown as a prevalent fault type, often caused by aging, thermal stress, or manufacturing defects. 
Mechanical stresses, such as bending or crushing, were found to be a significant cause of faults in medium-voltage cables [14]. Another common fault type is moisture ingress, resulting from damaged cable sheaths or inadequate sealing, as highlighted in the investigation by [15]. Furthermore, [16] discussed short circuits and open circuits as faults in low-voltage cables, which can arise due to insulation damage, conductor breakage, or loose connections. Other studies have also explored specific fault types, such as insulation material aging [17], weather-related faults [18], manufacturing defects [19], and partial discharge [20]. These findings emphasize the importance of understanding and addressing the diverse causes of cable faults to enhance the reliability and performance of low and medium-voltage cable systems. After interviewing the fault location personnel at T&TEC they revealed that the most common faults found locally are high resistance faults and single line-to-ground (SLG) faults. It was reported in [21, 22] that SLG faults and Series faults are the most common in underground power systems. Series faults are often caused by a cable layer losing continuity due to a force above ground such as a collision. A SLG fault occurs when the insulation of one or more conductors fails. These faults are often permanent faults. Intermittent faults are not considered in most fault locator designs since most transmission systems utilize monitoring equipment to briefly cut transmission and allow the fault to clear themselves. Furthermore, [23] state that up to 90% of cable faults are SLG faults and hence intermittent faults fall into a minority. Resistive faults are classified as high resistive if it is beyond 200\(\Omega\). TDR cannot locate faults greater than this but as mentioned, they are rare. Hence it is useful to limit fault resistance to 200 \(\Omega\) in the test cases. ## 4 Digital Signal Processing Techniques in Reflectometry and Fault Location The authors of [24] performed a state-of-the art review of cable fault location methods showing that the most common methods of fault distance location are impedance-based methods, differential equation-based methods and travelling wave methods. Impedance-based methods are shown to be cheap and simple to perform such as the Murray and Varley Loop method, but they are sensitive to the fault resistance. It is shown in [25, 26] that the loop methods perform accurate for open and short circuit faults, however, for extreme low and high resistance faults, the bridges are difficult to stabilize. Box's optimization algorithm is proposed in [27] to automate bridge stabilization to balance the bridges of the loop methods. The optimization methods perform very favourable in real life testing with a 0.8% error rate. it is suggested in [28] that capacitance bridges should be used for open faults, resistance bridges for short circuit faults but HV bridges should be used to locate insulation faults and high impedance faults. According to [24], differential equation models are costly and do not perform well for long lines. In addition, both ends of the line may need to be accessed requiring more manpower and resources than a single ended approach. It is reported in [29] the distributed line model, the characteristic method has the merit of being suitable for short, long, transposed and untransposed lines and can be modified for three-terminal systems. The accuracy is sensitive to the choice of the time window and limited by the sampling rate. 
This may not be practical for low-cost location for distribution lines. Travelling wave (TW) methods have also been widely mentioned in the literature as covered in [30, 26, 31, 32, 33, 34]reports the TW method performs poorly for 6-35kV networks which indicates it may not be feasible to use in this project. Measurement of TW is made more difficult when there are taps on the distribution line adding to the complexity. The problems of TW fault location(TWFL) is the attenuation of waves when sent through underground cables. TWFL can be done either single ended or on both ends. It is reported in [32] that it is possible to achieve greater accuracy with the multi-end methods compared to the traditional fault location methods. It is recommended by Megger, KEP, [26, 35, 28] that reflectometry methods be used for low resistance faults. For higher resistance faults(where fault resistance is? 200\(\Omega\)), MIM, or ICE techniques on a surge wave generator(SWG) should be used. Decay or HV-Bridge methods should be used for locating high impedance or sheath faults. For tapped cables, differential multiple impulse response methods can be used to determine the fault locations. It is suggested in [35] to use of decay methods for intermittent fault location. After meeting with T&TEC, the local fault location professionals reported that TDR is mostly used for fault location. There are many reflectometry methods used in fault location. Reflectometry methods are distinguished by their EM test signal and method of the reflectogram analysis[36]. The most common reflectometry methods are Time Domain Reflectometry (TDR), Frequency Domain Reflectometry (FDR), Time-Frequency Domain Reflectometry (TFDR) and Spectrum Time Domain Reflectometry (STDR) (Shi and Kanoun 2014). Each method is suited to a particular application. Reflectometry works well for most low and medium voltage applications[37]). Reflectometry methods work on the principle of applying an excitation signal to a cable or transmission line and analysing the reflected trace. It is a form of RADAR and relies on the change of the characteristic impedance of the line to generate a reflection. The crux of reflectometry is the reflectogram analysis. There have been several analyses of the reflectograms generated. Time domain reflectometry methods usually involve sending a step signal or pulsed waveform on a cable and sampling the input where the signal was applied for a reflection[38]. The sampled reflectogram is denoised via a matched filter or wavelet denoising algorithm then the travel time between signal inflexions are read off from a screen or graph and used to calculate the fault distance. Common methods to automate this are bubble sort to determine the wavefront points and peak detection[39]. Work has also been done on automated TDR systems which utilize reflection detection via MCU capture interrupts to time the echo for the first reflection of a travelling wave [40, 41]. There has been no work found attempting to use time series segmentation to treat the TDR problem as a CPD problem possibly due to the significant waveform distortion present on the waves. Frequency domain reflectometry is rarely seen in the literature for medium voltage applications hence it is not explored. Furthermore, it does not perform well on cables longer than 2km and expensive directional couplers are required to sample the system and obtain the reflectogram[38]. 
Time-Frequency domain reflectometry is much more common and attempts to address the shortcomings of both time and frequency domain reflectometry. In this a waveform with good time-frequency localization is incident on the cable. The reflected wave is sampled and cross-correlated with the incident signal to determine the fault location [42]. In [43], the authors model a chafe in an aircraft cable using scattering parameters [44]. The transfer function of the cable is then derived in terms of a parameter set, \(\theta\). The parameter set is estimated using a statistical method of probability inversion using the reflected trace as target and a initial parameter combination. The initial parameter is updated in a Bayesian approach until it reaches a distribution similar to the reflected trace. The final parameters will have information about the fault distance, type and dimension. Similar to [43], most TFDR work utilize the Gaussian Envelope Linear Chirp (GELC) signal as the incident signal due to its good time-frequency localization [45]. However, they differ by their means of signal processing. In [42, 46] a dictionary is created of all possible transformations of the parameterized incident signal in terms of phase shift, amplitude and frequency. The reflected signal is projected onto the dictionary to find the closest match. The closest match's time shift parameter is used to determine the fault location. Other works utilize matched filters to eliminate noise and time correlators to determine the fault distance [47]. In [48] a statistical model-based detection and frequency identification method is employed to calculate the fault distance. The GELC is used a the incident signal. A Likelihood Ratio test (LRT) is used to detect the reflection and a Hidden Markov Model Hang Over Scheme is used to avoid the LRT from cutting the tail of the GELC which can happen in cases of severe attenuation. The Hilbert transform is used to determine the instantaneous phase and consequently the signal's instantaneous frequency. The instantaneous frequency is obtained as linear combination of the GELC carrier sinusoid and the angular frequency sweep rate. Ambient noise is removed via a constrained Kalman filter (CKF). The filtered signal is the incident and reflected wavefronts which are used to calculate the distance by multiplying half the delay by the velocity of propagation (VOP). Variations on the frequency estimation and noise handling was done in [49, 50]. The test signal in STDR is a PN binary code [51] that is launched from the test device and encounters partial reflection and transmission at each impedance discontinuity in the system being tested. The STDR response is created by cross-correlating the reflections that return to the test point with a delayed copy of the event PN code. SSTDR generates a sine-like correlated reflection signature using a square- or sine-wave modulated PN code as the test signal. The reflected signal will be dispersed and attenuated if the system is frequency-dependent or lossy. Furthermore, the method is robust to noise and can be applied on a electrical apparatus [52], underwater applications [53] and power cables [37]. Changepoint detection methods in [54] have been used in RADAR and can be used to analyse the reflectogram automatically and determine the fault distance without the need for reflectogram interpreters [55] or commercial TDR equipment. 
At the time at writing, the author has not found any work done on the application of changepoint detection in time domain reflectometry for power cable applications. The crux of reflectometry-based fault location is the algorithm used. Hence, this work explores the DSP methods used to perform fault location. Furthermore, a guide to the development of a prototype that uses the method explored is shown in the Discussion. This guide will account for instrumentation noise (which can be addressed with a LPF) and recommendations on things to be done to improve the accuracy of the algorithm for the application domain. ## 5 Discussion The literature review has provided valuable insights into acceptable cable types, typical faults, and digital signal processing (DSP) techniques used in fault location for low and medium-voltage cables. The review identified that common cable types include polyethylene-insulated solid dielectric cables, concentric-neutral cables, and shielded constructions. Additionally, aluminum conductors have become the preferred choice over copper due to cost and practical considerations. Regarding typical cable faults, the review highlighted that insulation breakdown, mechanical stresses, moisture ingress, short circuits, and open circuits are among the common faults encountered in low and medium-voltage cables. Understanding the root causes of these faults is crucial for effective fault diagnostics and maintenance strategies. The literature revealed high resistance, hard faults are the most common. Furthermore, and single line-to-ground (SLG) faults are prevalent in the local power distribution systems. The literature review on DSP techniques for cable fault location revealed the prominence of impedance-based methods, differential equation-based methods, and traveling wave (TW) methods. Impedance-based methods, such as the Murray and Varley Loop method, are simple and cost-effective but sensitive to fault resistance. Differential equation models are more accurate but costly and may require access to both ends of the cable. TW methods, while widely mentioned in the literature, have limitations in accuracy and may not be practical for low-cost fault location in distribution networks. The use of modern digital signal processing techniques, such as Box's optimization algorithm, has shown promise in automating bridge stabilization and improving the accuracy of impedance-based methods. ## 6 Conclusion This comprehensive literature review has provided valuable insights into the types of low and medium-voltage cables acceptable for installations as guided by local and international standards. Typical cable faults in low and medium-voltage cables were explored, including insulation breakdown, mechanical stresses, moisture ingress, short circuits, and open circuits. Knowledge of these fault types and their underlying causes is critical for effective fault diagnostics and maintenance strategies, enabling power utilities to minimize downtime and enhance the overall reliability of power networks. The review of digital signal processing techniques used in cable fault location highlighted the prominence of impedance-based methods, differential equation-based methods, and traveling wave (TW) methods. While impedance-based methods offer simplicity and cost-effectiveness, modern DSP techniques like Box's optimization algorithm show promise in improving accuracy. However, TW methods may not be practical for low-cost fault location in distribution networks. 
The findings of this review contribute to improved fault diagnosis and localization methods. Further research and development in DSP techniques hold the potential for enhancing cable fault location systems, reducing repair time, lowering costs, and ultimately improving the overall reliability of power distribution networks. ## 7 Availability of data and material The materials supporting the manuscript can be obtained by contacting the authors. ## 8 Competing interests The authors declare no competing interests. ## 9 Funding This work was funded by the University of the West Indies St. Augustine Campus. ## 10 Authors' contributions The authors confirm contribution to the paper as follows: study conception and design: Shankar Ramharack, Sanjay Bahadoorsingh; analysis and interpretation of results: Shankar Ramharack; draft manuscript preparation: Shankar Ramharack. All authors reviewed the results and approved the final version of the manuscript. ## 11 Acknowledgements The authors would like to thank Mr. Veeresh Ramnarine for their guidance during the project. Furthermore, the authors would like to thank Mr. Anil Rambarhat and Mr. Varma Ratan for their insights into fault location within Trinidad and Tobago. Lastly, the authors would like to thank Dr. Letitia Addison for their assistance in exploring changepoint detection.
2306.17200
Residual Feature Pyramid Network for Enhancement of Vascular Patterns
The accuracy of finger vein recognition systems is degraded by low and uneven contrast between veins and their surroundings, often resulting in poor detection of vein patterns. We propose a finger-vein enhancement technique, ResFPN (Residual Feature Pyramid Network), as a generic preprocessing method agnostic to the recognition pipeline. A bottom-up pyramidal architecture using the novel Structure Detection block (SDBlock) facilitates extraction of veins of varied widths. Using a feature aggregation module (FAM), we combine these vein-structures and train the proposed ResFPN for detection of veins across scales. With enhanced presentations, our experiments indicate a reduction of up to 5% in the average recognition errors for a commonly used recognition pipeline over two publicly available datasets. These improvements persist even in the cross-dataset scenario, where the dataset used to train the ResFPN is different from the one used for recognition.
Ketan Kotwal, Sebastien Marcel
2023-06-29T09:14:42Z
http://arxiv.org/abs/2306.17200v1
# Residual Feature Pyramid Network for Enhancement of Vascular Patterns ###### Abstract The accuracy of finger vein recognition systems is degraded by low and uneven contrast between veins and their surroundings, often resulting in poor detection of vein patterns. We propose a finger-vein enhancement technique, ResFPN (_Residual Feature Pyramid Network_), as a generic preprocessing method agnostic to the recognition pipeline. A bottom-up pyramidal architecture using the novel Structure Detection block (SDBlock) facilitates extraction of veins of varied widths. Using a feature aggregation module (FAM), we combine these vein-structures and train the proposed ResFPN for detection of veins across scales. With enhanced presentations, our experiments indicate a reduction of up to 5% in the average recognition errors for a commonly used recognition pipeline over two publicly available datasets. These improvements persist even in the cross-dataset scenario, where the dataset used to train the ResFPN is different from the one used for recognition. ## 1 Introduction The use of vascular patterns as a biometric recognition trait is becoming more prevalent due to distinctive advantages such as high recognition accuracy, difficulty in spoofing, and less interference from external factors. Typically, veins of finger(s), palm, and wrist are popular biometric modalities. In this work, we consider only finger vein (FV) as the biometric modality. The reflection-based FV scanners can be constructed in a contactless manner--which makes them an attractive biometric modality offering a better user experience and alleviating hygiene concerns (that may occur in enclosure or touch-based vein scanners). The performance of an FV recognition pipeline is strongly correlated with the quality of the FV presentation acquired by the near-infrared (NIR) sensor (_i.e._ camera). These blood vessels lie beneath the skin of the subject and therefore do not always appear prominent in the acquired presentation. Figure 1 shows (see top row) some samples of FV presentations where the vein structures are not clearly visible across the region. Due to lack of contrast and uneven illumination, these presentations often suffer from poor feature extraction, and subsequently result in low and incorrect matching scores impeding the performance of the overall FV recognition system. In this work, we propose a deep learning (DL)-based technique for enhancement of vein structures in the presentations acquired in the NIR spectrum. The proposed technique is an independent module that can be plugged into an existing FV recognition pipeline at the preprocessing stage. An overall FV recognition pipeline can be built from conventional image processing techniques or it can be based on a deep convolutional neural network (CNN). Typically, in both cases, the NIR presentation is first preprocessed for cropping, resizing, and orientation correction. The conventional processing pipeline employs a feature extraction block to generate a feature descriptor (which acts as the reference or template for enrolment data), followed by the matching block that computes a similarity metric between the feature descriptor of the test sample (also known as the probe) and predefined templates. The DL-based FV recognition pipelines usually combine both blocks by modeling the recognition task as an \(n\)-class classification problem. A cascade of convolutional and pooling layers learns vein-related features which are then transformed into class probabilities by one or more fully connected layers.
For any pipeline, conventional or DL-based, efficient extraction or learning of relevant features from input presentations is the key to build a highly accurate recognition system. Popular feature extraction methods such as repeated line tracking [3], wide line detector [4], and maximum curvature (MC) [5] are based on computation of the local gradient or cross-sectional profile of pixels as the first step. The efficacy of these quantities (gradient or profile) is directly proportional to the contrast in the image. The deep CNNs, as well, are susceptible to distortion in the quality of input images such as noise, blur, and contrast [6; 7; 8]. This essentially reinforces the importance of good contrast (between vein structures and surroundings) in designing a highly accurate FV recognition pipeline. It may be noted that the publicly available FV datasets are relatively much smaller (in the range of 2000-3000 total presentations), furthermore, only a fraction of entire dataset is used for training purposes. Since training deep CNNs with small amount of data is challenging, improving the quality of the input presentations- by enhancing the vein structures- can be of significant assistance in training as well as inference. From the existing literature, it appears that the problem of enhancement of vein structures, particularly using learning-based methods, has not received much attention despite its apparent usefulness. Using the presentations as captured by the NIR sensor without the aforementioned enhancement has two serious shortcomings: (1) Due to variable width of blood vessels and variable local contrast (because of presence of tissues around vessels), the feature extraction may detect fewer vein-structures from the presentation. The subtle vein structures- that may carry subject-specific discriminatory information- may remain undetected. Alternatively, one has to extensively experiment with parameters of feature extractor or CNN to obtain good recognition accuracy. (2) Since the parameters of pipeline are tuned for specific dataset or sensor, the FV recognition system can suffer from poor generalization across different datasets. In case of change of NIR sensor, which is a quite common real life use-case, one has to rely on expensive and time-consuming solution of capturing new dataset to tune the parameters or train the CNN. To address these concerns, we propose a deep CNN-based method for enhancement of vein structures from the FV presentations acquired in NIR channel. Our network accepts an NIR presentation in the form of single channel image; and generates an image consisting of vein-_like_ structures. This result is combined with the input to obtain the enhanced presentation which can then be processed by any FV recognition pipeline. Samples of enhanced images obtained from our work are shown in (the bottom row of) Figure 1 where the appearance of veins is much sharper, clearer, and visibly darker as compared to their unprocessed/ original versions. With good contrast around veins, these presentations are less sensitive to the parameters of feature extraction method or model. We train our model using the vein annotations (manually generated binary labels) as the target. As vein structures exhibit variable width or thickness, the choice of spatial resolution (or scale) is crucial in designing the enhancement network. Our network consists of structure detection blocks at multiple resolutions akin to the feature pyramid networks (FPNs)[9]. 
The vein structures (or their parts) detected at each level are combined through a feature aggregation module (FAM) to get a fused output. We design a structure detection block (SDBlock) as the basic unit of our network--that detects vein structures and also generates a set of feature maps, at reduced resolution, for processing by subsequent blocks. Through residual architecture, our network is able to extract FV structures across scales and fuse those to obtain an enhanced FV presentation. The contributions of our work can be summarized as follows: * We have designed a fully convolutional Residual FPN (**ResFPN**) for enhancement of vein structures. This architecture, consisting of only 600\(k\) parameters, efficiently detects vein structures of varied thickness without need for any specific tuning. Figure 1: The top row shows examples of original (acquired by sensor) FV presentations: two each from SDUMLA [1] and UTFVP [2] datasets. The corresponding images from the bottom row are the results of the proposed vein enhancement technique. * We have introduced a novel unit for structure detection, SDBlock. Through the SDBlock, we are able to achieve two objectives simultaneously: extraction of vein structures and generation of input for next layers/blocks. * Through indirect assessment of work, we demonstrate the efficacy of the proposed enhancement technique: the average error rate of FV recognition performance on publicly available datasets reduced upto 5% after enhancing the presentations by ResFPN. This improvement has been validated in intra- and cross-dataset testing scenarios. In Section 2, we briefly describe existing works related to FV enhancement. The proposed ResFPN is described in Section 3. We provide experimental results along with details of datasets and evaluations measures in Section 4. Finally, Section 5 provides concluding remarks. ## 2 Related Work Kumar and Zhou [10] generated an average background image for a sub-block of input FV presentation, followed by local histogram equalization. A combination of edge preserving filtering, elliptic highpass filtering and histogram equalization was proposed by Pi _et al._[11]. The contrast limited adaptive histogram equalization (CLAHE) has been considered towards enhancement of vein region by several works [10; 12; 13]. The use of Gabor wavelets at various scales and orientations for enhancement of venous regions has been proposed by Yang and Shi [14]. They have also devised a scattering removal method for better visibility of the acquired presentation. Methods in [15; 16] also advocate the use of Gabor filters for enhancement of FV presentations. Peng _et al._ proposed a non-local means (NLM)-based technique for enhancement of veins in the NIR presentation [17]. Their work is based on the availability of several local patches with similar vein structures. These multiple patches have been exploited to enhance the vein-structures. A recent work by Zhang _et al._ combines the guided filter and tri-Gaussian model for FV image enhancement [18]. All aforementioned approaches for enhancement of vein regions are based on conventional image processing techniques. Despite success of deep CNNs in enhancement or restoration of images, very few works have studied DL-based approaches for this task. In [19], a fully convolutional network (FCN) has been developed to enhance the vein patterns, more specifically to recover the missing segments within vein patterns. 
The training data were created by randomly cropping some pixels from the FV images, and the corresponding FCN was trained using MSE (mean square error) loss between the output of the FCN and original image. Recently, Bros _et al._ proposed a deep autoencoder-based method for enhancement of FV presentations [20]. They used presentations enhanced with vein-annotations to train their network by reducing the MSE loss. ## 3 ResFPN for Vein Enhancement In this section, we first describe the architecture of the proposed ResFPN for FV enhancement along with our rationale in designing its building blocks. Subsequently we provide details of training procedure and formulation of loss function. ### Network Architecture Learning features at all scales from a combination of bottom-up pathway, top-down pathway, and lateral connections- also known as feature pyramid network (FPN)- has been shown to be efficient generic feature extractor [9; 21]. When analyzed locally, vein pattern is a structure of variable thickness; and extracting such a structure would require learning a set of convolutional filters at different spatial resolutions or scales. Based on the idea of FPN [9], we construct a multi-level bottom-up pathway to extract vein features at different scales. Figure 1(a) shows the overall architecture of the proposed ResFPN. We call each unit of this pathway as the _structure detection_ block (SDBlock)--which will be described later in this section. At each successive level, the SDBlock extracts vein-_like_ structures from larger receptive fields as the spatial dimensions (resolution) of the corresponding feature maps gradually reduce. Vein structures so-obtained from each SDBlock are then combined by a feature aggregation module (FAM) which normalizes them in terms of resolution and number of feature maps. As each SDBlock extracts only a part of overall vein pattern (depending on its width or thickness), we combine the normalized outputs of FAM into a single channel representation of the extracted structures. The vein-enhanced presentation is obtained by a linear combination of the output of our network and the original input presentation. #### Structure Detection Block (SDBlock): The architecture of SDBlock- the fundamental unit for extraction of vein-like structures- is shown in Figure 1(b). Detection of thin and subtle vein-structures is accomplished by learning a set of convolutional kernels, followed by a non-linear activation such as ReLU. (In Figure 1(b) corresponding convolutional and ReLU layers are represented as \(C_{F}\) and \(R_{F}\), respectively.) The size of kernels can be calculated by analyzing the nominal width of vein structures at original resolution. We employ a stride of 2 across the \(C_{F}\) layer which implicitly reduces the spatial dimensions, while no explicit spatial-level feature pooling is used. The outputs of \(R_{F}\) are structure features (\(\mathbf{s}_{L}\)) extracted by the \(L\)-th SDBlock. Given the nature of vein structures, the feature extraction process is akin to learning a set of _bandpass_ filters. The output of such filters strongly suppresses or removes the information content beyond their effective bandwidth. Therefore, using these outputs (\(\mathbf{s}_{L}\)) directly for detection of structures (that are predominantly present in the possibly suppressed frequency bands) is likely to render ineffective results. Therefore, we propose to implement the shortcut (using the ResNet terminology) by adding the input of the SDBlock to the output of \(R_{F}\). 
The input is passed through a convolutional layer \(C_{I}\) to align its dimensions (spatial dimensions and number of feature maps) to those of structure features, \(\mathbf{s}_{L}\). After the addition of shortcut, the corresponding output (\(\mathbf{x}_{L+1}\)) is normalized at batch level (\(\mathrm{BN}\)) which may then be fed to the next SDBlock. If \(\mathbf{x}_{L}\) is the input to the SDBlock, which could be the FV presentation (\(\mathbf{I}\)) or feature maps generated by previous SDBlock, the functioning of the \(L\)-th SDBlock is summarized below. \[\mathbf{s}_{L}=R_{F}\big{(}C_{F}(\mathbf{x}_{L})\big{)} \tag{1}\] \[\mathbf{x}_{L+1}=\mathrm{BN}\big{(}C_{I}(\mathbf{x}_{L})+\mathbf{ s}_{L}\big{)} \tag{2}\] The SDBlock, thus, accepts feature maps (or the input presentation), and generates two outputs: (1) the residual structure features- to be processed by the FAM for output, and (2) normalized feature maps- to be processed by the SDBlock at next level. The process of detection of \(\mathbf{s}_{L}\) features operates in different frequency bands for each SDBlock. The proposed architecture simplifies these objectives using shortcut connections: the residual component is trained to learn structures in the feature maps; while the combined/ summed component, boosted with detected features, is suitable for similar processing at the next scale. #### Feature Aggregation Module (FAM): The FAM receives structure features, \(\mathbf{s}_{L}\), from each SDBlock; and as the first step normalizes them through upsampling and \(1\times 1\) convolutions. For each SDBlock, the structure features are computed on successively reduced scale (spatial resolution) of feature maps. We use nearest neighbor-based interpolation to upsample the structure features to the scale of original input presentations. Thus, no learning parameters are involved at upsampling stage. Using \(1\times 1\) convolutions, we convert the feature maps of each SDBlock into the same channel-dimension, say \(n_{\mathrm{ch}}\), and refer to them as \(\widehat{\mathbf{s}}_{L}\). Figure 2: Architectures of the proposed finger vein enhancement technique: (a) ResFPN and (b) SDBlock. The blue dotted lines in (a) represent FAM. As each \(\widehat{\mathbf{s}}_{L}\) is upsampled to the resolution of input presentation, the upsampling factor of \(\up_{L}\) is determined accordingly. If the network consists of \(k\) SDBlocks, we obtain a composite feature map with \(k\,n_{\mathrm{ch}}\) channels whose spatial dimensions are same as that of the input presentation. This composite feature map, \(\mathbf{f}_{\mathrm{comp}}\), represents the aggregation of vein features learnt across multiple scales. We fuse the individual feature maps of \(\mathbf{f}_{\mathrm{comp}}\) into a single channel output, \(\widehat{\mathbf{y}}\), using two layers of convolutional layers with an intermittent ReLU activation. The final output, \(\mathbf{y}\), is obtained through sigmoidal activation of \(\widehat{\mathbf{y}}\). The functions of the FAM are summarized below. 
\[\widehat{\mathbf{s}}_{L}=C_{L}\big{(}\mathrm{up}_{L}(\mathbf{s}_{L})\big{)} \tag{3}\] \[\widehat{\mathbf{y}}=C_{2}\Big{(}R_{1}\big{(}C_{1}(\texttt{concat}\{\widehat{\mathbf{s}}_{L}\})\big{)}\Big{)} \tag{4}\] \[\mathbf{y}=\texttt{Sigmoid}(\widehat{\mathbf{y}})\] The enhanced presentation, \(\mathbf{I}_{E}\), is obtained by a linear combination of the output, \(\mathbf{y}\), and the input presentation, \(\mathbf{I}\), using a predefined weight \(\alpha\in(0,1)\) as \(\mathbf{I}_{E}=\alpha\mathbf{y}+(1-\alpha)\mathbf{I}\). ### Loss Function We formulate the problem of detection of vein structures as a binary classification problem that assigns to each pixel a probability of being (part of) a vein. The loss function, therefore, is defined as the binary cross entropy (BCE) between the vein-annotations (a binary image with vein markings) and the output, \(\mathbf{y}\), of the ResFPN. The outputs of each SDBlock, post dimensional normalization, are expected to have extracted parts of the vein pattern. Therefore, we also propose to calculate the loss over each of the normalized feature maps, \(\widehat{\mathbf{s}}_{L}\). These feature maps are passed through a sigmoidal activation, the BCE loss is computed for each of the \(n_{\mathrm{ch}}\) feature maps, and the losses are then averaged to yield a scalar value. The overall loss function, \(\mathcal{L}\), is defined as the summation of the losses computed over each of the \(L\) levels and the loss computed on the final output. If we denote the vein-annotations as \(\mathbf{y}_{\mathrm{target}}\), then the expression for the overall loss function is provided by Equation 5. \[\mathcal{L}=\mathcal{L}_{\mathrm{BCE}}(\mathbf{y}_{\mathrm{target}},\,\mathbf{y})+\sum_{k=1}^{L}\mathcal{L}_{\mathrm{BCE}}(\mathbf{y}_{\mathrm{target}},\,\texttt{Sigmoid}(\widehat{\mathbf{s}}_{k})) \tag{5}\] ### Training Procedure The small size of vein datasets and the cumbersome task of manually annotating vein structures drastically limit the scope of training large deep networks. In addition to designing a deep network with relatively few parameters, we have incorporated data augmentation by flipping each presentation along the horizontal and vertical axes. Each input presentation generates 4 samples (2 by horizontal flip and 2 by vertical flip) which are then shuffled during training. Note that each presentation is flipped to create \(4\times\) data, contrary to typical augmentation strategies where either original or flipped data are considered (flipping takes place randomly). The input presentations are rescaled to a fixed size (\(320\times 240\) in our case) to ensure consistency across different datasets. The vein-annotations, acting as targets, were also processed in the same manner. For training the ResFPN, we have chosen the Adam optimizer with a learning rate of 1.0e-4. To generate the enhanced presentation, we have used \(\alpha=0.10\) to combine the vein-structures with the input. ## 4 Experiments and Results We begin this section with details of the FV datasets and the protocols designed for our experiments. Since there are no direct methods to assess the performance of enhancement, we have considered indirect assessment of our work by measuring the difference in the performance of overall FV recognition without and with the application of our enhancement technique. We employ a conventional FV recognition pipeline that consists of preprocessing functions (cropping, orientation correction, resizing, etc.), followed by feature extraction using the Maximum Curvature (MC) technique [5].
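Before turning to the matching step, the following is a minimal PyTorch-style sketch of the enhancement module described above: one SDBlock per Eqs. (1)-(2), FAM-style fusion per Eqs. (3)-(4), the multi-level BCE loss of Eq. (5), and the final blending with \(\alpha=0.10\). It is not the authors' released implementation; the channel counts, kernel sizes, and number of levels are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDBlock(nn.Module):
    """One structure-detection block: s_L = ReLU(C_F(x_L)); x_{L+1} = BN(C_I(x_L) + s_L)."""
    def __init__(self, in_ch, out_ch, k=5):
        super().__init__()
        self.conv_f = nn.Conv2d(in_ch, out_ch, k, stride=2, padding=k // 2)  # C_F with stride 2
        self.conv_i = nn.Conv2d(in_ch, out_ch, 1, stride=2)                  # C_I aligns shortcut dims
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        s = F.relu(self.conv_f(x))              # structure features s_L, Eq. (1)
        x_next = self.bn(self.conv_i(x) + s)    # input to the next level, Eq. (2)
        return s, x_next

class ResFPNSketch(nn.Module):
    """Bottom-up pyramid of SDBlocks with FAM-style fusion into a single-channel vein map."""
    def __init__(self, levels=(16, 32, 64), n_ch=8):
        super().__init__()
        chans = (1,) + tuple(levels)
        self.blocks = nn.ModuleList(SDBlock(chans[i], chans[i + 1]) for i in range(len(levels)))
        self.lateral = nn.ModuleList(nn.Conv2d(c, n_ch, 1) for c in levels)   # 1x1 convs of the FAM
        self.fuse = nn.Sequential(nn.Conv2d(n_ch * len(levels), n_ch, 3, padding=1),
                                  nn.ReLU(),
                                  nn.Conv2d(n_ch, 1, 3, padding=1))

    def forward(self, x):
        size, feats, cur = x.shape[-2:], [], x
        for blk, lat in zip(self.blocks, self.lateral):
            s, cur = blk(cur)
            feats.append(F.interpolate(lat(s), size=size, mode="nearest"))    # normalize resolution
        y_hat = self.fuse(torch.cat(feats, dim=1))                            # Eq. (4)
        return torch.sigmoid(y_hat), feats         # fused vein map y and per-level maps for Eq. (5)

def multi_level_loss(y, level_maps, target):
    """BCE on the fused output plus BCE averaged over the channels of each level (Eq. 5)."""
    loss = F.binary_cross_entropy(y, target)
    for s in level_maps:
        loss = loss + F.binary_cross_entropy(torch.sigmoid(s),
                                             target.expand(-1, s.shape[1], -1, -1))
    return loss

# Toy usage: enhanced presentation I_E = alpha * y + (1 - alpha) * I with alpha = 0.10
model = ResFPNSketch()
img = torch.rand(2, 1, 240, 320)                            # single-channel NIR presentations
target = (torch.rand(2, 1, 240, 320) > 0.5).float()         # placeholder vein annotations
y, level_maps = model(img)
enhanced = 0.10 * y + 0.90 * img
multi_level_loss(y, level_maps, target).backward()
```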
The Miura Matching technique [3] is used to compute the similarity or matching score between the probe and model. We calculate the performance of FV recognition using the measures described in Section 4.2. Then we have provided results of our experiments on conventional FV recognition pipeline. The python code to reproduce the experimental results will be released publicly.1 ### Datasets and Protocols For experiments, we have used two publicly available datasets: SDUMLA [1] and UTFVP [2]. The SDUMLA dataset consists of FV images of 6 fingers (3 finger of each hand) from 106 individuals. This collection has been repeated 6 times (called as sessions) to obtain a total of 3,816 FV presentations with \(320\times 240\) pixels in size. As we consider each finger as a separate entity for our experiments, the SDUMLA dataset is considered to have \(106\times 6=636\) clients. It should be also noted that vein annotations are available only for session-I. We require this dataset for two tasks: (1) to train and validate the ResFPN for enhancement; and (2) to validate the overall FV recognition pipeline. The first task requires a split of presentations to train the CNN, and to validate its performance over training epochs. The second task requires two disjoint sets of data: one to obtain score-related thresholds (dev), and another to evaluate the performance of FV recognition using these score thresholds (eval). In each subset, a further split of samples is required to enroll (_i.e._, to build models), and samples to probe. We have created a _Nom_ (Normal Operative Mode) protocol where both tasks and their subtasks are allocated samples without any overlap of samples or clients. The data from session-I has been used for training and validating the ResFPN by splitting in the ratio of 0.8:0.2. Thus, 508 FV presentations from session-I of SDUMLA were used to train the ResFPN, while remaining 128 presentations were used to evaluate the performance of the ResFPN over each training epoch. Hereafter we do not use the presentations from first 80% clients as these have been _seen_ by the network. The remaining 20% data is split into equal halves for dev (development set) and eval (evaluation or test set). In either case, the presentations from sessions II and III are used for enrollment and those from sessions IV, V and VI are used for probing. The protocol is summarized in Table 1. The UTFVP dataset consists of 6 fingers (3 for each hand) from 60 individuals captured twice in 2 sessions. The dataset, thus, consists of 1,440 FV presentations with \(672\times 380\) pixels. Considering each finger as a separate identity, we have a total of \(60\times 6=360\) unique fingers in the UTFVP dataset. For experiments, we consider the _Nom_ (Normal \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Identities & \multicolumn{3}{c|}{Sessions} \\ \cline{2-3} & I & II / III & IV / V / VI \\ \hline 1 & A training preprocessing & \multicolumn{1}{c|}{unused} \\ \cline{2-3} 508 (80\%) & & C & D \\ 572 (10\%) & B validation preprocessing & development development problem & \multicolumn{1}{c|}{development probes} \\ \cline{2-3} 636 (10\%) & & E & F \\ & & evaluation environment & \multicolumn{1}{c|}{evaluation probes} \\ \hline \end{tabular} \end{table} Table 1: Experimental _Nom_ protocol for the SDUMLA database. Figure 3: Receiver Operating Characteristics (ROC) curve and score histogram for the FV recognition on the Nom protocol of the SDUMLA dataset. 
Operative Mode) protocol which is similar to the one implemented for the SDUMLA dataset.2 Here, the unique fingers from the first 10 clients are considered towards training the ResFPN. Due to the small size of the training set, we do not split it further for validation; rather, the performance of the model is evaluated on the training data itself (no cross-validation for ResFPN). FV presentations from clients 11-28 constitute the dev set, and the remaining presentations from clients 29-60 are included in the eval set. We omit further details of this protocol for brevity. Footnote 2: The details of Nom protocol as devised by Idiap Research Institute: [https://www.idiap.ch/software/bob/docs/bob/bob.db.utfwp/master](https://www.idiap.ch/software/bob/docs/bob/bob.db.utfwp/master) ### Evaluation Measures We have reported the performance of the overall FV recognition pipeline using False Match Rate (FMR) and False Non-Match Rate (FNMR). The FMR is the ratio of the number of impostor attempts incorrectly classified as genuine matches to the total number of impostor attempts. The FNMR is defined as the percentage of genuine matches that are incorrectly rejected. We used the equal error rate (EER) on the dev set to compute the score threshold, where FMR \(\approx\) FNMR. The Half-Total Error Rate (HTER)- the average of FMR and FNMR on the eval set- is also reported. ### Results **Baselines:** The recognition performances on the eval sets of the SDUMLA and UTFVP datasets without applying the proposed enhancement technique are considered as the baselines for each dataset. For the SDUMLA dataset, we obtained 12.1% EER on its dev set, and 13.4% HTER on the eval set. For the UTFVP dataset, these numbers were 1.3% and 2.4%, respectively. The results are summarized in Table 2. The Receiver Operating Characteristics (ROC) plots for the SDUMLA and UTFVP baselines are shown in Figures 3 and 4, respectively (indicated by blue lines). **Experiments on SDUMLA dataset:** The ResFPN trained on (the train set of) the SDUMLA dataset was used to enhance the presentations of the dev and eval sets of the SDUMLA dataset. This intra-dataset experiment, however, does not have any overlapped samples or clients across partitions. On enhanced FV presentations, we obtained an EER of 7.2% on the dev set, where 4,140 matches out of 57,132 were incorrectly classified. For the eval set, the HTER was 8.4% with 4,343 incorrect results out of 52,272 matches. The reduction in the overall classification error on the dev as well as eval set is around 5% after applying the vein-enhancement at the preprocessing stage. The number of falsely matched impostors reduced from 5,922 to 4,309 (_i.e._, nearly 27% less) on the eval set of SDUMLA. This improvement is particularly important since it was observed on the subset of the data that was unseen by the ResFPN and FV recognition system. For the cross-dataset testing, we have enhanced the FV presentations from SDUMLA using the ResFPN trained on the UTFVP dataset. Compared to the baseline, we observed an average improvement of 3% on the dev set for this experiment. For the eval set, the number of falsely matched impostors reduced by nearly 500 samples, and the number of incorrectly rejected genuine matches reduced to 34 from 61. In terms of HTER, the use of vein-enhancement resulted in an improvement of 4.9% over the baseline. For both experiments, the performance measures are provided in Table 2 and ROC plots are shown in Figure 3.
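(For reference, the FMR, FNMR, EER threshold, and HTER defined in Section 4.2 can be computed from arrays of genuine and impostor scores as in the following minimal NumPy sketch; the score arrays below are synthetic placeholders, not actual Miura-matching outputs.)

```python
import numpy as np

def fmr_fnmr(genuine, impostor, thr):
    """Scores at or above the threshold are accepted as genuine matches."""
    fmr = np.mean(impostor >= thr)     # impostor attempts incorrectly accepted
    fnmr = np.mean(genuine < thr)      # genuine matches incorrectly rejected
    return fmr, fnmr

def eer_threshold(genuine, impostor, n_grid=501):
    """Threshold on the dev set where FMR and FNMR are (approximately) equal."""
    grid = np.quantile(np.concatenate([genuine, impostor]), np.linspace(0.0, 1.0, n_grid))
    gaps = []
    for thr in grid:
        fmr, fnmr = fmr_fnmr(genuine, impostor, thr)
        gaps.append(abs(fmr - fnmr))
    return grid[int(np.argmin(gaps))]

# Synthetic dev/eval similarity scores standing in for Miura-matching outputs
rng = np.random.default_rng(0)
dev_gen, dev_imp = rng.normal(0.6, 0.1, 400), rng.normal(0.3, 0.1, 50_000)
eva_gen, eva_imp = rng.normal(0.6, 0.1, 400), rng.normal(0.3, 0.1, 50_000)

thr = eer_threshold(dev_gen, dev_imp)           # operating point chosen on the dev set
fmr, fnmr = fmr_fnmr(eva_gen, eva_imp, thr)     # applied to the eval set
print(f"threshold={thr:.3f}  FMR={fmr:.3%}  FNMR={fnmr:.3%}  HTER={(fmr + fnmr) / 2:.3%}")
```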
It may be observed from the ROCs that the performance of the FV recognition using enhanced presentations is consistently better than the baseline (without enhancement) over a complete range of FMR. This relative improvement is highlighted even more on the ROC of eval set at lower values of FMR. For enhanced presentations, the score histograms of the eval set \begin{table} \begin{tabular}{|l||l||l||l|l||l|l|} \hline **Measure** & \multicolumn{2}{l||}{**Baseline (No Enhancement)**} & \multicolumn{2}{l||}{**Enhanced with ResFPN (SDUMIA)**} & \multicolumn{2}{l|}{**Enhanced with ResFPN (UTFVP)**} \\ \cline{2-7} & dev & eval & dev & eval & dev & eval \\ \hline \hline \multicolumn{7}{|l|}{**Test dataset: SDUMIA**} \\ \hline **FMR** & 12.1 (6856/56718) & 11.4 (5922/51876) & 7.2 (4110/56718) & 8.3 (4309/51876) & 9.2 (5206/56718) & 10.4 (5401/51876) \\ **FNMR** & 12.1 (50/414) & 15.4 (61/396) & 7.2 (30/414) & 8.6 (34/396) & 9.2 (38/414) & 8.6 (34/396) \\ **HTER** & 12.1 & 13.4 & 7.2 & 8.4 & 9.2 & 9.5 \\ \hline \hline \multicolumn{7}{|l|}{**Test dataset: UTFVP**} \\ \hline **FMR** & 1.2 (274/23112) & 1.1 (807/73344) & 0.5 (107/23112) & 0.3 (209/73344) & 0.5 (107/23112) & 0.3 (218/73344) \\ **FNMR** & 1.4 (3/216) & 3.6 (14/384) & 0.5 (1/216) & 2.3 (9/384) & 0.5 (1/216) & 2.9 (11/384) \\ **HTER** & 1.3 & 2.4 & 0.5 & 1.3 & 0.5 & 1.6 \\ \hline \end{tabular} \end{table} Table 2: Performance evaluation of the proposed ResFPN for FV enhancement on the SDUMIA and UTFVP datasets along with baselines. All measure rates are in %. The numbers in parenthesis indicate the number of incorrectly classified samples for total samples in the given class. (Figure (c)c) indicate a better separation between scores of both classes, and also lowered mean and lesser variance of the scores of the impostor comparisons. **Experiments on UTFVP dataset:** The Nom protocol of the UTFVP dataset comprises 73,344 impostor comparisons and 384 genuine comparisons on the eval set. When the presentations were enhanced using the proposed ResFPN (trained on the SDUMLA, _i.e._ cross-dataset), 209 impostor comparisons were incorrectly classified as genuine. This number is approximately 1/4-th of the same metric obtained for non-enhanced version of same presentations. On the dev set, we observed 40% reduction for this metric with respect to the baseline. The FNMR and, thus, HTER on both sets of the UTFVP dataset also improved by 0.8-1.3% when the performance of FV recognition was evaluated in the cross-dataset scenario as detailed in Table 2. Interestingly, when the FV presentations were enhanced using ResFPN trained on the other (disjoint) partition of UTFVP, we observed the improvement, in terms of FMR/FNMR, to be similar to the aforementioned cross-dataset experiment. The total number of misclassifications on the dev set of the UTFVP reduced from 277 (in baseline) to 108 for both experiments of vein enhancement. This improvement was even better for the eval set where misclassifications reduced from 821 to 218-229 after enhancement of the input. While it may appear that the ResFPN trained on the subset of UTFVP has performed relatively poorer than the network trained on the SDUMLA dataset, it may be noted that the train set of UTFVP consists of only 388 presentations, which is much smaller than its SDUMLA counterpart. The ROC plots for the dev set are near-perfect for both enhancement experiments as indicated by almost horizontal curves in Figure (a)a. 
While the performance of the baseline experiment slowly degrades for FMR \(<10^{-3}\), the recognition of the enhanced presentations remains consistently accurate. On the eval set, one can observe that the improvement in FV recognition, brought by the ResFPN, is similar for models trained on SDUMLA as well as UTFVP. Figure 4c shows the overall increase in the genuine scores of the enhanced presentations, which improves the separability of genuine comparisons from the impostor attempts. ## 5 Conclusions In this work, we have proposed a ResFPN (Residual Feature Pyramid Network) for enhancement of vascular patterns in the FV presentations acquired in NIR. This network can be integrated into a standard recognition pipeline as a part of the preprocessing module. With its SDBlock and FAM architectures, the proposed network is able to detect vein-structures at various scales and combine them efficiently to generate an enhanced presentation. With the use of enhanced data, the performance of the recognition system has improved in terms of FMR, FNMR, and HTER over different datasets, as demonstrated by our results. Thus, the resultant recognition systems are more accurate and secure. We have introduced a novel network architecture for detection of vein-structures. Further work in this direction mainly includes better generalization across a variety of recognition methods, and efficiently processing size-independent presentations. Figure 4: Receiver Operating Characteristics (ROC) curve and score histogram for the FV recognition on the Nom protocol of the UTFVP dataset. ## Acknowledgement The authors would like to thank InnoSuisse for the project CANDY, and the Swiss Center for Biometrics Research and Testing for their support.
2310.13780
A Modular Framework for Implicit 3D-0D Coupling in Cardiac Mechanics
In numerical simulations of cardiac mechanics, coupling the heart to a model of the circulatory system is essential for capturing physiological cardiac behavior. A popular and efficient technique is to use an electrical circuit analogy, known as a lumped parameter network or zero-dimensional (0D) fluid model, to represent blood flow throughout the cardiovascular system. Due to the strong physical interaction between the heart and the blood circulation, developing accurate and efficient numerical coupling methods remains an active area of research. In this work, we present a modular framework for implicitly coupling three-dimensional (3D) finite element simulations of cardiac mechanics to 0D models of blood circulation. The framework is modular in that the circulation model can be modified independently of the 3D finite element solver, and vice versa. The numerical scheme builds upon a previous work that combines 3D blood flow models with 0D circulation models (3D fluid - 0D fluid). Here, we extend it to couple 3D cardiac tissue mechanics models with 0D circulation models (3D structure - 0D fluid), showing that both mathematical problems can be solved within a unified coupling scheme. The effectiveness, temporal convergence, and computational cost of the algorithm are assessed through multiple examples relevant to the cardiovascular modeling community. Importantly, in an idealized left ventricle example, we show that the coupled model yields physiological pressure-volume loops and naturally recapitulates the isovolumic contraction and relaxation phases of the cardiac cycle without any additional numerical techniques. Furthermore, we provide a new derivation of the scheme inspired by the Approximate Newton Method of Chan (1985), explaining how the proposed numerical scheme combines the stability of monolithic approaches with the modularity and flexibility of partitioned approaches.
Aaron L. Brown, Matteo Salvador, Lei Shi, Martin R. Pfaller, Zinan Hu, Kaitlin E. Harold, Tzung Hsiai, Vijay Vedula, Alison L. Marsden
2023-10-20T19:25:19Z
http://arxiv.org/abs/2310.13780v1
# A Modular Framework for Implicit 3D-0D Coupling in Cardiac Mechanics ###### Abstract In numerical simulations of cardiac mechanics, coupling the heart to a model of the circulatory system is essential for capturing physiological cardiac behavior. A popular and efficient technique is to use an electrical circuit analogy, known as a lumped parameter network or zero-dimensional (0D) fluid model, to represent blood flow throughout the cardiovascular system. Due to the strong _physical_ interaction between the heart and the blood circulation, developing accurate and efficient _numerical_ coupling methods remains an active area of research. In this work, we present a modular framework for implicitly coupling three-dimensional (3D) finite element simulations of cardiac mechanics to 0D models of blood circulation. The framework is modular in that the circulation model can be modified independently of the 3D finite element solver, and vice versa. The numerical scheme builds upon a previous work that combines 3D blood flow models with 0D circulation models (3D fluid - 0D fluid). Here, we extend it to couple 3D cardiac tissue mechanics models with 0D circulation models (3D structure - 0D fluid), showing that both mathematical problems can be solved within a unified coupling scheme. The effectiveness, temporal convergence, and computational cost of the algorithm are assessed through multiple examples relevant to the cardiovascular modeling community. Importantly, in an idealized left ventricle example, we show that the coupled model yields physiological pressure-volume loops and naturally recapitulates the isovolumic contraction and relaxation phases of the cardiac cycle without any additional numerical techniques. Furthermore, we provide a new derivation of the scheme inspired by the Approximate Newton Method of Chan (1985), explaining how the proposed numerical scheme combines the stability of monolithic approaches with the modularity and flexibility of partitioned approaches. Cardiovascular modeling cardiac mechanics 3D-0D coupling multi-domain modeling Approximate Newton Method ## 1 Introduction Numerical simulations have long been used to investigate the cardiovascular system in both health and disease [1]. These efforts have primarily applied computational fluid dynamics (CFD) to study blood flow in the heart and vasculature [2, 3, 4, 5, 6, 7], computational solid dynamics (CSD) to simulate tissue mechanics in the heart and vasculature [8, 9, 10, 11, 12], and fluid-structure interaction (FSI) for coupled problems [13, 14, 15, 16]. Because accounting for the entire 3D circulatory system is typically infeasible due to limited imaging domains and a vast range of scales (micro- to macro-vessels), it is common to model parts of this system with a lumped parameter network (LPN), which can be thought of as a 0D model of blood flow. This treats blood flow in the circulatory system analogously to the flow of current in an electrical circuit [17] and allows one to quantify bulk quantities - pressure and flow rate - at various locations in the system, at a fraction of the cost of fully-resolved 3D CFD simulations. Representing some parts of the cardiovascular system - for example, the heart or specific blood vessels - with 3D structural and/or fluid models, while modeling the remainder using a 0D LPN constitutes a multi-domain approach [18, 19]. The 0D LPN acts as a boundary condition on the 3D model that recapitulates physiological effects not captured by other, simpler conditions (e.g., zero-pressure). 
Such a 3D-0D coupled problem is the focus of this work. Developing accurate and efficient numerical methods for 3D-0D coupling remains an active area of research [20, 21, 22, 23]. Previous works have coupled 3D blood flow in large vessels to 0D models of the downstream vasculature [18, 19, 17], while other groups have coupled 3D finite element models of the heart to 0D models of the systemic and pulmonary circulation [24, 25, 21, 22, 26]. While these two problems, 3D fluid - 0D fluid and 3D structure - 0D fluid, are related, to the best of our knowledge, none have previously treated them in a unified manner. Prior works have taken a variety of approaches to solving the coupled problem, which can be broadly categorized into monolithic [18, 20, 21] and partitioned [27, 25, 22, 23]. Monolithic schemes are robust and generally exhibit better convergence properties, but are not conducive to modularity. We define modular implementations as those in which the 3D and 0D equations are solved separately by independent codes optimized for their respective problems, and those codes exchange information as needed to couple the two sets of equations. Partitioned approaches are typically modular, but may suffer from numerical stability issues. For the problem of coupling 3D heart models to 0D circulation models, partitioned schemes suffer from a particular issue known as the balloon dilemma [22], which originates from the cardiac valves. During the two isovolume phases of the cardiac cycle, both the inlet and outlet valves of the left ventricle (LV), for example, are closed, and the LV volume is nearly constant, while the LV pressure increases or decreases greatly due to the contraction or relaxation of the heart muscle. In this situation, partitioned schemes that alternate between structure and fluid solvers typically fail because the structure solver is not aware of the constant-volume constraint. Previous works have avoided this by choosing monolithic approaches, or by using special iterative methods, time-staggered schemes, or additional stabilization terms. In [19], the authors developed a hybrid approach to the 3D fluid - 0D fluid coupling problem, incorporating the advantages of monolithic and partitioned approaches. This method was implemented in the open source multiphysics finite element solver svFSI ([https://github.com/SimVascular/svFSI](https://github.com/SimVascular/svFSI)) [28] and has been used extensively in blood flow simulations where the solution in the 3D domain is strongly influenced by the surrounding vascular system [29, 30, 31]. In this work, we describe a modular numerical scheme to implicitly couple 3D fluid and/or structural mechanics models to 0D LPNs of the cardiovascular system. The algorithm was originally described in [19] for only the 3D fluid - 0D fluid problem. Here, we extend it to solve the 3D structure - 0D fluid problem, showing that these two problems can be treated under a unified coupling framework. Applying this coupling to an idealized left ventricle model, we demonstrate that the method produces a realistic pressure-volume loop and naturally captures the isovolume cardiac phases and the opening and closing of valves without additional numerical treatment to solve the balloon dilemma. We further derive the coupling scheme as a modification to the monolithic Newton approach, inspired by the Approximate Newton Method (ANM) of Chan [32], revealing a firm mathematical foundation.
This connection to ANM also makes clear how the present coupling retains the robustness of a monolithic approach within a modular implementation like a partitioned approach. The modularity greatly improves usability, allowing the user to modify the 0D LPN independently of the 3D solver and vice versa. The paper is organized as follows. In Section 2, the proposed coupling framework is derived. In Section 3, we leverage our numerical scheme in three different test cases, including an ellipsoidal LV coupled to an open-loop LPN, a spherical shell inflated through a limit point, and a pulmonary arterial model coupled to a closed-loop LPN. We also provide preliminary results on the convergence and computational cost of our method. In Section 4, we consider our method in relation to recent works and discuss limitations and future directions. Finally, in Section 5, we summarize our findings with respect to the proposed scheme. ## 2 Methods In this section, we derive the proposed coupling framework. The resulting equations are the same as those given in [19, 33], but a generalized mathematical derivation, inspired by ANM [32] and applicable to both 3D fluid and 3D structure problems, is provided here. First, we state the governing equations for the 3D and 0D systems, then we describe how the two systems are mathematically coupled, and finally, we explain how to solve the coupled problem in a modular manner. In the following, the minor differences in the equations when considering a 3D fluid versus a 3D structure are highlighted. ### 3D mechanical model: fluid or structure We first state the governing equations and numerical formulation for an incompressible and Newtonian fluid on a fixed 3D domain [19], which models blood flow in large blood vessels (Fig. 1 left). Specifically, the Navier-Stokes equations, consisting of the momentum and continuity equations, as well as the Newtonian constitutive model, read \[\rho\frac{\partial\mathbf{u}}{\partial t}+\rho\mathbf{u}\cdot \nabla\mathbf{u}-\nabla\cdot\boldsymbol{\sigma}-\mathbf{f}=\mathbf{0}, \tag{1}\] \[\nabla\cdot\mathbf{u}=0,\] (2) \[\boldsymbol{\sigma}=-p\mathbf{I}+\mu(\nabla\mathbf{u}+\nabla \mathbf{u}^{T}), \tag{3}\] with boundary conditions \[\mathbf{u}=\mathbf{g},\ \ \mathbf{x}\in\Gamma_{g}, \tag{4}\] \[\boldsymbol{\sigma}\cdot\mathbf{n}=\mathbf{h},\ \ \mathbf{x}\in \Gamma_{h}, \tag{5}\] and initial conditions \[\mathbf{u}(t=0)=\mathbf{u}_{0}, \tag{6}\] \[p(t=0)=p_{0}, \tag{7}\] with position vector \(\mathbf{x}\), time \(t\), density \(\rho\), velocity \(\mathbf{u}\), Cauchy stress tensor \(\boldsymbol{\sigma}\), pressure \(p\), dynamic viscosity \(\mu\), body force \(\mathbf{f}\), which we assume to be zero, and surface normal vector \(\mathbf{n}\). Eq. (4) is a Dirichlet boundary condition with prescribed velocity \(\mathbf{g}\) on \(\Gamma_{g}\). Similarly, Eq. (5) is a Neumann boundary condition with prescribed traction \(\mathbf{h}\) on \(\Gamma_{h}\). We may write these equations in abstract form as \[\begin{cases}\mathcal{P}^{3D,fluid}(\mathbf{u},p,\mathbf{x},t)=\mathbf{0},\\ \text{Boundary conditions},\\ \text{Initial conditions}.\end{cases} \tag{8}\] Figure 1: Left: An idealized geometry of a section of a blood vessel is given as an example 3D fluid domain. Right: An idealized geometry of the LV of the heart is given as an example 3D structure domain. 
The dynamics in each are described by standard governing partial differential equations (PDEs), and on both, we may define Dirichlet or Neumann boundary conditions, or a combination of both. In this work, the PDEs are spatially discretized using the finite element method. Following [19], these equations are discretized in space using a stabilized (variational multiscale) finite element formulation and in time using the generalized-\(\alpha\) method. This yields the nonlinear residual equation at timestep \(n+1\) \[\mathbf{R}^{3D,fluid}(\dot{\mathbf{U}}_{n+1},\mathbf{\Pi}_{n+1})=\mathbf{0}, \tag{9}\] to be solved for \(\dot{\mathbf{U}}_{n+1}\) and \(\mathbf{\Pi}_{n+1}\), the vectors of nodal accelerations and nodal pressures at the next timestep \(n+1\), respectively. The residual is also a function of \(\dot{\mathbf{U}}_{n}\) and \(\mathbf{\Pi}_{n}\), but these are assumed to be known, and thus we do not explicitly write the functional dependence on them. This equation is solved using Newton's method, which in turn requires solving the following linear system at each Newton iteration \(k\) \[\begin{bmatrix}\ddot{\mathbf{K}}&\mathbf{G}\\ \mathbf{D}&\mathbf{L}\end{bmatrix}_{n+1}^{(k)}\begin{bmatrix}\Delta\dot{\mathbf{U}}_{n+1}^{(k)}\\ \Delta\mathbf{\Pi}_{n+1}^{(k)}\end{bmatrix}=-\begin{bmatrix}\mathbf{R}^{3D,fluid}_{m}\\ \mathbf{R}^{3D,fluid}_{c}\end{bmatrix}_{n+1}^{(k)}. \tag{10}\] \(\mathbf{R}^{3D,fluid}_{m}\) is the residual associated with momentum balance Eq. (1), while \(\mathbf{R}^{3D,fluid}_{c}\) is the residual associated with mass continuity Eq. (2). \(\ddot{\mathbf{K}},\mathbf{G},\mathbf{D},\mathbf{L}\) are blocks of the tangent or stiffness matrix, and \(\Delta\dot{\mathbf{U}}_{n+1}^{(k)}\) and \(\Delta\mathbf{\Pi}_{n+1}^{(k)}\) are the Newton increments in nodal accelerations and pressures, respectively. The notation \(\begin{bmatrix}\cdot\end{bmatrix}_{n+1}^{(k)}\) indicates that terms inside the brackets are evaluated at \(\dot{\mathbf{U}}_{n+1}^{(k)}\) and \(\mathbf{\Pi}_{n+1}^{(k)}\). The solutions are updated at each Newton iteration until convergence according to \[\dot{\mathbf{U}}_{n+1}^{(k+1)}=\dot{\mathbf{U}}_{n+1}^{(k)}+\Delta\dot{\mathbf{U}}_{n+1}^{(k)}, \tag{11}\] \[\mathbf{\Pi}_{n+1}^{(k+1)}=\mathbf{\Pi}_{n+1}^{(k)}+\Delta\mathbf{\Pi}_{n+1}^{(k)}. \tag{12}\] In cardiovascular biomechanics modeling, we are not only interested in blood flow, but also in the dynamics of the tissues surrounding the blood, notably the heart (Fig. 1 right). The deformation of these tissues is governed by the equations of finite deformation elastodynamics. Specifically, we may state the Cauchy momentum equation in Lagrangian form \[\rho\frac{D\mathbf{u}}{Dt}-\nabla\cdot\boldsymbol{\sigma}-\mathbf{f}=\mathbf{0}, \tag{13}\] where \(\frac{D}{Dt}\) denotes the material derivative. As with the fluid equations, \(\mathbf{u}\) is the velocity, \(\boldsymbol{\sigma}\) is the Cauchy stress tensor, and \(\mathbf{f}\) is a body force, which we assume to be zero in this work. In our finite element solver, the structural problem is solved in the reference configuration, in which the relevant stress measure is the second Piola-Kirchhoff stress \[\mathbf{S}=J\mathbf{F}^{-1}\boldsymbol{\sigma}\mathbf{F}^{-T}, \tag{14}\] where \(\mathbf{F}\) is the deformation gradient tensor and \(J=\det\mathbf{F}\) is the Jacobian.
For a hyperelastic material described by a strain energy density function \(\psi(\mathbf{F})\), we have \[\mathbf{S}=\frac{\partial\psi}{\partial\mathbf{E}}, \tag{15}\] where \(\mathbf{E}=\frac{1}{2}(\mathbf{C}-\mathbf{I})\) is the Green-Lagrange strain tensor and \(\mathbf{C}=\mathbf{F}^{T}\mathbf{F}\) is the right Cauchy-Green tensor. These equations are augmented with boundary and initial conditions so that we may write the structural dynamics problem in abstract form \[\begin{cases}\mathcal{P}^{3D,struct}(\mathbf{u},\mathbf{x},t)=\mathbf{0},\\ \text{Boundary conditions},\\ \text{Initial conditions}.\end{cases} \tag{16}\] These equations are solved using similar methods to those for fluid flow (i.e., finite element method and generalized-\(\alpha\) method) [20], leading to an analogous nonlinear system of equation to be solved at each timestep \[\mathbf{R}^{3D,struct}(\dot{\mathbf{U}}_{n+1})=\mathbf{0}. \tag{17}\] Usually, nodal displacements are chosen as the structural unknowns after time discretization, but in our implementation, we instead choose nodal accelerations. This is an arbitrary choice that allows the structure problem to be treated similarly to the fluid problem, but it is not necessary for the present coupling framework. The system is solved using Newton's method, \[\left[\mathbf{K}\right]_{n+1}^{(k)}\left[\Delta\dot{\mathbf{U}}_{n+1}^{(k)} \right]=-\left[\mathbf{R}^{3D,struct}\right]_{n+1}^{(k)}, \tag{18}\] where \(k\) again indicates the Newton iteration. In the remainder of the paper, we consider the general 3D problem \[\begin{cases}\mathcal{P}^{3D}(\mathbf{\phi},\mathbf{x},t)=\mathbf{0},\\ \text{Boundary conditions},\\ \text{Initial conditions},\end{cases} \tag{19}\] where \(\mathcal{P}^{3D}\) may represent either the 3D fluid or 3D structure PDE. \(\mathbf{\phi}\) are the 3D variables, which include velocity or velocity and pressure, depending on the physics. After space-time discretization, we obtain the general 3D residual equation as \[\mathbf{R}^{3D}(\mathbf{\Phi}_{n+1})=\mathbf{0}. \tag{20}\] \(\mathbf{\Phi}_{n+1}\) is the state vector of the 3D system, where, for the fluid, \(\mathbf{\Phi}_{n+1}=\begin{bmatrix}\dot{\mathbf{U}}_{n+1}\\ \mathbf{\Pi}_{n+1}\end{bmatrix}\), while for the structure, \(\mathbf{\Phi}_{n+1}=\dot{\mathbf{U}}_{n+1}\). Newton's method to solve this nonlinear system gives the general 3D linear system \[\left[\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{\Phi}_{n+1}}\right]_{n+1}^{( k)}\left[\Delta\mathbf{\Phi}_{n+1}^{(k)}\right]=-\left[\mathbf{R}^{3D}\right]_{n+1}^ {(k)}. \tag{21}\] ### 0D circulation model Blood flow throughout the circulatory system is modeled using an LPN [20, 22, 21], also called a 0D fluid model. Fig. 2 shows an example LPN that will be used later in Section 3.1. LPN models are typically combinations of resistors \(R\), which model the viscous resistance of vessels to blood flow, inductors \(L\), which account for the inertia of blood, and capacitors \(C\), which model the compliance of blood vessels. In addition, diodes are used to represent the heart valves. Many LPN models of the vasculature have been used in the literature, ranging from simple resistance or resistance-capacitance models of arteries to extensive closed-loop networks representing the entire circulatory system [19, 22, 34]. 
An LPN can be analyzed using Kirchhoff's first law for an electrical circuit, which leads to a general representation as a system of differential-algebraic equations (DAEs) [35], \[\frac{d\mathbf{y}}{dt}=\mathbf{f}(\mathbf{y},\mathbf{z},t), \tag{22}\] \[\mathbf{g}(\mathbf{y},\mathbf{z},t)=\mathbf{0}, \tag{23}\] with initial conditions \[\mathbf{y}(t=0)=\mathbf{y}_{0}, \tag{24}\] Figure 2: An example of a 0D circulation model or LPN, which treats blood flow through the body like the flow of current through an electrical circuit. The LPN is forced by prescribed flow rates \(\mathbf{q}\) at the Dirichlet boundary nodes \(\gamma_{g}=\{\gamma_{g,1}\}\) and prescribed pressures \(\mathbf{p}\) at the Neumann boundary nodes \(\gamma_{h}=\{\gamma_{h,1},\gamma_{h,2}\}\), both of which are denoted by red circles. where \(\mathbf{y}\) contains differential variables determined by the differential equations Eq. (22), \(\mathbf{z}\) contains algebraic variables determined by the algebraic equations Eq. (23), and \(t\) is time. Both \(\mathbf{y}\) and \(\mathbf{z}\) contain pressures and flow rates at nodes and branches, respectively, and may also contain other variables, such as the cross-sectional area of a vessel or the state of a valve. \(\mathbf{f}\) and \(\mathbf{g}\) are potentially nonlinear functions of \(\mathbf{y}\), \(\mathbf{z}\), and \(t\). As with the 3D model, we may define boundary conditions for the 0D-LPN model. Let \(\gamma_{g}\) denote the set of Dirichlet boundary nodes on which we prescribe flows \(\mathbf{q}\). Let \(\gamma_{h}\) denote the set of Neumann boundary nodes on which we prescribe pressures \(\mathbf{p}\). Fig. 2 gives an example LPN with these boundary nodes shown in red. The boundary flow rates and pressures are not boundary conditions in a strict sense because there is no notion of length in a 0D fluid model. Instead, they act as forcing terms directly in the 0D equations [17] \[\frac{d\mathbf{y}}{dt}=\mathbf{f}(\mathbf{y},\mathbf{z},t,\mathbf{q},\mathbf{p}), \tag{25}\] \[\mathbf{g}(\mathbf{y},\mathbf{z},t,\mathbf{q},\mathbf{p})=\mathbf{0}. \tag{26}\] \(\mathbf{y}\) and \(\mathbf{z}\) can often be considered together, so we define the combined 0D state vector \(\mathbf{w}=[\mathbf{y},\mathbf{z}]^{T}\). Analogous to the 3D system, the 0D equations are written in the following abstract form \[\begin{cases}\mathcal{P}^{0D}(\mathbf{w},t,\mathbf{q},\mathbf{p})=\mathbf{0},\\ \text{Initial conditions}.\end{cases} \tag{27}\] In this work, we integrate the 0D system with a 4th-order Runge-Kutta (RK4) scheme (Appendix A). We first apply RK4 to integrate the differential variables \(\mathbf{y}\) using Eq. (25) from timestep \(n\) to \(n+1\), then determine the algebraic variables \(\mathbf{z}\) with Eq. (26) using the updated differential variables. This yields a system of algebraic equations to be solved at timestep \(n+1\) \[\mathbf{R}^{0D}(\mathbf{w}_{n+1})=\mathbf{0}. \tag{28}\] If the time-stepping scheme is explicit, as in this work, this system can be solved directly (i.e., in one iteration). If the scheme is implicit and the DAE system is nonlinear in \(\mathbf{w}\), this system can be solved by Newton's method. ### The coupled problem So far, we have individually discussed the numerical treatment of the 3D mechanical model and the 0D circulation model. In this section, we describe how these two models are mathematically coupled. The 3D Neumann boundary is split into coupled and uncoupled parts, \(\Gamma_{h}=\Gamma_{h_{c}}\cup\Gamma_{h_{u}}\).
In general, there may be multiple distinct coupled Neumann boundaries, so that \(\Gamma_{h_{c}}=\Gamma_{h_{c}}^{(1)}\cup\Gamma_{h_{c}}^{(2)}\cup\ldots\cup\Gamma_{h_{c}}^{(n^{cBC})}\), where \(n^{cBC}\) is the number of coupled boundaries. In Fig. 3, these are the outflow boundaries of the aorta model and the endocardial surfaces of the biventricular model (i.e., the inner surfaces of the heart muscle in contact with the blood). Similarly, the set of 0D Dirichlet nodes is split into coupled and uncoupled parts, \(\gamma_{g}=\gamma_{g_{c}}\cup\gamma_{g_{u}}\). The set of coupled Dirichlet nodes may be written \(\gamma_{g_{c}}=\gamma_{g_{c}}^{(1)}\cup\gamma_{g_{c}}^{(2)}\cup\ldots\cup\gamma_{g_{c}}^{(n^{cBC})}\). Each 0D coupled Dirichlet node \(\gamma_{g_{c}}^{(i)}\) is associated with a 3D coupled Neumann boundary \(\Gamma_{h_{c}}^{(i)}\), \(i\in\{1,\ldots,n^{cBC}\}\), as shown in Fig. 3. With these definitions in place, we may state the mathematical coupling between the 3D and 0D domains. For the 3D, we impose a spatially uniform pressure \(P_{i}\) on the coupled 3D boundary \(\Gamma_{h_{c}}^{(i)}\), where the value of \(P_{i}\) is taken as the pressure at the corresponding coupled node \(\gamma_{g_{c}}^{(i)}\) of the 0D model. Stated mathematically, \[P_{i}=(\mathbb{P}\mathbf{w})_{i}\quad\text{on}\ \Gamma_{h_{c}}^{(i)}, \tag{29}\] where \(\mathbb{P}\) is a matrix that selects the appropriate components from \(\mathbf{w}\). For example, if \(\mathbf{w}\) has 5 components, but only the second and fourth components represent pressures at coupled nodes 1 and 2, then \[\mathbb{P}=\begin{bmatrix}0&1&0&0&0\\ 0&0&0&1&0\end{bmatrix}. \tag{30}\] Note that the spatially uniform pressure assumption is made in several other coupling approaches [20, 22, 21]. Analogously, for the 0D, we impose a flow rate \(Q_{i}\) at the coupled 0D node \(\gamma_{g_{c}}^{(i)}\), where the value of \(Q_{i}\) is taken as the velocity flux through the corresponding coupled 3D boundary \(\Gamma_{h_{c}}^{(i)}\) of the 3D model. Stated mathematically, \[Q_{i}=\int_{\Gamma_{h_{c}}^{(i)}}\mathbf{u}\cdot\mathbf{n}d\Gamma, \tag{31}\] where \(\mathbf{n}\) is the outward surface normal vector. This definition of flow rate should be clear in the context of a 3D fluid. For a 3D structure, however, in order to define a flow rate \(Q_{i}\), we must restrict our attention to 3D structures that enclose some volume of blood \(V_{i}\), most commonly a chamber of the heart. Then, we may define the flow rate as \(Q_{i}=-\frac{dV_{i}}{dt}\). In Appendix B, using the Reynolds Transport Theorem, it is shown that Eq. (31) is equally valid for such a 3D structure, provided \(\Gamma_{h_{c}}^{(i)}\) is a surface that closes the volume of interest \(V_{i}\) (see Section 2.6 if the surface is not closed) and the integral is taken over \(\Gamma_{h_{c}}^{(i)}\) in the deformed configuration. It is this observation that permits uniform treatment of 3D fluid and structure. On a practical note, in the finite element setting, the flow rate integral is computed from the 3D degrees of freedom as follows: \[Q_{i}=\int_{\Gamma_{h_{c}}^{(i)}}\mathbf{u}\cdot\mathbf{n}d\Gamma=\sum_{A}\int_{\Gamma_{h_{c}}^{(i)}}N_{A}(\mathbf{U})_{A}\cdot\mathbf{n}d\Gamma, \tag{32}\] where \((\mathbf{U})_{A}\) is the velocity of node \(A\) of the finite element model and \(N_{A}(\mathbf{x})\) is the associated shape function in the finite element formulation. 
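To make the discrete flux in Eq. (32) concrete, the sketch below evaluates it on a triangulated coupled surface using centroid quadrature. This is an illustrative stand-alone routine, not the finite element implementation used in our solver; it assumes linear interpolation on flat triangles, for which centroid quadrature integrates the normal velocity component exactly.

```python
import numpy as np

def flow_rate(coords, tris, vel):
    """Approximate Q = integral of u . n over a triangulated coupled surface.

    coords : (n_nodes, 3) nodal coordinates of the surface mesh
    tris   : (n_tris, 3) triangle connectivity (node indices, consistently oriented)
    vel    : (n_nodes, 3) nodal velocities on the surface
    Centroid quadrature is exact for linear interpolation on flat triangles.
    """
    Q = 0.0
    for a, b, c in tris:
        x0, x1, x2 = coords[a], coords[b], coords[c]
        n_area = 0.5 * np.cross(x1 - x0, x2 - x0)   # area-weighted normal from the orientation
        u_mean = (vel[a] + vel[b] + vel[c]) / 3.0
        Q += float(u_mean @ n_area)
    return Q
```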
Note also that in the time-discrete setting, the nodal velocities are obtained from the nodal accelerations by the generalized-\(\alpha\) method expression \[\mathbf{U}_{n+1}=\mathbf{U}_{n}+\Delta t\dot{\mathbf{U}}_{n}+\gamma\Delta t( \dot{\mathbf{U}}_{n+1}-\dot{\mathbf{U}}_{n}), \tag{33}\] where \(\Delta t\) is the timestep size and \(\gamma\) is a parameter of the generalized-\(\alpha\) method, not to be confused with the 0D Dirichlet and Neumann boundary node sets \(\gamma_{g}\) and \(\gamma_{h}\). Depending on the physical formulation (structure or fluid), \(\dot{\mathbf{U}}\) is either precisely \(\mathbf{\Phi}\) or a component of \(\mathbf{\Phi}\). Figure 3: Left: Coupling between a 3D fluid and 0D fluid. The 3D fluid is an aorta model taken from the Vascular Model Repository ([https://www.vascularmodel.com/](https://www.vascularmodel.com/)). Right: Coupling between a 3D structure and 0D fluid. The 3D structure is a biventricular model obtained from patient MRI data. Both coupling problems are treated identically. Coupled Neumann surfaces \(\Gamma_{h_{c}}^{(i)}\) on the 3D models are highlighted in yellow. Coupled Dirichlet nodes \(\gamma_{g_{c}}^{(i)}\) on the 0D model are shown in green. Along each \(\Gamma_{h_{c}}^{(i)}\) - \(\gamma_{g_{c}}^{(i)}\) connection is an associated exchange of flow rate \(Q_{i}\) and pressure \(P_{i}\). Note that in addition to coupled boundaries, the 3D model will generally have additional uncoupled boundaries on which one may prescribe uncoupled Dirichlet and/or Neumann boundary conditions, and likewise the 0D model will generally have additional uncoupled nodes on which one may prescribe uncoupled Dirichlet and/or Neumann boundary forcings. The coupled problem in the time and space continuous domain may be summarized in the following abstract manner \[\begin{cases}\mathcal{P}^{0D}(\mathbf{w},t,[\mathbf{q}_{u},\mathbf{Q}],\mathbf{p} )=\mathbf{0},\\ \text{Initial conditions},\\ \mathcal{P}^{3D}(\boldsymbol{\phi},\mathbf{x},t)=\mathbf{0},\\ \text{Initial conditions},\\ \text{Uncoupled boundary conditions},\\ \boldsymbol{\sigma}\cdot\mathbf{n}=-P_{i}\mathbf{n},\text{ on }\Gamma_{h_{e}}^{(i)},i\in\{1, \ldots,n^{eBC}\},\\ \\ P_{i}(t)=(\mathbb{P}\mathbf{w}(t))_{i},\\ Q_{i}(t)=\int_{\Gamma_{h_{e}}^{(i)}}\mathbf{u}(t)\cdot\mathbf{n}(t)d\Gamma, \end{cases} \tag{34}\] where the 0D Dirichlet forcing term \(\mathbf{q}\) is split into an uncoupled component \(\mathbf{q}_{u}\), which are prescribed flow rates on the 0D model, and a coupled component \(\mathbf{Q}\), which is obtained from the 3D model (i.e., \(\mathbf{q}=[\mathbf{q}_{u},\mathbf{Q}]\)). Similarly, the 3D boundary conditions are split into an uncoupled component (Dirichlet or Neumann), which is prescribed, and coupled pressure boundary conditions with magnitude \(P_{i}\), which are obtained from the 0D model. The expressions for \(P_{i}\) and \(Q_{i}\), given in Eqs. (29) and (31) and restated here, provide the coupling conditions between 0D and 3D. While the values of the uncoupled boundary conditions for both 3D and 0D are generally prescribed, the values of the coupled pressure and flow boundary conditions, \(P_{i}\) and \(Q_{i}\), are unknown and must be determined as part of the solution to the coupled problem. 
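In practice, the bookkeeping behind Eqs. (29)-(31) amounts to a small amount of index management. The hypothetical sketch below builds the selection matrix \(\mathbb{P}\) from a list of coupled pressure locations in \(\mathbf{w}\) (mirroring the example of Eq. (30)) and indicates the two directions of the data exchange; all numerical values are placeholders.

```python
import numpy as np

def selection_matrix(n_state, pressure_indices):
    # Build a matrix that extracts the coupled pressures from the 0D state vector w.
    Pmat = np.zeros((len(pressure_indices), n_state))
    for row, idx in enumerate(pressure_indices):
        Pmat[row, idx] = 1.0
    return Pmat

# Hypothetical case mirroring Eq. (30): a 5-component 0D state whose second and
# fourth components (0-based indices 1 and 3) are the pressures at coupled nodes 1 and 2.
w = np.array([2.1, 9800.0, 0.3, 10100.0, 5.0])   # placeholder 0D state
Pmat = selection_matrix(5, [1, 3])
P_coupled = Pmat @ w    # pressures sent to the corresponding 3D coupled Neumann boundaries

# In the other direction, flow rates computed on the 3D side (one per coupled boundary,
# e.g., with a surface-flux routine like the one sketched above) are collected and sent
# to the coupled 0D Dirichlet nodes.
Q_coupled = np.array([80.0, 45.0])               # placeholder flow rates
```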
After applying RK4 time discretization to the 0D system and applying generalized-\(\alpha\) time discretization and finite element spatial discretization to the 3D system, the problem reduces to solving the following coupled nonlinear equations at each timestep \(n+1\), \[\begin{cases}\mathbf{R}^{0D}(\mathbf{w}_{n+1},\mathbf{Q}_{n+1}(\mathbf{\Phi}_{n+1}))=\mathbf{0},\\ \mathbf{R}^{3D}(\mathbf{\Phi}_{n+1},\mathbf{P}_{n+1}(\mathbf{w}_{n+1}))=\mathbf{0},\end{cases} \tag{35}\] where, due to the coupling between 3D and 0D, the 0D residual \(\mathbf{R}^{0D}\) is a function of the 3D state vector \(\mathbf{\Phi}_{n+1}\) through the interface flow rates \(\mathbf{Q}_{n+1}\), and similarly the 3D residual \(\mathbf{R}^{3D}\) is a function of the 0D state vector \(\mathbf{w}_{n+1}\) through the interface pressures \(\mathbf{P}_{n+1}\). ### Solving the coupled problem The method to solve the system Eq. (35) is inspired by ANM [32], with modifications suggested in [36] and [37]. The coupled problem is solved in a modular manner, which allows us to take advantage of codes that already exist to efficiently solve the 3D and 0D problems. Moreover, the 0D model may be modified without changing the 3D solver, and vice versa. Using ANM as a foundation, we reproduce the equations of the original coupling framework described in [19, 33]. In addition, we show the equations apply not only to 3D fluid - 0D fluid coupling, but also to 3D structure - 0D fluid coupling. We begin by applying Newton's method to solve Eq. (35) in a monolithic manner, which yields a linear system to be solved at each Newton iteration \(k\), \[\begin{bmatrix}\dfrac{\partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}&\dfrac{\partial\mathbf{R}^{0D}}{\partial\mathbf{\Phi}_{n+1}}\\ \dfrac{\partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}&\dfrac{\partial\mathbf{R}^{3D}}{\partial\mathbf{\Phi}_{n+1}}\end{bmatrix}_{n+1}^{(k)}\begin{bmatrix}\Delta\mathbf{w}_{n+1}^{(k)}\\ \Delta\mathbf{\Phi}_{n+1}^{(k)}\end{bmatrix}=-\begin{bmatrix}\mathbf{R}^{0D}\\ \mathbf{R}^{3D}\end{bmatrix}_{n+1}^{(k)}, \tag{36}\] with the update \[\mathbf{\Phi}_{n+1}^{(k+1)}=\mathbf{\Phi}_{n+1}^{(k)}+\Delta\mathbf{\Phi}_{n+1}^{(k)}\quad\text{ and }\quad\mathbf{w}_{n+1}^{(k+1)}=\mathbf{w}_{n+1}^{(k)}+\Delta\mathbf{w}_{n+1}^{(k)}. \tag{37}\] As before, the notation \([\,\cdot\,]_{n+1}^{(k)}\) indicates that terms inside the brackets are evaluated at \(\mathbf{\Phi}_{n+1}^{(k)}\) and \(\mathbf{w}_{n+1}^{(k)}\). Performing Schur Complement Reduction [38] (also known as Block Gauss Elimination or Static Condensation) yields the equivalent system \[\left[\begin{matrix}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}&\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{\Phi}_{n+1}}\\ \mathbf{0}&\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{\Phi}_{n+1}}-\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{\Phi}_{n+1}}\end{matrix}\right]_{n+1}^{(k)}\begin{bmatrix}\Delta\mathbf{w}_{n+1}^{(k)}\\ \Delta\mathbf{\Phi}_{n+1}^{(k)}\end{bmatrix}\\ =-\begin{bmatrix}\mathbf{R}^{0D}\\ \mathbf{R}^{3D}-\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\mathbf{R}^{0D}\end{bmatrix}_{n+1}^{(k)}. 
\tag{38}\] From this, \(\Delta\mathbf{\Phi}_{n+1}^{(k)}\) can be determined by solving the linear system from the bottom row \[\left[\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{\Phi}_{n+1}}-\frac{ \partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial \mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\frac{\partial\mathbf{ R}^{0D}}{\partial\mathbf{\Phi}_{n+1}}\right]_{n+1}^{(k)}\Delta\mathbf{\Phi}_{n+1}^{(k)} = -\left[\mathbf{R}^{3D}-\frac{\partial\mathbf{R}^{3D}}{\partial \mathbf{w}_{n+1}}\Big{(}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n +1}}\Big{)}^{-1}\mathbf{R}^{0D}\right]_{n+1}^{(k)}. \tag{39}\] This is identical to the linear system for the uncoupled 3D problem Eq. (21), except for additional contributions to the 3D model's residual and tangent from the 0D model. For convenience, we denote the 0D contribution to the 3D tangent \(\mathbf{K}^{3D/0D}\), where \[\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}=-\left[\frac{\partial\mathbf{R}^{ 3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial\mathbf{R}^{0D}}{\partial \mathbf{w}_{n+1}}\Big{)}^{-1}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{ \Phi}_{n+1}}\right]_{n+1}^{(k)}. \tag{40}\] As will be shown, rather than considering the 0D contribution to the 3D residual, it is more convenient to consider the entire 0D-modified 3D residual \(\mathbf{R}^{3D/0D}\), where \[\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}=\left[\mathbf{R}^{3D}-\frac{ \partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial \mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\mathbf{R}^{0D}\right]_ {n+1}^{(k)}. \tag{41}\] Thus, the solution strategy is as follows: 1. Approximate \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}.\) 2. Approximate \(\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}.\) 3. Solve the modified 3D linear system \[\left[\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{\Phi}_{n+1}}+\mathbf{K}^{ 3D/0D}\right]_{n+1}^{(k)}\Delta\mathbf{\Phi}_{n+1}^{(k)}=-\left[\mathbf{R}^{3D /0D}\right]_{n+1}^{(k)}.\] (42) We then perform the Newton update with \(\Delta\mathbf{\Phi}_{n+1}^{(k)}\), proceed to the next Newton iteration \(k+1\), and repeat until convergence. Note that because we use RK4 (an explicit scheme) for the 0D system, the 0D system does not depend on an updated guess for \(\mathbf{w}_{n+1}\), and thus we do not need to compute \(\Delta\mathbf{w}_{n+1}^{(k)}\). The approximations for \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}\) and \(\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}\) are performed using a fixed point iteration operator \(F^{0D}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1})\) for the 0D system, which is introduced next. Then, explicit expressions for the two terms are provided. #### 2.4.1 0D fixed point iteration operator Here, we introduce the 0D fixed point iteration operator, which is necessary for the 0D residual and tangent approximations to follow. Assume we have an operator \(F^{0D}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1})\) such that the iteration \(\mathbf{w}_{n+1}^{(m-1)}=F^{0D}(\mathbf{w}_{n+1}^{(m)},\mathbf{\Phi}_{n+1})\) converges to the solution of \(\mathbf{R}^{0D}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1})=\mathbf{0}\) (for fixed \(\mathbf{\Phi}_{n+1}\)). One can identify this operator for nearly all conceivable 0D solvers, implicit or explicit. In this work, RK4 is used to integrate the 0D system, and the fixed point operator corresponding to RK4, \(F^{0D,RK4}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1})\), is provided in Appendix A. As was done for \(\mathbf{R}^{0D}\) Eq. 
(35), it is convenient to view \(F^{0D}\) as a function of \(\mathbf{\Phi}_{n+1}\) through the coupling flow rates \(\mathbf{Q}_{n+1}\) as, \[F^{0D}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1})=F^{0D}(\mathbf{w}_{n+1},\mathbf{Q} _{n+1}(\mathbf{\Phi}_{n+1})). \tag{43}\] We briefly list some relevant features of \(F^{0D,RK4}\). Because RK4 is an explicit scheme, \(F^{0D,RK4}\) is a function of \(\mathbf{\Phi}_{n+1}\), but is not a function of \(\mathbf{w}_{n+1}\). However, for generality and in case one is interested in using an implicit scheme, we retain its dependence on \(\mathbf{w}_{n+1}\) in the remainder of the derivation. It is a fixed point operator corresponding to Newton's method, which converges in one iteration; in other words, \(\mathbf{w}_{n+1}^{(m+1)}=F^{0D,RK4}(\mathbf{w}_{n+1}^{(m)},\mathbf{Q}_{n+1}( \mathbf{\Phi}_{n+1}))\) converges to the solution of \(\mathbf{R}^{0D}(\mathbf{w}_{n+1},\mathbf{Q}_{n+1}(\mathbf{\Phi}_{n+1}))= \mathbf{0}\) in only one step regardless of \(\mathbf{w}_{n+1}^{(m)}\). Finally, on a practical note, we emphasize that \(F^{0D}\) is also a function of \(\mathbf{w}_{n}\) and \(\mathbf{Q}_{n}\), which are assumed to be known. See Appendix A for more details. With \(F^{0D}\) described, we continue by deriving explicit expressions for \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}\) and \(\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}\). #### 2.4.2 0D-modified 3D residual In this section, an explicit expression is derived for \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}\). Recall Eq. (41), \[\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}=\left[\mathbf{R}^{3D}-\frac{ \partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial \mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\mathbf{R}^{0D}\right]_ {n+1}^{(k)}.\] First, use a finite difference approximation as suggested in [37], \[\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}=\left[\mathbf{R}^{3D}-\frac{ \partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial \mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\mathbf{R}^{0D}\right]_ {n+1}^{(k)}\approx\mathbf{R}^{3D}(\mathbf{\Phi}_{n+1}^{(k)},\tilde{\mathbf{w}} _{n+1}^{(k)}), \tag{44}\] where \[\tilde{\mathbf{w}}_{n+1}^{(k)}=\mathbf{w}_{n+1}^{(k)}-\left[\Big{(}\frac{ \partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}}\Big{)}^{-1}\mathbf{R}^{0D} \right]_{n+1}^{(k)}. \tag{45}\] Note that in our case, the 3D residual \(\mathbf{R}^{3D}\) is linear in the 0D variables \(\mathbf{w}_{n+1}\), so this approximation is exact. Eq. (45) is in fact one Newton iteration to solve the 0D system at fixed \(\mathbf{\Phi}_{n+1}^{(k)}\) (or \(\mathbf{Q}_{n+1}^{(k)}\)). Thus, we may approximate it using the 0D fixed point operator \[\tilde{\mathbf{w}}_{n+1}^{(k)}\approx F^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{ \Phi}_{n+1}^{(k)}). \tag{46}\] If \(F^{0D}\) is a Newton iteration, as in our case, this approximation is exact. In terms of implementation, at each Newton iteration, first compute \(\tilde{\mathbf{w}}_{n+1}^{(k)}\) using Eq. (46). Recalling \(\mathbf{R}^{3D}\) is a function of \(\mathbf{w}_{n+1}\) through coupling pressures (Eq. (35)), next compute modified pressures defined as \[\tilde{\mathbf{P}}_{n+1}^{(k)}=\mathbf{P}\tilde{\mathbf{w}}_{n+1}^{(k)}. \tag{47}\] Finally, evaluate \[\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}=\mathbf{R}^{3D}(\mathbf{\Phi}_{n +1}^{(k)},\tilde{\mathbf{P}}_{n+1}^{(k)}). \tag{48}\] This last step can be implemented as follows in a 3D finite element solver. 
Splitting the residual into a term from the coupled Neumann boundaries and terms from all other residual contributions (internal stresses, other boundary conditions, etc.), \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}\) is computed as follows: \[\left(R^{3D/0D}\right)_{n+1,Ai}^{(k)}=\text{Uncoupled residual terms}+\sum_{m=1}^{n^{ eBC}}\int_{\Gamma_{k_{e}}^{(m)}}N_{A}\tilde{P}_{n+1,m}^{(k)}n_{i}d\Gamma, \tag{49}\] where \(A\) is the node index, \(i\) indexes the spatial dimension, \(m\) indexes the coupled Neumann boundaries, of which there are \(n^{eBC}\), and \(\Gamma_{h_{e}}^{(m)}\) is the surface corresponding to coupled Neumann boundary \(m\). \(N_{A}\) is the shape function for node A, \(\tilde{P}_{n+1,m}^{(k)}\) is the \(m\)th component of \(\tilde{\mathbf{P}}_{n+1}^{(k)}\), and \(n_{i}\) is the \(i\)th component of the outward surface normal. The integral expression in Eq. (49) is the same as for any other (uncoupled) pressure boundary condition; the only difference is that the value of the pressure \(\tilde{P}_{n+1,j}^{(k)}\) is obtained by communicating with the 0D solver. If \(\mathbf{R}^{3D}\) contains momentum and continuity components (as in the fluid system Eq. (10)), the contribution of the coupled Neumann boundary conditions should be assembled into the _momentum_ equation residual. **Remark:** We point out the minor difference in Eq. (49) when considering a 3D fluid - 0D fluid problem vs. a 3D structure - 0D fluid problem. For a 3D fluid, we may consider the fluid domain to be fixed (non-deforming), so the integral is taken over the coupled surface in the reference configuration. For a 3D structure, we typically assume a "follower pressure load". Thus, if the structure deforms, the integral is taken over the coupled surface in the current (deformed) configuration with the current surface normal vector, corresponding to timestep \(n+1\). #### 2.4.3 0D contribution to 3D tangent In this section, an explicit expression is derived for \(\big{[}\mathbf{K}^{3D/0D}\big{]}_{n+1}^{(k)}\). Recall Eq. (40), \[\big{[}\mathbf{K}^{3D/0D}\big{]}_{n+1}^{(k)}=-\left[\frac{\partial\mathbf{R}^{3 D}}{\partial\mathbf{w}_{n+1}}\Big{(}\frac{\partial\mathbf{R}^{0D}}{\partial \mathbf{w}_{n+1}}\Big{)}^{-1}\frac{\partial\mathbf{R}^{0D}}{\partial\Phi_{n+1 }}\right]_{n+1}^{(k)}.\] Following [32], define a matrix \(\mathbf{C}\) \[\mathbf{C}=\Big{(}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{w}_{n+1}} \Big{)}^{-1}\frac{\partial\mathbf{R}^{0D}}{\partial\mathbf{\Phi}_{n+1}}, \tag{50}\] so that \[\big{[}\mathbf{K}^{3D/0D}\big{]}_{n+1}^{(k)}=-\left[\frac{\partial\mathbf{R}^ {3D}}{\partial\mathbf{w}_{n+1}}\mathbf{C}\right]_{n+1}^{(k)}.\] The key approximation is provided in [32], in which it was shown that \(\mathbf{C}\) can be reasonably approximated by \[\mathbf{C}\approx-\frac{\partial F^{0D}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1} )}{\partial\mathbf{\Phi}_{n+1}}=-\frac{\partial F^{0D}(\mathbf{w}_{n+1}, \mathbf{Q}_{n+1})}{\partial\mathbf{Q}_{n+1}}\frac{\partial\mathbf{Q}_{n+1}}{ \partial\mathbf{U}_{n+1}}\frac{\partial\mathbf{U}_{n+1}}{\partial\mathbf{ \Phi}_{n+1}}, \tag{51}\] where we have used the chain rule and the fact that \(F^{0D}\) is a function of \(\mathbf{\Phi}_{n+1}\) through the flow rates \(\mathbf{Q}_{n+1}\) and the nodal velocities \(\mathbf{U}_{n+1}\) (Eqs. (32) and (33)). 
For the other term, \(\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}\), we also apply the chain rule and the fact that \(\mathbf{R}^{3D}\) is a function of \(\mathbf{w}_{n+1}\) through the pressures \(\mathbf{P}_{n+1}\) (Eq. (29)) \[\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{w}_{n+1}}=\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{P}_{n+1}}\frac{d\mathbf{P}_{n+1}}{d\mathbf{w}_{n+1}}=\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{P}_{n+1}}\mathbb{P}. \tag{52}\] Thus, the 0D tangent matrix contribution is \[\big{[}\mathbf{K}^{3D/0D}\big{]}_{n+1}^{(k)}=\left[\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{P}_{n+1}}\mathbb{P}\frac{\partial F^{0D}(\mathbf{w}_{n+1},\mathbf{Q}_{n+1})}{\partial\mathbf{Q}_{n+1}}\frac{\partial\mathbf{Q}_{n+1}}{\partial\mathbf{U}_{n+1}}\frac{\partial\mathbf{U}_{n+1}}{\partial\mathbf{\Phi}_{n+1}}\right]_{n+1}^{(k)}. \tag{53}\] All partial derivatives are computed analytically (see Appendix C for details) except for \[\left[\mathbb{P}\frac{\partial F^{0D}(\mathbf{w}_{n+1},\mathbf{Q}_{n+1})}{\partial\mathbf{Q}_{n+1}}\right]_{n+1}^{(k)}.\] This term, which is denoted by \(\mathbf{M}\), is computed in a finite difference manner by communicating with the 0D solver as \[M_{ij} =\left[\mathbb{P}_{ip}\Big{(}\frac{\partial F^{0D}(\mathbf{w}_{n+1},\mathbf{Q}_{n+1})}{\partial\mathbf{Q}_{n+1}}\Big{)}_{pj}\right]_{n+1}^{(k)}\] \[\approx\frac{\mathbb{P}_{ip}F_{p}^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)}+\epsilon\mathbf{e}_{j})-\mathbb{P}_{ip}F_{p}^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)})}{\epsilon},\] where \(\mathbf{e}_{j}\) is the \(j\)th unit vector and \(\epsilon\) is a small numerical perturbation. Note that for the evaluation of \(\big{[}\mathbf{R}^{3D/0D}\big{]}_{n+1}^{(k)}\) we already require \[\mathbb{P}F^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)})=\mathbb{P}\tilde{\mathbf{w}}_{n+1}^{(k)}=\tilde{\mathbf{P}}_{n+1}^{(k)}.\] The slightly perturbed quantity, \(\mathbb{P}F^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)}+\epsilon\mathbf{e}_{j})\), can be computed in exactly the same manner. We define \(\tilde{\mathbf{w}}_{n+1,\epsilon_{j}}^{(k)}\) and \(\tilde{\mathbf{P}}_{n+1,\epsilon_{j}}^{(k)}\) such that \[\mathbb{P}F^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)}+\epsilon\mathbf{e}_{j})=\mathbb{P}\tilde{\mathbf{w}}_{n+1,\epsilon_{j}}^{(k)}=\tilde{\mathbf{P}}_{n+1,\epsilon_{j}}^{(k)}.\] Thus, we may write \[M_{ij}=\frac{\tilde{P}_{n+1,\epsilon_{j},i}^{(k)}-\tilde{P}_{n+1,i}^{(k)}}{\epsilon}. \tag{54}\] In this form, it is revealed that \(M_{ij}\) is a resistance-like quantity that describes how pressure at coupled surface \(i\) changes with flow rate at coupled surface \(j\). As argued in [19], the off-diagonal entries of \(M_{ij}\) are ignored in this work. The resulting 0D tangent contribution is given as (again, see Appendix C for derivation) \[\left(K^{3D/0D}\right)_{n+1,AiBj}^{(k)}=\sum_{l=1}^{n^{cBC}}\sum_{m=1}^{n^{cBC}}\gamma\Delta tM_{lm}\int_{\Gamma_{h_{c}}^{(l)}}N_{A}n_{i}d\Gamma\int_{\Gamma_{h_{c}}^{(m)}}N_{B}n_{j}d\Gamma. \tag{55}\] The variable definitions are the same as for \(\mathbf{R}^{3D/0D}\) (a vector), with the addition of indices (\(B\), \(j\), \(m\)) since we are now dealing with a matrix. Analogous to the 0D-modified 3D residual, the 0D tangent contribution should be assembled into the _momentum-acceleration_ block of the tangent matrix, if applicable. **Remark:** For a 3D structure - 0D fluid problem, the integrals in Eq. 
(55) should be taken over the coupled surface in the current (deformed) configuration with the current surface normal vector, corresponding to timestep \(n+1\). **Remark:** Eq. (49) (0D-modified 3D residual) and Eq. (55) (0D contribution to the 3D tangent) can be found in [19, 33], in slightly different notation. However, we obtain these expressions through a new derivation involving ANM and show that they apply not only to the 3D fluid - 0D fluid problem, but also to the 3D structure - 0D fluid problem. #### 2.4.4 Summary The coupling strategy is summarized below. See Fig. 4 for a graphic representation of the required computations and communications. At each Newton iteration \(k\), 1. Compute \(\mathbf{Q}_{n+1}^{(k)}=f(\mathbf{\Phi}_{n+1}^{(k)})\), where \(f\) is some function according to Eqs. (32) and (33), and communicate with the 0D solver. The 3D solver must also compute and send \(\mathbf{Q}_{n}=f(\mathbf{\Phi}_{n})\), which is also required by the 0D solver (see Appendix A for more details). 2. Compute \[\tilde{\mathbf{w}}_{n+1}^{(k)} =F^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)};\mathbf{w}_ {n},\mathbf{Q}_{n}),\] \[\tilde{\mathbf{w}}_{n+1,\epsilon_{j}}^{(k)} =F^{0D}(\mathbf{w}_{n+1}^{(k)},\mathbf{Q}_{n+1}^{(k)}+\epsilon \mathbf{e}_{j};\mathbf{w}_{n},\mathbf{Q}_{n}),\quad j\in\{1,\ldots,n_{bc}\}.\] This requires \(n_{bc}+1\) calls to the 0D solver. 3. Compute \[\tilde{\mathbf{P}}_{n+1}^{(k)} =\mathbb{P}\tilde{\mathbf{w}}_{n+1}^{(k)},\] \[\tilde{\mathbf{P}}_{n+1,\epsilon_{j}}^{(k)} =\mathbb{P}\tilde{\mathbf{w}}_{n+1,\epsilon_{j}}^{(k)},\quad j\in \{1,\ldots,n_{bc}\}.\] This requires the 0D to simply extract the proper components of \(\tilde{\mathbf{w}}_{n+1}^{(k)}\) and \(\tilde{\mathbf{w}}_{n+1,\epsilon_{j}}^{(k)}\) and send them to 3D. 4. Given \(\tilde{\mathbf{P}}_{n+1}^{(k)}\), compute \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}\) using Eq. (49). 5. Given \(\tilde{\mathbf{P}}_{n+1}^{(k)}\) and \(\tilde{\mathbf{P}}_{n+1,\epsilon_{j}}^{(k)},j\in\{1,\ldots,n_{bc}\}\), compute \(\mathbf{M}\) using Eq. (54) and construct \(\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}\) using Eq. (55). 6. Solve the modified 3D linear system Eq. (42) \[\left[\frac{\partial\mathbf{R}^{3D}}{\partial\mathbf{\Phi}_{n+1}}+\mathbf{K}^ {3D/0D}\right]_{n+1}^{(k)}\Delta\mathbf{\Phi}_{n+1}^{(k)}=-\left[\mathbf{R}^ {3D/0D}\right]_{n+1}^{(k)},\] for \(\Delta\mathbf{\Phi}_{n+1}^{(k)}\). The 3D degrees of freedom are updated with \(\Delta\mathbf{\Phi}_{n+1}^{(k)}\), the Newton iteration index is incremented \(k\to k+1\), and the process repeats until \(\left[\mathbf{R}^{3D/0D}\right]_{n+1}^{(k)}\) falls below a prescribed relative or absolute tolerance. ### Numerical considerations The 0D tangent contribution \(\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}\) is dense and adding it explicitly to \(\left[\frac{\partial\mathbf{R}^{3D}}{\partial\Phi_{n+1}}\right]_{n+1}^{(k)}\) may deteriorate the performance of the linear solver, which typically takes advantage of the sparse structure of tangent matrices arising in finite element solvers [39]. To address this, we rewrite Eq. (55) as \[\left(K^{3D/0D}\right)_{n+1,AiBj}^{(k)}=\sum_{l=1}^{n^{nB0}}\sum_{m=1}^{n^{nB 0}}\gamma\Delta tM_{lm}(S_{l})_{Ai}(S_{m})_{Bj},\quad(S_{l})_{Ai}=\int_{\Gamma _{h_{e}}^{(l)}}N_{A}n_{i}d\Gamma. \tag{56}\] That is, \(\left[\mathbf{K}^{3D/0D}\right]_{n+1}^{(k)}\) is the sum of rank 1 matrices. 
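To make the rank-one structure explicit, the following schematic sketch applies the 0D contribution of Eq. (56) to a vector without ever forming the dense matrix, assuming the boundary integral vectors \(\mathbf{S}_{l}\) have already been assembled. It is shown only for illustration and anticipates the matrix-free storage strategy discussed in the next paragraph.

```python
import numpy as np

def apply_K3D0D(v, S, M, gamma, dt):
    """Matrix-free application of the 0D tangent contribution of Eq. (56) to a vector v.

    S : list of assembled boundary vectors, S[l] corresponds to coupled surface l
    M : (n, n) resistance-like matrix (off-diagonal entries may already be zeroed)
    Returns sum over l, m of gamma*dt*M[l, m] * S[l] * (S[m] . v), a sum of rank-one terms.
    """
    out = np.zeros_like(v)
    for l, S_l in enumerate(S):
        for m, S_m in enumerate(S):
            out += gamma * dt * M[l, m] * float(S_m @ v) * S_l
    return out
```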
It is more efficient to store the vectors \(\mathbf{S}_{l}\) separately and apply them when needed (in matrix multiplication and in preconditioning) than to explicitly form the outer product and add it to the tangent [33]. The 0D tangent contribution also increases the condition number of the linear system, proportional to the resistance of the coupled Neumann boundaries (i.e., \(\mathbf{M}\)). This can cause poor performance of standard iterative linear solvers. Resistance-based preconditioning is an effective remedy [40; 33]. ### Capping non-closed surfaces and consequences The proposed coupling method requires computing flow rates from the 3D domain. For a 3D structure, this flow rate is identical to the negative rate of change of the enclosed volume. An important complication arises if the structure does not enclose a volume, such as in modeling the mechanics of the heart muscle. When the cardiac valves are closed, each cardiac chamber encloses a volume. However, the valves are often ignored when modeling the heart. As an example, Fig. 5 (left) shows an idealized LV, where the inflow and outflow valves are omitted. Here, we would like to couple the endocardial surface to a 0D fluid model, but the endocardial surface is not closed. Not accounting for this will lead to an inaccurately computed flow rate. In this work, we address this issue by introducing a "cap," a surface that closes the endocardial surface, thereby defining an enclosed fluid-tight volume (Fig. 5 right). Stated formally, if a coupled surface \(\Gamma_{h_{c}}^{(i)}\) is not closed, we consider a cap surface \(\Gamma_{h_{c},cap}^{(i)}\) such that \(\Gamma_{h_{c}}^{(i)}\cup\Gamma_{h_{c},cap}^{(i)}\) is a closed surface. Because there is some flexibility in defining the cap surface, in our formulation, we leave the responsibility of constructing it to the user. Methods exist for constructing such surfaces, such as the ear clipping algorithm [41] or the vtkFillHolesFilter from VTK [42], the latter being used in this work. Figure 4: Communication diagram between 3D and 0D solvers. At each timestep \(n\), we iterate until convergence. At each Newton iteration \(k\), we perform 6 computations/communications. These steps correspond to the coupling algorithm summary in Section 2.4.4. We note that this cap surface is used only for computing the flow rate and should not be treated in the same manner as a boundary of the 3D model. In particular, even though the cap surface is used to compute the flow rate, the coupling pressure is not applied to the cap surface. The consequences are as follows. When computing flow rates, the cap surface is included in the integral in Eq. (32), \[Q_{i}=\sum_{A}\Big{(}\int_{\Gamma_{h_{c}}^{(i)}}N_{A}(\mathbf{U})_{A}\cdot\mathbf{n}d\Gamma+\int_{\Gamma_{h_{c},cap}^{(i)}}N_{A}(\mathbf{U})_{A}\cdot\mathbf{n}d\Gamma\Big{)}.\] When computing the 0D-modified residual, the cap surface is not included in the pressure integral in Eq. (49); it is unchanged, \[\Big{(}R^{3D/0D}\Big{)}_{n+1,Ai}^{(k)}=\text{Uncoupled residual terms}+\sum_{m=1}^{n^{cBC}}\int_{\Gamma_{h_{c}}^{(m)}}N_{A}\tilde{P}_{n+1,m}^{(k)}n_{i}d\Gamma.\] ### Idealized LV coupled to an open-loop LPN The fiber orientation field \(\mathbf{f}\) of the idealized LV model is defined so that fiber angles vary linearly from \(+60^{\circ}\) (relative to circumferential) on the endocardial surface to \(-60^{\circ}\) on the epicardial surface. The material model also requires a "sheet" orientation field \(\mathbf{s}\), which in this work is perpendicular to \(\mathbf{f}\) and to the ellipsoidal normal direction \(\mathbf{n}\). See Fig. 6b for visualization. 
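For illustration, a rule-based fiber and sheet field of this kind can be sketched as follows. The routine below is a simplified stand-in (it assumes a transmural coordinate and a local circumferential/longitudinal/normal basis are already available) rather than the procedure actually used to generate the fields shown in Fig. 6b.

```python
import numpy as np

def fiber_and_sheet(d, e_circ, e_long, e_norm, alpha_endo=60.0, alpha_epi=-60.0):
    """Rule-based fiber and sheet directions at a point in the wall.

    d      : transmural coordinate, 0 on the endocardium and 1 on the epicardium
    e_circ : local circumferential unit vector
    e_long : local longitudinal (apex-to-base) unit vector
    e_norm : local wall-normal unit vector
    The fiber angle relative to the circumferential direction varies linearly
    from alpha_endo to alpha_epi across the wall.
    """
    alpha = np.deg2rad(alpha_endo + (alpha_epi - alpha_endo) * d)
    f = np.cos(alpha) * e_circ + np.sin(alpha) * e_long   # fiber direction in the tangent plane
    s = np.cross(e_norm, f)                               # sheet direction, perpendicular to f and n
    return f, s / np.linalg.norm(s)
```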
The myocardium is modeled with a Holzapfel-Ogden strain energy, plus a quadratic volumetric penalty term, as well as a viscous pseudo-potential and active stress along fiber directions to recapitulate cardiac contraction [44; 45]. The second Piola-Kirchhoff stress is thus given by \[\mathbf{S}=\frac{\partial}{\partial\mathbf{E}}(\psi_{HO}+\psi_{ vol})+\frac{\partial}{\partial\mathbf{E}}(\psi_{visc})+\mathbf{S}_{act}, \tag{57}\] \[\psi_{HO}=\frac{a}{2b}\Big{(}e^{b(\bar{I}_{1}-3)}-1\Big{)}+\frac{ a_{fs}}{2b_{fs}}\Big{(}e^{b_{fs}I_{8,fs}^{2}}-1\Big{)}+\sum_{i\in\{f,s\}} \chi(I_{4,i})\frac{a_{i}}{2b_{i}}\Big{(}e^{b_{i}(I_{4,i}-1)^{2}}-1\Big{)},\] (58) \[\psi_{vol}=\frac{\kappa}{2}\,(1-J)^{2},\] (59) \[\psi_{visc}=\frac{\eta}{2}\text{tr}(\hat{\mathbf{E}}^{2}), \tag{60}\] where \(a_{i}\) and \(b_{i}\) are material parameters, \(\kappa\) is a volumetric penalty parameter, and the strain invariants are defined as \[\bar{I}_{1} =J^{-2/3}I_{1}\text{, where }I_{1}=\text{tr}(\mathbf{C}), \tag{61}\] \[I_{4,f} =\mathbf{f}\cdot\mathbf{C}\mathbf{f},\] \[I_{4,s} =\mathbf{s}\cdot\mathbf{C}\mathbf{s},\] \[I_{8,fs} =\mathbf{f}\cdot\mathbf{C}\mathbf{s}.\] Figure 6: Problem setup for the coupled idealized LV example. a) An idealized LV model, shown in cut-view, is coupled to an open-loop LPN of the systemic circulation (LPN parameters are given in Table 1). The LV is supported by Robin boundary conditions (denoted by the spring-dashpot assemblies) on the base and epicardial surfaces, and periodic active stress along fiber directions causes the LV to contract. b) Fiber \(\mathbf{f}\) (left) and sheet \(\mathbf{s}\) (right) orientation fields for the idealized LV model. Arrows denote the local fiber or sheet direction, and they are colored by their component along the longitudinal axis. Fiber angles (relative to circumferential) vary from \(+60^{\circ}\) on the endocardial surface to \(-60^{\circ}\) degrees on the epicardial surface. c) Fiber stress curve (top) and prescribed atrial pressure curve (bottom). Here, we assume the cardiac cycle duration is 1s. Atrial systole is set to begin at \(t_{sys,a}=0\text{ms}\) and lasts for a duration \(T_{sys,a}=200\text{ms}\). Atrial pressure ranges between 6 \(\mathrm{mmHg}\) and 14 \(\mathrm{mmHg}\). Ventricular systole is set to begin at \(t_{sys,v}=143\text{ms}\), and the active stress reaches a maximum value of approximately 65 \(\mathrm{kPa}\) (488 \(\mathrm{mmHg}\)). \(\eta\) is the viscosity and \(\dot{\mathbf{E}}\) is the rate of Green-Lagrange strain tensor. \(\chi(x)\) is a smoothed Heaviside function centered at \(x=1\) with smoothing parameter \(k\) \[\chi(x)=\frac{1}{1+e^{-k(x-1)}} \tag{62}\] Finally, the active stress is applied along fiber directions to recapitulate cardiac contraction, \[\mathbf{S}_{act}=\tau(t)\cdot\mathbf{f}\otimes\mathbf{f}. \tag{63}\] \(\tau(t)\) is determined using the model from [45], yielding the active stress curve shown in Fig. 6c (top). On \(\Gamma_{epi}\), we apply a Robin boundary condition in the normal direction only, following [45] to mimic the effect of the pericardium. On \(\Gamma_{base}\), we apply Robin boundary conditions in all directions. On \(\Gamma_{endo}\), we use a coupled Neumann boundary condition (Fig. 6a). The initial LV geometry is set as the stress-free reference configuration, and the simulation is initialized with zero displacements and velocities. In this example, the 0D fluid is an open-loop Windkessel-type model of the systemic circulation [45], shown in Fig. 6a. 
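For reference, the energy evaluation implied by Eqs. (58), (61), and (62) can be sketched as below; this is an illustrative stand-alone routine (stresses would follow by differentiation as in Eq. (57)), and the parameter values in the usage example are order-of-magnitude placeholders rather than the values used in our simulations.

```python
import numpy as np

def psi_HO(F, f0, s0, a, b, af, bf, a_s, b_s, afs, bfs, k_smooth=100.0):
    """Holzapfel-Ogden strain energy, Eq. (58), for a deformation gradient F."""
    C = F.T @ F
    J = np.linalg.det(F)
    I1_bar = J ** (-2.0 / 3.0) * np.trace(C)                        # Eq. (61)
    I4f, I4s, I8fs = f0 @ C @ f0, s0 @ C @ s0, f0 @ C @ s0
    chi = lambda x: 1.0 / (1.0 + np.exp(-k_smooth * (x - 1.0)))     # smoothed Heaviside, Eq. (62)
    psi = a / (2.0 * b) * (np.exp(b * (I1_bar - 3.0)) - 1.0)
    psi += afs / (2.0 * bfs) * (np.exp(bfs * I8fs ** 2) - 1.0)
    psi += chi(I4f) * af / (2.0 * bf) * (np.exp(bf * (I4f - 1.0) ** 2) - 1.0)
    psi += chi(I4s) * a_s / (2.0 * b_s) * (np.exp(b_s * (I4s - 1.0) ** 2) - 1.0)
    return psi

# Placeholder usage: small isochoric stretch along the fiber direction.
F = np.diag([1.1, 1.0 / np.sqrt(1.1), 1.0 / np.sqrt(1.1)])
f0, s0 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(psi_HO(F, f0, s0, a=0.06, b=8.0, af=18.0, bf=16.0,
             a_s=2.5, b_s=11.0, afs=0.2, bfs=11.0))
```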
Performing a nodal analysis on this LPN yields the following set of equations: \[\begin{split}\frac{p_{v}-p_{at}}{R_{av}}+\frac{p_{v}-p_{p}}{R_{sl}}-Q_{1}&=0,\\ q_{p}-\frac{p_{v}-p_{p}}{R_{sl}}+C_{p}\dot{p}_{p}&=0,\\ q_{p}+\frac{p_{d}-p_{p}}{R_{p}}+\frac{L_{p}}{R_{p}}\dot{q}_{p}&=0,\\ \frac{p_{d}-p_{ref}}{R_{d}}-q_{p}+C_{d}\dot{p}_{d}&=0.\end{split} \tag{64}\] The atrial pressure \(p_{at}\) is a prescribed function of time, given by \[p_{at}=\begin{cases}\frac{\Delta p_{at}}{2}\Big{(}1-\cos\frac{2\pi(t-t_{sys,a})}{T_{sys,a}}\Big{)}+p_{at0},&\text{for }t_{sys,a}<t<t_{sys,a}+T_{sys,a},\\ p_{at0},&\text{otherwise}.\end{cases} \tag{65}\] See Fig. 6c (bottom) for a plot. The atrioventricular (av) and semilunar (sl) valves are modeled as diodes with nonlinear resistances \(R_{av}\) and \(R_{sl}\) that depend on the pressure differential across them: \[R_{av}=R_{min}+(R_{max}-R_{min})S^{+}(p_{v}-p_{at}), \tag{66}\] \[R_{sl}=R_{min}+(R_{max}-R_{min})S^{+}(p_{p}-p_{v}), \tag{67}\] where \(S^{+}\) is a sigmoid function with steepness parameter \(k_{p}\). To cast this into the general form of Section 2.3, we identify the 0D unknowns as \(\mathbf{w}=[p_{v},p_{p},p_{d},q_{p}]^{T}\), while the coupled Dirichlet boundary data are \(\mathbf{Q}=[Q_{1}]^{T}\). The uncoupled Neumann boundary data are \(\mathbf{p}=[p_{at},p_{ref}]^{T}\), and there are no uncoupled Dirichlet boundary data, \(\mathbf{q}_{u}=[]\). The 0D variables are initialized using values from [45]. The coupled problem for this example can be summarized as \[\begin{cases}\mathcal{P}^{0D}(\mathbf{w},t,[\mathbf{q}_{u},\mathbf{Q}],\mathbf{p})=\mathbf{0}&\text{given by Eqs. (64)-(67)},\\ \text{Initial conditions from [45]},\\ \mathcal{P}^{3D,struct}(\mathbf{u},\mathbf{x},t)=\mathbf{0},\\ \text{Initialize with zero displacement and velocity},\\ \text{(Uncoupled) Robin boundary conditions on }\Gamma_{epi}\text{ and }\Gamma_{base},\\ \boldsymbol{\sigma}\cdot\mathbf{n}=-P_{1}\mathbf{n},\text{ on }\Gamma_{endo},\\ P_{1}(t)=w_{1}(t)=p_{v}(t),\\ Q_{1}(t)=\int_{\Gamma_{endo}(t)}\mathbf{u}(t)\cdot\mathbf{n}(t)d\Gamma.\end{cases} \tag{68}\] A complete table of parameters for this simulation can be found in Appendix D, Table 1. Fig. 7 shows the pressure-volume (PV) loop over 10 cardiac cycles obtained from this coupled simulation. Although the initial volume of the idealized LV is higher than that of a physiological heart, the pressures and volumes remain within physiological ranges. The stroke volume is approximately 90 \(\mathrm{mL}\) and the resulting ejection fraction is 50%, which is comparable to normal physiological values and to values reported in other computational studies with different LV geometries [46, 45, 22]. There is a clear delineation of the four cardiac phases - isovolumic contraction, ejection, isovolumic relaxation, and filling. Additionally, the PV loop reaches a limit cycle after about 5 cardiac cycles. ### Temporal convergence In this section, we present preliminary results on the temporal convergence of the coupling method, applied to the idealized LV model (Section 3.1). In Fig. 8, the PV loop for a single cardiac cycle of the LV model is plotted for several timestep sizes. As the timestep size is decreased from \(10^{-3}\mathrm{s}\) to \(10^{-6}\mathrm{s}\), the PV loop converges. As shown in the figure, the PV loop for \(10^{-3}\mathrm{s}\) is very close to the PV loop for \(10^{-6}\mathrm{s}\), except for the maximum pressure, for which the difference is only about 1%. 
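Returning briefly to the 0D side of the LV example, the forcing and valve laws of Eqs. (65)-(67) are simple to express in code. The sketch below is illustrative only; the resistance bounds, steepness value, and timing values are placeholders and are not the parameters listed in Table 1.

```python
import numpy as np

def S_plus(x, k_p=0.005):
    # Smoothed step function; the steepness value here is a hypothetical placeholder.
    return 1.0 / (1.0 + np.exp(-k_p * x))

def R_av(p_v, p_at, R_min=10.0, R_max=1.0e6):
    # Atrioventricular valve, Eq. (66): near R_min (open) when p_at > p_v, near R_max (closed) otherwise.
    return R_min + (R_max - R_min) * S_plus(p_v - p_at)

def R_sl(p_v, p_p, R_min=10.0, R_max=1.0e6):
    # Semilunar valve, Eq. (67): near R_min (open) when p_v > p_p, near R_max (closed) otherwise.
    return R_min + (R_max - R_min) * S_plus(p_p - p_v)

def p_atrium(t, p_at0, dp_at, t_sys_a=0.0, T_sys_a=0.2):
    # Prescribed atrial pressure, Eq. (65).
    if t_sys_a < t < t_sys_a + T_sys_a:
        return 0.5 * dp_at * (1.0 - np.cos(2.0 * np.pi * (t - t_sys_a) / T_sys_a)) + p_at0
    return p_at0
```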
### Computational cost In this section, wall time is used to compare the computational cost of the 0D-coupled idealized LV simulation (Section 3.1) with two similar but uncoupled simulations - the first with a constant endocardial pressure, and the second with a time-varying endocardial pressure. All simulations were run for one cardiac cycle (1000 timesteps) with a timestep size of \(10^{-3}\mathrm{s}\). All simulations were run in parallel with 4 CPUs of an Intel Gold 5118 2.3 GHz processor, and each case was run 5 times. The mean wall time \(\pm\) standard deviation was computed for each set of simulations. As seen in Fig. 9, after one cardiac cycle, the coupled simulation is approximately 15% slower than the uncoupled simulations. This is due to the extra 0D solver communication and computation that is performed at each Newton iteration of the 3D solver. Note also that around timesteps 200 and 500, the coupled simulation slows down. These timesteps correspond Figure 7: The PV loop for the LV is plotted for 10 cardiac cycles. The PV loop spans a realistic range of pressure and volume, exhibits a clear distinction of the four cardiac phases, and reaches a limit cycle after about 5 cardiac cycles. A movie showing the LV deformation synchronized with the PV loop is provided here: [https://drive.google.com/file/d/17ZCVQqSp-EOBrLB3C7olLT2z2uyNTkWu/view?usp=sharing](https://drive.google.com/file/d/17ZCVQqSp-EOBrLB3C7olLT2z2uyNTkWu/view?usp=sharing). Figure 8: PV loops for a single cardiac cycle of the idealized LV model (Section 3.1) for decreasing timestep size. Insets show zoomed in view at the top and bottom-right portions of the loop. PV loops converge as the timestep size decreases. Figure 9: Comparison of simulation time for 0D-coupled versus uncoupled simulations. “Coupled” is identical to that in Fig. 6, except it is run for only 1 cardiac cycle (1000 timesteps). “Uncoupled (constant P)” is the same except the endocardial pressure is a constant 1500 \(\mathrm{Pa}\) (11.25 \(\mathrm{mmHg}\)). “Uncoupled (variable P)” is the same except the endocardial pressure is given by a prescribed time-varying pressure curve, shown in the inset plot. All simulations were run in parallel with 4 processors. Each simulation was run 5 times, and the mean \(\pm\) standard deviation of those samples are plotted. to the isovolumic phases when both valves are closed, where our method experiences somewhat greater difficulty in convergence. ### Inflation of a spherical shell through its limit point An interesting feature of the coupling framework is that it also permits the investigation of so-called limit point problems in structural mechanics. In this example, a simple 0D LPN is used to inflate a thick-walled sphere at approximately a constant flow rate (Fig. 10). The coupled problem for this example may be stated in the structure of Eq. (34) as \[\begin{cases}p_{in}=p_{high}+Q_{1}R_{high},\\ \mathcal{P}^{3D,struct}(\mathbf{u},\mathbf{x},t)=\mathbf{0},\\ \text{Initialize with zero displacement and velocity},\\ \boldsymbol{\sigma}\cdot\mathbf{n}=-P_{1}\mathbf{n},\text{ on }\Gamma_{inner},\\ \\ P_{1}(t)=w_{1}(t)=p_{in}(t),\\ Q_{1}(t)=\int_{\Gamma_{inner}(t)}\mathbf{u}(t)\cdot\mathbf{n}(t)d\Gamma.\end{cases} \tag{69}\] The LPN is the simplest "constant current" circuit, consisting of a large resistance \(R_{high}\) and a large pressure source \(p_{high}\), chosen together to yield an approximately constant flow rate \(-Q_{1}\) into the sphere. 
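To see why this circuit behaves as a nearly constant current source, note that Eq. (69) gives \(Q_{1}=(p_{in}-p_{high})/R_{high}\), so when \(p_{high}\) and \(R_{high}\) are both large (with their ratio fixed), the flow rate is insensitive to the shell pressure \(p_{in}\). The short sketch below makes this quantitative; the numerical values are hypothetical placeholders chosen only to match the micrometer scale of the example, not the parameters used in the simulation.

```python
import numpy as np

# Hypothetical "constant current" parameters: p_high / R_high sets the target inflow.
R_high = 1.0e20              # large resistance (Pa*s/m^3), placeholder
p_high = 1.0e7               # large pressure source (Pa), placeholder
p_in = np.linspace(0.0, 2.0e3, 5)        # plausible range of shell pressures (Pa)

Q1 = (p_in - p_high) / R_high            # from Eq. (69): p_in = p_high + Q1 * R_high
inflow = -Q1                             # flow rate into the sphere, approximately p_high / R_high
print(inflow)                            # varies by only ~p_in / p_high (a few parts in 1e4) over the range
```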
In this case, the LPN is described by a single algebraic equation for the pressure inside the sphere, \(\mathbf{w}=[p_{in}]^{T}\). The coupled 0D Dirichlet boundary data are \(\mathbf{Q}=[Q_{1}]^{T}\). The uncoupled 0D Neumann boundary data are \(\mathbf{p}=[p_{high}]^{T}\), and there are no uncoupled 0D Dirichlet boundary data \(\mathbf{q}_{u}=[]\). By the coupling conditions, the pressure \(P_{1}\) applied to the inner surface of the sphere, \(\Gamma_{inner}\), is equal to \(p_{in}\). The sphere has an inner radius of \(25\upmu\)m and a thickness of \(2.5\upmu\)m, and it is composed of a neo-Hookean material with material constant \(C_{1}=3\mathrm{kPa}\). With these parameters, the spherical shell displays limit point behavior. Specifically, as the sphere is inflated in a quasi-static process, the pressure increases, reaches a maximum (the limit point, where \(dP/dV=0\)), then decreases [47; 48] (Fig. 10 right). Similar concave down diastolic pressure-volume relations have been observed in embryonic chick hearts [49] and embryonic zebrafish hearts [50], and limit point behavior has been proposed as a possible explanation. This example represents an idealized model of this phenomena in small-scale, embryonic hearts. Limit point behavior presents a challenge to standard simulation methods. To simulate inflation of the sphere, one would typically apply an increasing pressure on the inner surface at constant increments. Unfortunately, this approach would never be able to capture the descending portion of the PV curve; once the applied pressure exceeds the limit Figure 10: Left: A 3D thick spherical shell is coupled to a constant current 0D fluid. The inner surface of the shell is the coupled Neumann boundary. The 0D fluid model has a high-pressure source and a high resistance, which produces an approximately constant flow rate. Right: The pressure-volume relation for the spherical shell is plotted. Markers are placed every 100 ms. Inflating the sphere at roughly a constant rate of change of volume allows the limit point (where \(dP/dV=0\)) to be traversed. point pressure, Newton's method will diverge and the simulation will crash because there is no static equilibrium configuration at that pressure. Techniques exist to traverse the limit point (and similar phenomena like snap-through behavior), the most common being the arc-length method [51]. As shown in Fig. 10, the present coupling framework allows us to traverse this limit point. There is in fact a connection between this coupling method and the arc-length method. In both cases, the load magnitude (pressure) is treated as unknown, and the equations corresponding to the mechanics problem are augmented by additional equations required to determine the load increment or load scaling factor. Actually, ANM was originally applied to solve coupled nonlinear systems arising from the arc-length method [32]. We note that a standard monolithic 3D-0D coupling approach, such as in [20], should also be able to traverse limit points in the same manner, but to the best of our knowledge, no previous studies have applied it to limit point problems. ### 3D fluid - 0D fluid example For completeness, and to illustrate the generality of the coupling scheme to both 3D structure - 0D fluid problems and 3D fluid - 0D fluid problems, we reproduce a simulation similar to those in [52] using the current coupling scheme. 
In this work, the authors applied the original 3D fluid - 0D fluid coupling of [19] to simulate blood flow in the pulmonary arteries coupled to a complex, closed-loop model of the cardiovascular system. They used this model to investigate the hemodynamic effects of left pulmonary artery stenosis after the stage II superior cavo-pulmonary connection (SCPC) surgery. The model is illustrated in Fig. 11. The parameters of the model are taken from Table E1 P2 in [52]. The coupled problem for this example may be stated as, \[\begin{cases}\mathcal{P}^{0D}(\mathbf{w},t,[\mathbf{q}_{u},\mathbf{Q}], \mathbf{p})=\mathbf{0}\text{ (equations not provided)},\\ \text{Initial conditions},\\ \mathcal{P}^{3D,fluid}(\mathbf{u},p,\mathbf{x},t)=\mathbf{0},\\ \text{Initial conditions},\\ \text{(Uncoupled) no-slip boundary conditions on walls},\\ \boldsymbol{\sigma}\cdot\mathbf{n}=-P_{1}\mathbf{n},\text{ on }\Gamma_{h_{c}}^{(1)},\\ \quad\vdots\\ \boldsymbol{\sigma}\cdot\mathbf{n}=-P_{14}\mathbf{n},\text{ on }\Gamma_{h_{c}}^{(14)},\\ \quad P_{1}(t)=(\mathbb{P}\mathbf{w}(t))_{1},\\ \quad\vdots\\ P_{14}(t)=(\mathbb{P}\mathbf{w}(t))_{14},\\ Q_{1}(t)=\int_{\Gamma_{h_{c}}^{(1)}}\mathbf{u}(t)\cdot\mathbf{n}(t)d\Gamma,\\ \quad\vdots\\ Q_{14}(t)=\int_{\Gamma_{h_{c}}^{(14)}}\mathbf{u}(t)\cdot\mathbf{n}(t)d\Gamma,\end{cases} \tag{70}\] where \(\Gamma_{h_{c}}^{(1)},\ldots,\Gamma_{h_{c}}^{(14)}\) are the 14 faces (1 inlet + 13 outlet) of the pulmonary model, which are associated with 14 coupling pressures \(P_{1}\ldots P_{14}\) and flow rates \(Q_{1}\ldots Q_{14}\). For the sake of clarity, when applied to 3D fluid - 0D fluid problems, the coupling code used in the present work is functionally identical to the code used in [52]. We include this section to emphasize that a 3D fluid - 0D fluid problem can be solved using the current unified coupling scheme; however, the method when applied to the 3D fluid - 0D fluid problem and the results in this section are not novel. Fig. 11 shows the 3D fluid and 0D fluid models, as well as the simulation results, which include a visualization of blood flow in the pulmonary arteries and the PV loop of the LPN ventricle component. Other examples of 3D fluid - 0D fluid coupling using the present coupling method can be found in [19, 53, 54]. ## 4 Discussion We present a general framework for coupling 3D models of cardiovascular components - cardiac structures like the LV, as well as vascular blood flow models like the pulmonary arteries - to 0D models of the circulatory system. The novel aspects of this work are the extension of the modular framework of [19] to the 3D structure - 0D fluid problem, a demonstration of the scheme's effectiveness in 0D-coupled cardiac mechanics applications, especially in its ability to avoid the balloon dilemma and capture isovolumic cardiac phases with ease, and finally a new derivation of the coupling scheme based on ANM, which provides important mathematical underpinnings and clearly shows the close connection to the monolithic coupling approach. Previous approaches to the coupling problem use either a partitioned or monolithic strategy, and the advantage of the present coupling is that it adopts a hybrid strategy, retaining the attractive features of both. In a monolithic approach, such as [20; 45], modifying the 0D model can be cumbersome, requiring a detailed understanding of the 3D solver structure. In particular, one must typically derive new expressions for the off-diagonal tangent blocks in the monolithic (Newton) linear system Eq. (36). 
In our approach, one can modify the 0D model without touching the 3D solver at all, and vice versa. As shown in Section 2, this is achieved by applying Schur complement reduction to Eq. (36), which separates the 3D and 0D computations, then using the 0D fixed point operator to approximately solve the modified linear system. The present approach corresponds to the Neumann coupling in [19], in which a Neumann (pressure) boundary condition is imposed on the 3D domain, while a Dirichlet boundary condition is imposed on the 0D domain. In [19], the authors also described a Dirichlet coupling, in which a Dirichlet boundary condition is imposed on the 3D domain, while a Neumann boundary condition is imposed on the 0D domain. In this type of coupling, the shape of the velocity profile on the 3D Dirichlet surface must be chosen. While applicable to fluid simulations, in which one can reasonably assume a parabolic or flat profile, this is an inconvenient limitation for structural mechanics simulations, since in most cases, the shape of the velocity profile cannot reasonably be assumed _a priori_. In this paper, we only consider Neumann coupling. Figure 11: a) A 3D model of the pulmonary arteries is coupled to a closed-loop LPN model of the circulatory system [52]. b) Streamlines of blood flow, colored by pressure, in the pulmonary arterial model during mid-diastole (see red dot in PV loop panel c). c) The PV loop over two cardiac cycles of the LPN ventricle, which is modeled as a time-varying capacitor. A different kind of Dirichlet coupling for the 3D structure - 0D fluid problem may be achieved by imposing a volume for the 3D structure, enforced by augmenting the structural equations with a volume constraint, as in recent works [22, 55, 21]. The augmented equations are then solved using Newton-like methods. Our Neumann coupling approach does not require such a volume constraint. Furthermore, we do not require a stabilization term like that introduced in the loosely coupled approach of [55], which was needed to eliminate unphysical oscillations near the isovolumic phases. Our approach automatically captures the isovolumic phases without unphysical oscillations, and without any adaptive/refined time stepping or any other special numerical techniques. The authors of [19] also identified three variations of the method. In the "implicit" method, the \(\mathbf{M}\) matrix is updated each Newton iteration. In the "semi-implicit" method, \(\mathbf{M}\) is only computed once at the start of the simulation. In the "explicit" method, the entire 0D contribution to the 3D tangent matrix is ignored. In [19], the three variations were compared, and the semi-implicit method was found to provide the best balance of stability and cost-effectiveness. In this work, we described and used only the implicit method. Both the semi-implicit and explicit methods were unstable for our coupled LV simulation, likely due to the significant magnitude and drastic change in resistance produced by the 0D valves. We leave a detailed comparison of the three variations for coupled cardiac mechanics simulations to future work. In multiple examples, we have shown the effectiveness of the present coupling method. To illustrate the application to cardiac mechanics modeling, we simulated an idealized LV coupled to an open-loop LPN over 10 cardiac cycles (Section 3.1). The resulting PV loop spans physiological ranges of pressure and volume, shows a clear distinction of the four cardiac phases, and reaches a limit cycle. 
Critically, the isovolumic phases of the cardiac cycle are automatically captured without numerical stability issues. These results provide significant verification of the effectiveness of the coupling framework for cardiac mechanics simulations. Using the coupled LV model, we made preliminary assessments of the temporal convergence and cost of our method. In Section 3.2, we showed that the coupling algorithm is stable and converges for small timesteps. In Section 3.3, we showed that coupled simulations are moderately more expensive than uncoupled simulations (15% longer for one cardiac cycle). The additional cost comes from communication with the 0D solver and a few assembly operations each Newton iteration of the 3D solver. This relatively small increase in cost is well-warranted, given the increased physiological fidelity afforded by the coupling. In the spherical shell inflation case (Section 3.4), which represents a simplified model of an embryonic heart, we showed 3D-0D coupling can be used to traverse a limit point and discussed its relationship to the arc-length method. Finally, we simulated blood flow in a pulmonary arterial model coupled to a closed-loop LPN of the circulatory system, demonstrating that our coupling also applies to the 3D fluid - 0D fluid problem (Section 3.5). Limitations and Future WorkWith respect to the present coupling algorithm, we identify important areas for future investigation and development. We will first analyze the temporal convergence of the method, including the order of convergence as well as the effect of the 3D and 0D time-stepping schemes on overall convergence. In addition, under certain assumptions met in our case, ANM should converge quadratically [32]. We intend to verify this quantitatively. We are also interested in applying the stabilized structural mechanics formulation presented in [56] for cardiac mechanics simulations. This formulation treats both fluid and structural problems under a unified continuum modeling framework in which pressure is a primitive variable. It is highly effective for incompressible solids, which cardiac tissues are often assumed to be. Generally, we plan on applying the coupling in more complex cardiac mechanics simulations, for example with advanced myocardial constitutive models [26] or using 4-chamber anatomies [24]. This framework can be extended to couple a 3D model to a 1D model of blood circulation [57], which, unlike a 0D model, can recapitulate wave propagation phenomenon in arteries [58]. On the 3D side, the expressions will be identical. One only needs to define the effect of a flow rate boundary condition on the 1D system, and define how to extract pressure from the 1D system to communicate back to the 3D system. Finally, more recent coupling ideas similar to ANM have been summarized and proposed in [59], in which each solver is treated as a blackbox defined only through its fixed point iteration operator. These algorithms are even more modular than the present coupling scheme, while retaining the quadratic convergence properties of Newton's method. Future work may aim to apply these ideas to the 3D-0D coupling discussed here. ## 5 Conclusion In this work, a unified and modular framework for 3D-0D coupling in cardiovascular simulations is introduced. The algorithm, originally described in [19] for the 3D fluid - 0D fluid problem, is extended to solve the closely-related 3D structure - 0D fluid problem, showing that both problems can be treated uniformly within the same mathematical formulation. 
Through multiple examples, the effectiveness of the coupling algorithm is demonstrated. Notably, we construct a 0D-coupled idealized LV model that produces a physiological pressure-volume loop and effectively captures the isovolumic cardiac phases without additional numerical treatment. We also provide a new derivation using ANM, which reveals the present coupling scheme's connection to the monolithic Newton coupling approach. This hybrid coupling strategy combines the stability of monolithic approaches with the modularity and flexibility of partitioned approaches, with relatively small additional computational cost compared to uncoupled simulations. Overall, this work provides a robust, flexible, and efficient method for modeling the circulatory system in cardiovascular simulations of tissue mechanics and blood flow. ## Acknowledgements Funding for this research was provided by the National Institutes of Health grants 5R01HL159970-02 and 5R01HL129727-06. The authors wish to thank Dr. Erica Schwarz, Dr. Fannie Gerosa, and Reed Brown for their helpful discussions during this project and for their comments on this paper. ## Appendix A 4th Order Runge-Kutta scheme RK4 applied to the ODE system Eq. (25) reads \[\mathbf{k}_{1} =\mathbf{f}\Big{(}\mathbf{y}_{n},\mathbf{z}_{n},t_{n},\mathbf{q}( t_{n}),\mathbf{p}(t_{n}))\Big{)},\] \[\mathbf{k}_{2} =\mathbf{f}\Big{(}\mathbf{y}_{n}+\mathbf{k}_{1}\frac{\Delta t}{3},\mathbf{z}_{n},t_{n}+\frac{\Delta t}{3},\mathbf{q}(t_{n}+\frac{\Delta t}{3}), \mathbf{p}(t_{n}+\frac{\Delta t}{3})\Big{)},\] \[\mathbf{k}_{3} =\mathbf{f}\Big{(}\mathbf{y}_{n}-\mathbf{k}_{1}\frac{\Delta t}{3 }+\mathbf{k}_{2}\Delta t,\mathbf{z}_{n},t_{n}+\frac{2\Delta t}{3},\mathbf{q}( t_{n}+\frac{2\Delta t}{3}),\mathbf{p}(t_{n}+\frac{2\Delta t}{3})\Big{)}, \tag{71}\] \[\mathbf{k}_{4} =\mathbf{f}\Big{(}\mathbf{y}_{n}+\mathbf{k}_{1}\Delta t- \mathbf{k}_{2}\Delta t+\mathbf{k}_{3}\Delta t,\mathbf{z}_{n},t_{n}+\Delta t, \mathbf{q}(t_{n}+\Delta t),\mathbf{p}(t_{n}+\Delta t)\Big{)},\] \[\mathbf{y}_{n+1} =\mathbf{y}_{n}+\frac{\mathbf{k}_{1}+3\mathbf{k}_{2}+3\mathbf{k} _{3}+\mathbf{k}_{4}}{8}\Delta t.\] The algebraic variables are then determined by solving Eq. (26) for \(\mathbf{z}_{n+1}\) with the updated differential variables \(\mathbf{y}_{n+1}\), \[\mathbf{g}(\mathbf{y}_{n+1},\mathbf{z}_{n+1},t_{n}+\Delta t,\mathbf{q}(t_{n}+ \Delta t),\mathbf{p}(t_{n}+\Delta t))=\mathbf{0}.\] Rearranging this equation for \(\mathbf{z}_{n+1}\) yields for some function \(\tilde{\mathbf{g}}(\mathbf{y},t,\mathbf{q},\mathbf{p})\), \[\mathbf{z}_{n+1}=\tilde{\mathbf{g}}(\mathbf{y}_{n+1},t_{n}+\Delta t,\mathbf{q}( t_{n}+\Delta t),\mathbf{p}(t_{n}+\Delta t)). \tag{72}\] Note that we assume the flow and pressure boundary forcings, \(\mathbf{q}\) and \(\mathbf{p}\), are known functions of time. In the coupled problem, \(\mathbf{q}\) is composed of a prescribed uncoupled component \(\mathbf{q}_{u}\) and a coupled component \(\mathbf{Q}\), \[\mathbf{q}(t)=[\mathbf{q}_{u}(t),\mathbf{Q}(t)]\] Since \(\mathbf{q}_{u}\) is prescribed, its value is known at any time \(t\). The value of \(\mathbf{Q}\), on the other hand, is obtained from the 3D system, and its variation with time may be approximated by interpolating between its values at timestep \(n\) and \(n+1\), \[\mathbf{Q}(t_{n}+h)=\mathbf{Q}_{n}+(\mathbf{Q}_{n+1}-\mathbf{Q}_{n})\frac{h}{ \Delta t},\] where \(\mathbf{Q}_{n}\) and \(\mathbf{Q}_{n+1}\) are calculated from the 3D degrees of freedom \(\mathbf{\Phi}_{n}\) and \(\mathbf{\Phi}_{n+1}\), respectively. From Eqs. 
(71) and (72), we can identify the fixed point operator corresponding to this 0D time integration scheme as \[F^{0D,RK4}(\mathbf{w}_{n+1},\mathbf{\Phi}_{n+1};\mathbf{w}_{n}, \mathbf{\Phi}_{n}) =\begin{bmatrix}\mathbf{y}_{n+1}\\ \mathbf{z}_{n+1}\end{bmatrix}\] \[=\begin{bmatrix}\mathbf{w}_{n}+\frac{\mathbf{k}_{1}+3\mathbf{k}_{2 }+3\mathbf{k}_{3}+\mathbf{k}_{4}}{8}\Delta t\\ \tilde{\mathbf{g}}\Big{(}\mathbf{w}_{n}+\frac{\mathbf{k}_{1}+3\mathbf{k}_{2}+3 \mathbf{k}_{3}+\mathbf{k}_{4}}{8}\Delta t,t_{n}+\Delta t,\mathbf{q}(t_{n}+ \Delta t),\mathbf{p}(t_{n}+\Delta t)\Big{)}\end{bmatrix}.\] ## Appendix B Calculation of flow rate For a coupled Neumann boundary condition, the 3D model must compute a flow rate \(Q\) and send this value to the 0D domain. For 3D fluid - 0D fluid coupling (e.g., blood flow in the aorta coupled to Windkessel LPN), this is easily computed by integrating the normal component of the velocity over the coupled surface, \(Q=\int_{\Gamma}\mathbf{u}\cdot\mathbf{n}d\Gamma\). For structural simulations, we must specify the context. In cardiac mechanics, we are typically interested in modeling a chamber or chambers of the heart. In this context, we are interested in the flow rate of blood into or out of a chamber, which is identical to the rate of change of volume of the chamber. We use this interpretation to compute flow rate in the 3D structure - 0D fluid problem. Let \(\Gamma\) be a closed surface, for example a sphere. The rate of change of volume enclosed by \(\Gamma\) can be computed using the Reynolds Transport Theorem, which states that for an arbitrary scalar function \(f\) defined over a time-varying region \(\Omega(t)\) with boundary \(\Gamma(t)\) \[\frac{d}{dt}\int_{\Omega(t)}fd\Omega=\int_{\Omega(t)}\frac{\partial f}{ \partial t}d\Omega+\int_{\Gamma(t)}(\mathbf{u_{b}}\cdot\tilde{\mathbf{n}})fd \Gamma, \tag{73}\] where \(\mathbf{u_{b}}\) is the velocity of the boundary and \(\tilde{\mathbf{n}}\) is the outward unit normal of the boundary. Taking \(f=1\) and observing that \(\frac{\partial f}{\partial t}=0\), we find \[\frac{d}{dt}\int_{\Omega(t)}(1)d\Omega =\int_{\Omega(t)}0d\Omega+\int_{\Gamma(t)}(\mathbf{u_{b}}\cdot \tilde{\mathbf{n}})(1)d\Gamma,\] \[\frac{dV}{dt} =\int_{\Gamma(t)}(\mathbf{u_{b}}\cdot\tilde{\mathbf{n}})d\Gamma. \tag{74}\] That is, the rate of change of volume enclosed by a surface \(\Gamma\) is the velocity flux integral over that surface. In cardiac mechanics, it is more natural to use the inward surface normal \(\mathbf{n}=-\tilde{\mathbf{n}}\) (pointing away from the cardiac tissue and into the blood pool on the endocardial surface, for example). In addition, a positive flow rate \(Q\) out of the heart chamber is associated with a decrease in chamber volume. Thus, we may write \[Q=-\frac{dV}{dt}=\int_{\Gamma(t)}(\mathbf{u_{b}}\cdot\mathbf{n})d\Gamma. \tag{75}\] This is in fact the same integral as in blood flow simulations, except we must be careful to take the integral over the coupled surface in the current (deformed) configuration, \(\Gamma(t)\). It is important to note that Eq. (75) is valid only if \(\Gamma\) is a closed surface. In general, this is not the case, for example in the left ventricle or biventric models cut at the basal plane. For these models, in order to accurately calculate \(\frac{dV}{dt}\), it is necessary to close \(\Gamma\) with a "cap" surface. Such a capping was done in this work and slightly modifies the expressions for the 0D residual and tangent contributions (see Section 2.6). 
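To make the discrete counterpart of Eq. (75) concrete, the following is a minimal sketch (not the solver's actual implementation; the function name and the one-point quadrature per triangle are illustrative choices) of evaluating the flux integral over a closed, triangulated coupled surface in the current configuration, given current nodal coordinates and nodal velocities:
```
import numpy as np

def surface_flow_rate(nodes, tris, velocities):
    """Approximate Q = int_Gamma (u_b . n) dGamma over a closed, triangulated
    surface in the current (deformed) configuration.

    nodes:      (N, 3) array of current nodal coordinates
    tris:       (M, 3) integer connectivity, oriented so the normals follow
                the sign convention of Eq. (75)
    velocities: (N, 3) array of nodal boundary velocities u_b
    """
    q = 0.0
    for tri in tris:
        p0, p1, p2 = nodes[tri]
        n_scaled = np.cross(p1 - p0, p2 - p0)      # magnitude = 2 * triangle area
        area = 0.5 * np.linalg.norm(n_scaled)
        normal = n_scaled / (2.0 * area)
        u_centroid = velocities[tri].mean(axis=0)  # one-point (centroid) quadrature
        q += np.dot(u_centroid, normal) * area
    return q
```
For an open surface such as a ventricle cut at the basal plane, the surface would first have to be closed with a cap, as discussed above.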
## Appendix C Contribution of coupled surface to tangent matrix Here we derive Eq. (55) from Eq. (53). Eq. (53) reads \[\left[\mathbf{K}^{3D/0D}\right]^{(k)}_{n+1}=\left[\frac{\partial\mathbf{R}^{3 D}}{\partial\mathbf{P}_{n+1}}\mathbb{P}\frac{\partial F^{0D}(\mathbf{w}_{n+1}, \mathbf{Q}_{n+1})}{\partial\mathbf{Q}_{n+1}}\frac{\partial\mathbf{Q}_{n+1}}{ \partial\mathbf{U}_{n+1}}\frac{\partial\mathbf{U}_{n+1}}{\partial\mathbf{ \Phi}_{n+1}}\right]^{(k)}_{n+1}.\] We consider each term individually. * The residual \(\mathbf{R}^{3D}\) depends on the coupling pressures \(\mathbf{P}_{n+1}\) through an equation like Eq. (49), \[R^{3D}_{Ai}=\text{other terms}+\sum_{j=1}^{n^{eBC}}\int_{\Gamma^{(j)}_{k_{c}}}N _{A}P_{n+1,j}n_{i}d\Gamma.\] Thus, \[\frac{\partial(R^{3D}_{Ai})}{\partial P_{n+1,j}}=\int_{\Gamma^{(j)}_{k_{c}}}N _{A}n_{i}d\Gamma.\] (76) Note that if the 3D residual is composed of momentum and continuity components (e.g., for Navier-Stokes), then this term is multiplied by \(\mathbb{I}_{m}=\begin{bmatrix}\mathbf{1}\\ \mathbf{0}\end{bmatrix}\), which is simply a vector with 1s corresponding to momentum rows and 0s corresponding to continuity rows. This places the tangent contribution in the momentum block row. * In Section 2.4.3, we described how \(\mathbb{P}\frac{\partial F^{0D}(\mathbf{w}_{n+1},\mathbf{Q}_{n+1})}{ \partial\mathbf{Q}_{n+1}}\) is computed in a finite difference manner (Eq. (54)). This term is denoted by the resistance-like matrix \(M_{ij}\). * From Eq. (32), we have \[Q_{n+1,i}=\sum_{A}\int_{\Gamma_{h_{e}}^{(i)}}N_{A}U_{n+1,Ak}n_{k}d\Gamma.\] Thus, \[\frac{\partial Q_{n+1,i}}{\partial U_{n+1,Ak}}=\int_{\Gamma_{h_{e}}^{(i)}}N_{A}n _{k}d\Gamma.\] (77) * Finally, we deal with the term \(\frac{\partial\mathbf{U}_{n+1}}{\partial\mathbf{\Phi}_{n+1}}\). This term depends on the time discretization scheme. In our case, we use the generalized-\(\alpha\) method and choose the nodal accelerations \(\dot{\mathbf{U}}\) as our 3D unknowns (along with nodal pressures \(\mathbf{\Pi}\) if the 3D is a fluid). The nodal velocities \(\mathbf{U}\) are related to the nodal accelerations \(\dot{\mathbf{U}}\) by Eq. (33) \[\mathbf{U}_{n+1}=\mathbf{U}_{n}+\Delta t\dot{\mathbf{U}}_{n}+\gamma\Delta t( \dot{\mathbf{U}}_{n+1}-\dot{\mathbf{U}}_{n}).\] Thus, \[\frac{\partial U_{n+1,Ai}}{\partial\Phi_{n+1,Bj}}=\frac{\partial U_{n+1,Ai}} {\partial\dot{U}_{n+1,Bj}}=\gamma\Delta t\delta_{AB}\delta_{ij}.\] (78) Note that if the 3D degrees of freedom contain both acceleration and pressure components (e.g., for Navier-Stokes), then this term is multiplied by \(\mathbb{I}_{a}=\begin{bmatrix}\mathbf{1}\\ \mathbf{0}\end{bmatrix}\), which is simply a vector with 1s corresponding to acceleration rows and 0s corresponding to pressure rows. This places the tangent contribution in the acceleration block column. Combining these terms yields the tangent matrix contribution Eq. (55). In addition to this tangent matrix contribution, with a follower pressure load there should be additional tangent contribution due to the fact that \(\Gamma_{h_{e}}^{(i)}\) and \(n_{i}\) change with the deformation. However, this term is no different than the required term for an uncoupled surface and is unrelated to our coupling method, so we do not include it here. ## Appendix D Idealized LV simulation parameters We list the parameter values for the idealized LV coupled to open-loop LPN simulation, shown in Fig. 6. Most values are taken from [45]. 
| Name | Parameter | Value | Unit |
| --- | --- | --- | --- |
| _General mechanical_ | | | |
| Tissue Density | \(\rho_{0}\) | \(10^{3}\) | \(\mathrm{kg/m^{3}}\) |
| Viscosity | \(\eta\) | \(100\) | \(\mathrm{Pa\cdot s}\) |
| Volumetric Penalty | \(\kappa\) | \(10^{6}\) | \(\mathrm{Pa}\) |
| _Active stress_ [45] | | | |
| Contractility | \(\sigma_{0}\) | \(8\times 10^{4}\) | \(\mathrm{Pa}\) |
| Activation rate | \(\alpha_{max}\) | \(+5\) | \(1/\mathrm{s}\) |
| Deactivation rate | \(\alpha_{min}\) | \(-30\) | \(1/\mathrm{s}\) |
| Ventricular systole | \(t_{sys,v}\) | \(0.143\) | \(\mathrm{s}\) |
| Ventricular diastole | \(t_{dias,v}\) | \(0.484\) | \(\mathrm{s}\) |
| Steepness | \(\gamma\) | \(0.005\) | \(\mathrm{s}\) |
| _Passive myocardial tissue (HO model)_ | | | |
| Matrix | \(a\) | \(59.0\) | \(\mathrm{Pa}\) |
| Fiber | \(a_{f}\) | \(18.472\times 10^{3}\) | \(\mathrm{Pa}\) |
| | \(b_{f}\) | \(16.026\) | \(-\) |
| Sheet | \(a_{s}\) | \(2.481\times 10^{3}\) | \(\mathrm{Pa}\) |
| | \(b_{s}\) | \(11.12\) | \(-\) |
| Fiber-sheet | \(a_{fs}\) | \(216\) | \(\mathrm{Pa}\) |
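For convenience, the parameter values listed above can be transcribed into a plain configuration dictionary (SI units). This is only an illustrative transcription for scripting purposes; the key names are made up here and are not an input format of any particular solver.
```
# Illustrative transcription of the idealized LV parameters (SI units).
idealized_lv_params = {
    "general": {"rho0": 1.0e3, "eta": 100.0, "kappa": 1.0e6},
    "active_stress": {
        "sigma0": 8.0e4, "alpha_max": 5.0, "alpha_min": -30.0,
        "t_sys_v": 0.143, "t_dias_v": 0.484, "gamma": 0.005,
    },
    "passive_HO": {
        "a": 59.0, "a_f": 18.472e3, "b_f": 16.026,
        "a_s": 2.481e3, "b_s": 11.12, "a_fs": 216.0,
    },
}
```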
2307.06024
balance -- a Python package for balancing biased data samples
Surveys are an important research tool, providing unique measurements on subjective experiences such as sentiment and opinions that cannot be measured by other means. However, because survey data is collected from a self-selected group of participants, directly inferring insights from it to a population of interest, or training ML models on such data, can lead to erroneous estimates or under-performing models. In this paper we present balance, an open-source Python package by Meta, offering a simple workflow for analyzing and adjusting biased data samples with respect to a population of interest. The balance workflow includes three steps: understanding the initial bias in the data relative to a target we would like to infer, adjusting the data to correct for the bias by producing weights for each unit in the sample based on propensity scores, and evaluating the final biases and the variance inflation after applying the fitted weights. The package provides a simple API that can be used by researchers and data scientists from a wide range of fields on a variety of data. The paper provides the relevant context, methodological background, and presents the package's API.
Tal Sarig, Tal Galili, Roee Eilat
2023-07-12T09:09:49Z
http://arxiv.org/abs/2307.06024v2
# balance - a Python package for balancing biased data samples ###### Abstract Surveys are an important research tool, providing unique measurements on subjective experiences such as sentiment and opinions that cannot be measured by other means. However, because survey data is collected from a self-selected group of participants, directly inferring insights from it to a population of interest, or training ML models on such data, can lead to erroneous estimates or under-performing models. In this paper we present balance, an open-source Python package by Meta, offering a simple workflow for analyzing and adjusting biased data samples with respect to a population of interest. The balance workflow includes three steps: **understanding** the initial bias in the data relative to a target we would like to infer, **adjusting** the data to correct for the bias by producing weights for each unit in the sample based on propensity scores, and **evaluating** the final biases and the variance inflation after applying the fitted weights. The package provides a simple API that can be used by researchers and data scientists from a wide range of fields on a variety of data. The paper provides the relevant context, methodological background, and presents the package's API. ###### Contents * 1 Introduction * 2 Related Work * 3 Methodological Background * 3.1 The Total Survey Error framework * 3.2 Definitions and notations * 3.3 Estimation of the survey weights * 3.3.1 Post-stratification * 3.3.2 Raking * 3.3.3 Inverse Propensity score Weighting (IPW) * 3.3.4 Covariate Balancing Propensity Score (CBPS) * 3.4 Evaluation of survey weights * 3.4.1 Overview * 3.4.2 Visualizing Distributions * 3.4.3 Diagnostics for the covariates using ASMD * 3.4.4 Diagnostics for the weights * 3.4.5 Diagnostics for the outcome * 4 The balance workflow * 4.1 The workflow * 4.2 An end-to-end example * 4.2.1 Understanding the initial bias * 4.2.2 Fitting survey weights * 4.2.3 Evaluating the Results * 4.3 How does balance implement the adjustment? * 5 Future directions * Appendices * A Acknowledgments * B Limitations of the ASMD * C Kish's design effect * D Estimating the variance of the weighted mean Introduction Surveys play an important role in the study of social phenomena across research fields and industries. From their traditional usage by statistics bureaus in producing population estimates, through the long history of public opinion surveys in political science, to more recent applications like studying user experience in online services and even playing part in epidemiological studies [1]. The widespread use of surveys, and their unique role in providing measurements on subjective indicators such as sentiment and opinions, makes the field abundant with methodological research. A central challenge in designing and analyzing survey data stems from bias due to sampling limitations and non-response. Since the data is collected from a self-selected group of participants, directly inferring insights or training ML models on such data can result in erroneous estimates or under-performing models. An insightful theoretical framework for the sources of error present in survey data is given in the "Total Survey Error" framework [2]. While the sources might be different, similar manifestations of bias are often present in observational studies when comparing treatment groups, and in any data produced through self-selection processes. 
The field of survey statistics offers methods for mitigating bias in samples, at least partially, by relying on auxiliary information (i.e., "covariates" or "features"). When such information is available for all items in the sample as well as for the population from which it was sampled, it can be used to create weights. Under some assumptions on the relation between the auxiliary information, the response mechanism, and the survey responses, applying the weights to the data will produce less biased estimates or models. Different approaches were proposed for the task, from simple post-stratification [3] to methods more suitable for high dimensional covariates space such as raking [4, 5, 6], inverse propensity weighting [7, 8, 9], covariate balancing methods [10], outcome regression based approaches [11], and others. Weighting methods have been shown to be effective in reducing bias of survey estimates [12]. Following methodological advancements in survey statistics, statistical software packages were developed to allow researchers and practitioners to apply these methodologies to survey data and observational data. Most software packages for this aim have R implementations, and other implementations in environments such as SPSS, stata, SAS exist as well. In recent years a rich ecosystem of data science software has been developed for Python, and its usage has become prevalent among researchers and data scientists. This shift created a need for a reliable Python package for working with survey data, and more generally with biased data sets. Here we introduce balance - a Python package for balancing biased data samples. balance offers a simple easy-to-use framework for weighting data and evaluating its biases. The package is designed to provide best practices for weights fitting and offers several modeling approaches. The methodology in balance can support ongoing automated survey data processing, as well as ad-hoc analyses of survey data. The main workflow API of balance includes three steps: (1) understanding the initial bias in the data relative to a target population as observed by the differences in covariate distribution (2) adjusting the data to correct for the bias by producing weights for each unit in the sample based on propensity scores, and (3) evaluating the final biases and the variance inflation after applying the fitted weights. The adjustment step provides a few alternatives for the researcher to choose from: Inverse propensity weighting using logistic regression model based on LASSO (Least Absolute Shrinkage and Selection Operator [13]), Covariate Balancing Propensity Scores [10], Raking, and post-stratification. The focus is on providing a simple to use API, based on Pandas's DataFrame structure, which can be used by researchers from a wide spectrum of fields. In this paper we describe the balance workflow in more detail and provide guidance on how to implement it using the package. We include details on methods, assumptions, and model choices made in the package. The methodological background part of the paper is an accessible review of the theoretical frameworks, methods, and practices often used in survey statistics. We invite readers new to the field to use it as a short and effective introduction. The rest of this paper is structured as follows. We discuss related work in Section 2, focusing on software packages available in the R and Python ecosystems for survey data analysis and related use cases. 
In Section 3 we provide details on the statistical background that guided the implementation of the package, including theoretical frameworks, estimation methods, and diagnostic tools. In Section 4 we present the balance workflow and provide an end-to-end walkthrough using code snippets that are applied to simulated data. We conclude with a discussion on future directions for the package in Section 5. ## 2 Related Work The open-source ecosystem offers a variety of packages for weighting biased data. This section gives a brief survey of prominent tools in this space and describes some of their capabilities. We find the R ecosystem to be the most developed in terms of packages for survey analysis. The Python ecosystem has some packages for survey statistics. It also has several well-developed packages for causal inference, which employ similar models (e.g., propensity score models, outcome models, etc.). While various R packages exist with similar capabilities to what is available in balance, no Python package (that we are aware of) provides a comprehensive, coherent end-to-end solution for researchers. The R ecosystem is exceptionally rich and diverse when it comes to survey statistics. The most comprehensive review can be seen in the CRAN task view of "Official Statistics & Survey Statistics" [14]. To date, it includes over 130 packages - ranging from the classical survey package [15] to more niche packages. Similarly, the CRAN task view of "Causal Inference" [16] also includes over 130 packages that offer related methods. A short review of the current state of R packages can be found in the PSweight R package [17], which compares 9 R packages that implement propensity score weighting with discrete treatments. For survey weight diagnostics, the cobalt package [18] offers many options, including balance tables and plots for covariates of multiple groups. This package includes various capabilities that could inspire future development of balance. For Python, the ipfn package [19] specializes in implementing fast iterative proportional fitting (raking). This package is utilized in balance and used as the back-end for the raking implementation we rely on. The quantipy3 package [20] is designed to support data processing, analysis and reporting for survey data using pandas and numpy. It supports native handling of special data types like multiple choice variables, statistical analysis using case or observation weights, DataFrame metadata and different data exports. quantipy3 seems to be the most similar to what balance tries to achieve but lacks many of the capabilities balance has in all stages of the workflow. The samplics package offers comprehensive solutions for dealing with complex sampling designs [21], with various overlapping and non-overlapping capabilities between this package and balance. The package offers tooling for random selection techniques used to draw a sample from a population, and sample size calculations. It also provides methods for weight adjustments using post-stratification or calibration. Additional capabilities in samplics include functions for estimation of statistics and their variance (beyond just the Taylor linearization estimation in balance). These include bootstrap, balanced repeated replication, and jackknife. Other packages we found seem to be only lightly maintained, and do not provide additional capabilities of relevance to our use-case. These include PySurvey [22], Surveyweights [23], pscore_match [24], pymatch [25], and causal_nets [26].
Stepping aside from survey statistics, several Python packages offer tools for causal inference that can be repurposed for adjusting biased samples. The DoWhy package [27], developed by Microsoft, is a well-maintained package focused on causal inference. It models a given problem as a causal graph to help explore assumptions clearly, estimate causal effects, and test assumptions' validity for robustness. It offers a variety of methods for estimation including Propensity-based Stratification, Propensity Score Matching, and Inverse Propensity Weighting (similar to balance). It also offers outcome-based models (currently not implemented in balance) using Linear Regression or Generalized Linear Models, and supports other methods such as Instrumental Variable methods. The package emphasizes graphical interpretation of causal inference. It also provides various refutation methods (dummy outcome, simulated outcome, etc.) and basic visualizations (e.g., bar plots of treatment and control). The Empirical Calibration package [28], developed by Google, provides a method to compute empirical calibration weights using convex optimization. This approach balances out the marginal distribution of covariates directly while reducing the inflation of variance. This is similar to performing raking while trying to keep the weights as equal as possible. It offers a bias correction solution that resembles the raking and CBPS methods that are implemented in the balance package. The causalml package [29] provides a set of modeling and causal inference methods for analyzing observational data using machine learning algorithms. It provides tools to estimate the Conditional Average Treatment Effect (CATE) and the Individual Treatment Effect (ITE). This package offers a wide variety of ML algorithms, including tree-based algorithms, meta-learner algorithms, instrumental variables algorithms, and neural-network-based algorithms. While these packages are comprehensive, using them for the survey-focused data-balancing workflow that balance is optimized for still involves additional overhead and complexity. ## 3 Methodological Background Before diving into the workflow and the implementation details of _balance_, we provide a brief description of the methodological background concerning the representation error problem in surveys, the estimation of weights, and tools to evaluate survey weights. ### The Total Survey Error framework The "Total Survey Error" framework [2] provides a theoretical framework to describe statistical properties of surveys. It is used as a conceptual tool for researchers when designing and analyzing surveys to minimize estimation errors and biases. While the research goal is to estimate a population parameter, such as an average or a ratio, surveys only provide a glimpse of this parameter through the survey responses and are subject to a range of sources of statistical error, as described by the "Total Survey Error" concept. Figure 1: A flow diagram of "Total Survey Error", illustrating the different components of surveys' representation error. The "Total Survey Error" has two main components: representation error and measurement error [30]. Since neither can be overcome by increasing the sample size, researchers should be aware of these as early as the survey design stage. _Measurement error_ deals with potential biases introduced to the estimation due to the instrument of measurement.
It includes questions about the validity of the responses, the phrasing of the questions and how it affects what we are trying to estimate, and similar questions related to whether we measure the exact quantity we aim for. balance is focused on addressing and correcting the representation errors in this framework, and hence for the rest of the section we will focus on the representation part of the framework. The _representation error_ deals with how to infer from a subset of people to the whole population we would like to learn about, referred to as the _target population_. The magnitude of the error depends on the group of respondents to the survey and on how similar or different this group is from the target population. Figure 1 shows the different sources of representation error and illustrates a breakdown of the difference between the group of respondents and the target population. The first error we consider is the _coverage error_. Its driver is the misalignment between who can be sampled for the survey (the "sampling frame") and the target population. In today's world, where many, if not most, surveys are conducted through the internet, a common sampling frame is people with access to the internet. Since this sampling frame may not be representative of the whole population, caution should be taken when conducting surveys over the web. The canonical example of a sampling frame that does not fully cover the target population is the "Literary Digest" 1936 poll [31]. That year, the Literary Digest magazine ran a poll to predict the result of the U.S. election. Franklin Delano Roosevelt was the Democratic candidate and the Republican candidate was Governor Alfred Landon of Kansas. The magazine predicted a decisive victory for Landon with a poll that was based on roughly 2.4 million voters, but, as history tells us, Roosevelt won 62% of the votes. Even though the poll sampled 10 million people, the sampling frame was skewed. The sample included magazine readers, and people from phone lists and club membership lists. However, since people of lower socioeconomic status were disproportionately not part of the magazine's audience at the time, the poll missed a significant and distinct portion of the U.S. voting population. Due to setting a sampling frame that ignored the target population definition, the magazine's coverage error was large and led to a mis-prediction of the election results. Once the sampling frame is set, the researcher samples a certain number of people from the frame and asks them to reply to the survey. This is the _sample population_, or the group of people that have a "real" opportunity to reply to the survey. When doing so, the researcher reveals another gap where error can occur due to sampling, which is the _sampling error_. This might be small if we are able to sample either completely at random from the sampling frame or by designing the sampling with the right sampling probabilities, but can be significant given wrong assumptions on the structure of the sampling frame or a complex sampling mechanism. This error can be reduced as we increase the sample size (and will be 0 if the sample population is the same as the sampling frame), and is the one captured by the margin of error often reported with survey results. Once the researcher has sent out the invitation to fill out the survey to the sample population, most often only a portion will choose to take part in the survey. These are the respondents, or the _observed sample_.
This self selection behavior causes another error component which is the _non-response error_. This error can be substantial depending on how the survey is conducted, the survey questions, and other issues related to the instrument. The percent of non-response can give us some intuition of how large this error is but the actual size of the bias depends only on the properties of the people who chose to respond. In the case of the Literary Digest poll the response rate was only 24%. In fact, research suggests that the primary source of the error in the poll originated in the non-response bias. Specifically, people who strongly disliked Roosevelt were more willing to take the time to mail back their response [32, 33]. balance aims to correct for all types of representation errors at once (see Figure 1). Using additional assumptions, as described in the next section, we are able to make the group of respondents, i.e. the observed sample, similar in properties to the target population and hence overcome some parts of the representation error. However, it is important to note that there are cases where it is impossible to fully correct the representation error. Such cases occur when the assumptions on the missingness are not satisfied. The simplest example of such case is when there is a substantial coverage error for which we cannot overcome using auxiliary data. For example, if we want to learn about North America's population but survey people only from The United States. Even given lots of auxiliary information we will likely not be able to adjust the sample such that it correctly represents Canada's population as well. ### Definitions and notations With the Total Survey Error framework in mind, we will now set definitions to be used throughout the paper. Let \(\mathcal{S}\) denote a sample of respondents to a survey consisting of \(n\) respondents (sometimes referred to as the sample), and let \(\mathcal{T}\) represent a target population with \(N\) units. Furthermore, we assume we have some auxiliary data on all units in sample and target population, represented by a covariates (or features) vector attached, \(X_{i}\). Note that we assume that the same covariates are available for the sample and the target, otherwise we ignore the non-overlapping covariates. This framework is applicable when we have the auxiliary data at the unit level for the entire population or for a representative random sample from it. For example, when we sample from a list of customers for which we have auxiliary information available, or in cases when a reference survey is available (a survey with better sampling properties to be used for correcting biases [34]). Another common use-case is when census data of the population is available. We define \(R\) to be an indicator for inclusion in the respondents group1, i.e. \(R_{i}=1\) if \(i\in\mathcal{S}\) and \(R_{i}=0\) if \(i\notin\mathcal{S}\). Furthermore, we define \(y\) to be the answer to one item of the survey. The answer can be numeric or discrete, and is observed only for \(\mathcal{S}\). In our setup, we think about \(y\) as a constant and the random variable, later considered for statistical properties, is \(R\). Footnote 1: For estimation in balance, we think of the target population as a reference group, and hence units of the target population are distinct from the units of the observed sample. Our objective is to estimate a certain parameter of the population. The simplest example is estimating the mean of one item of the survey, i.e. the mean of \(y\). 
In this case, a natural estimate of \(\bar{y}=\frac{1}{N}\sum_{i\in\mathcal{T}}y_{i}\) is the sample mean \(\bar{y}_{\mathcal{S}}=\frac{1}{n}\sum_{i\in\mathcal{S}}y_{i}\). However, due to the non-random sampling of \(\mathcal{S}\) from the population \(\mathcal{T}\), the proposed estimate will be biased, such that \(\mathbb{E}\left[\bar{y}-\bar{y}_{\mathcal{S}}\right]\neq 0\). ### Estimation of the survey weights Weights are a common way to overcome survey error, and are essential when estimating a population parameter [12]. This is generally done by incorporating the weights, \(w_{i}\) where \(i\in\mathcal{S}\), into the parameter estimation procedure2. An example is the case of estimating a parameter of the population using a weighted mean of the sample: Footnote 2: In the balance package, we choose to scale the weights to sum to the population size. This way, each sample unit weight represents the number of corresponding units from the population. \[\bar{y}_{w}=\frac{\sum_{i=1}^{n}w_{i}y_{i}}{\sum_{i=1}^{n}w_{i}} \tag{1}\] Further details about using the weights for estimations are described in section 3.4.5. One of the advantages of using weights, unlike alternative methods, is the flexibility it gives in estimation. The weights depend only on the group of respondents and not on the outcome itself, and hence give the researcher the flexibility to use the same set of weights for multiple outcomes, combine multiple outcomes into one parameter, or consider the outcome only in specific cuts of the population. We typically employ weights to adjust from the observed sample to match the target population, so as to overcome the representation bias of the sample. In scenarios where the sampling procedure is set by design and therefore known, we define the inverse of the sampling (or selection) probabilities as _design weights3_. However, to overcome the full gap between the respondents and the target population we need to estimate the weights according to the actual realization of the observed sample. When estimated against the complete target population, the weights can help address the non-response error, the by-design sampling error, the "unknown" sampling error and the coverage error. Footnote 3: The \(\mathcal{S}\)-sample weights are defined as \(\mathcal{S}=\{\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\}\). A few assumptions are required to utilize the estimated weights for valid estimations of the parameter of interest and mitigate the representation bias. Under these assumptions, the estimation of the weights relies on the auxiliary data \(X_{i}\). The first assumption is the Missing At Random assumption (MAR) [35]. The MAR assumption states that the response mechanism is independent of the survey responses conditional on the auxiliary data. In other words, \(Y\perp\!\!\!\perp R\mid X\), which means that given the covariates the likelihood of a person to respond to the survey doesn't depend on their answer. This assumption is also known as the ignorability assumption or conditional unconfoundedness in the causal inference literature [36]. It is worth noting that recent research (such as [37]) proposes alternative approaches to address missing values created by design in survey analysis. The second assumption is positivity, \(0<P(R_{i}=1|X_{i})<1\), for all units in \(\mathcal{S}\) and \(\mathcal{T}\). \(0<P(R_{i}=1|X_{i})\) means that given the auxiliary data, every unit of the target population has a non-zero probability of being included in the sample.
In other words, in a counterfactual world, any unit \(i\in\mathcal{T}\) could have participated in the survey given their covariates. Conversely, we also assume \(P(R_{i}=1|X_{i})<1\), implying that every unit in the observed sample is also in the target population. The combination of the MAR assumption and the positivity assumption is often known as the "strong ignorability" assumption [36]. Given these assumptions, we are now left with the question of how to estimate the weights. The balance package currently supports four different methods for estimating the weights: post-stratification, raking, Inverse Propensity score Weighting (IPW or IPSW), and Covariate Balancing Propensity Score (CBPS). Next, we provide more background about the estimation process in each method and describe the advantages and limitations of each. #### 3.3.1 Post-stratification Post-stratification [3] is one of the most common weighting approaches in survey statistics. It originates from a stratified sample or probability sampling, where the population is divided into sub-populations (strata) and a sample is independently drawn from each. However, in post-stratification, the stratification is done after the sample has been selected. This is done to overcome errors originating in mechanisms outside the sampling design, such as non-response. The idea behind post-stratification is straightforward. For each cell (stratum) in the population, calculate the percentage it represents of the total population. Then fit weights that adjust each stratum in the sample to have the same proportion as the corresponding stratum in the population. Let \(\mathcal{H}\) be the group of items which represent some stratum in the population, and \(P_{\mathcal{H}}\) represent the proportion of this stratum in the target population \(\mathcal{T}\), i.e. \(P_{\mathcal{H}}=\frac{|\mathcal{H}|}{N}\). Let \(n_{\mathcal{H}}\) be the number of respondents from stratum \(\mathcal{H}\) in the observed sample. We also define the "inflation factor" as \(I=N/n\), i.e. the factor indicating by how much we need to multiply the total observed sample size to get to the total target population size. Consequently, the post-stratification weight for each unit \(i\) from stratum \(\mathcal{H}\) in the observed sample is: \[w_{i}=P_{\mathcal{H}}\frac{n}{n_{\mathcal{H}}}\cdot I\quad\forall i\in\mathcal{H} \tag{2}\] Note that the multiplication by \(I\) is a result of the arbitrary choice to scale the weights to the population size, and could be omitted. The goal of post-stratification is to have the sample exactly match the joint distribution of the auxiliary data of the target population. Hence it requires the researcher to know the joint distribution of the covariates to weight on. This level of resolution for the target population may not always be available. When only marginal distributions of the covariates are available, raking might serve as an alternative method to estimate the weights. Raking is described in sub-section 3.3.2. Another limitation of post-stratification is the number of covariates that can be used to correct the biases, due to the need to have enough respondents in each of the cells. Having a cell with very few respondents could easily lead to a handful of respondents that receive very large weights - which leads to inflated variance of estimates based on such weights. Furthermore, when continuous variables are required for weighting, the researcher must decide on the thresholds used to bucket the variables.
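As a concrete illustration of Eq. (2), the following is a minimal pandas sketch of computing post-stratification weights; the function name is made up for this example and this is not the balance implementation:
```
import pandas as pd

def poststratification_weights(sample_df, target_df, strata_cols):
    # Sketch of Eq. (2): w_i = P_H * (n / n_H) * I with I = N / n,
    # so the weights sum to the target population size N.
    n, N = len(sample_df), len(target_df)
    target_counts = target_df.groupby(strata_cols).size().rename("N_H").reset_index()
    sample_counts = sample_df.groupby(strata_cols).size().rename("n_H").reset_index()
    cells = target_counts.merge(sample_counts, on=strata_cols, how="inner")
    # P_H * (n / n_H) * (N / n) simplifies to N_H / n_H for each stratum H.
    cells["w"] = cells["N_H"] / cells["n_H"]
    # Attach each unit's stratum weight (units in strata absent from the target get NaN).
    return sample_df.merge(cells, on=strata_cols, how="left")["w"].to_numpy()
```
Each respondent's weight is simply the number of population units in its cell divided by the number of respondents in that cell, which makes explicit why sparsely populated cells lead to extreme weights.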
A more general approach is the inverse propensity score weighting described in sub-section 3.3.3. #### 3.3.2 Raking _Raking_ [4, 5, 6], also known as the Iterative Proportional Fitting procedure (IPF), is a method that fits the sample data to a target population using only the marginal distributions of the population's covariates. Typically, we have access to these marginal distributions but often not to their joint distribution. Since raking weights do not represent the joint distribution, this can be thought of as a type of regularized model. This approach helps to avoid over-fitting small cells as in post-stratification and instead focuses only on the marginals [38]. Raking essentially applies post-stratification sequentially over all covariates using only the marginal distributions. This is done repeatedly until convergence is achieved. If they exist, the design weights of the sample are used as the starting point of the algorithm. For example, we may have the marginal distribution of gender, age, and education. Raking would first adjust the weights to match the gender distribution and then take these weights as input to adjust for age, and then for education. It would then adjust again to gender and then again to age, and so forth until it converges. This process will repeat until one of three stopping criteria is met: (1) we reached a pre-defined number of iterations, (2) the maximum difference in proportions between sample and target marginal distribution on any covariate is smaller than a preset convergence rate, or (3) the weights have converged and the change from one iteration to another is smaller than a preset rate tolerance parameter. The resulting weights will be close to the marginal distribution of the population covariates. However, one cannot assume that the weighted sample joint distribution is the same as the joint distribution in the target population. Hence, if one wants to infer only for a sub-group of the population (such as young adults only), it is less recommended to use raking weights, and, if possible, one should prefer a method that takes into account the joint distribution of the covariates if such data exists. Similar to post-stratification, raking is limited by the number of covariates that can be included in the model, due to the need to have enough respondents in each margin cell. In addition, raking may be sensitive to the order in which covariates are adjusted for, which may lead to under-correction of some covariates. #### 3.3.3 Inverse Propensity score Weighting (IPW) A natural expansion to post-stratification and raking is inverse propensity score weighting, which can be viewed as a continuous extension of post-stratification. The _Propensity Score_ is defined as the conditional probability of being part of the observed sample given the covariates: \[p(X):=Pr(R=1|X) \tag{3}\] It was first suggested by Rosenbaum and Rubin [35, 36] as a method to perform matching for causal effect estimation in observational studies, and was later adopted for weighting survey data [7, 8, 9]. Rosenbaum and Rubin [36] showed that the assumptions of "strong ignorability" (unconfoundedness: \(Y\perp\!\!\!\perp R\mid X\) and positivity: \(0<P(R_{i}=1|X_{i})<1\)) imply that \(Y\perp\!\!\!\perp R\mid p(X)\), and that \(p(X)\) is the coarsest balancing score (a score \(B(X)\) that satisfies \(Y\perp\!\!\!\perp R\mid B(X)\)). This means that the propensity score is an inexpensive way, in terms of dimension, to estimate the selection probabilities.
Hence, in the spirit of the Horvitz-Thompson estimator [39] of using the inverse selection probabilities as weights, the inverse of the propensity score was suggested as a weighting procedure to adjust for non-response bias [8]. The estimation of the propensity scores can be done with any standard classification tool, such as logistic regression, decision trees and random forests (such as in [40]), or others. The choice of the model depends on the researcher's assumptions regarding the parametric model of the non-response and the number and types of features used. In balance, we chose to implement the estimation of the propensity scores through a (regularized) logistic regression. The logistic regression model assumes a linear relation between the covariates and the log odds, of the form: \[\log(\frac{p_{i}}{1-p_{i}})=\beta^{T}X_{i} \tag{4}\] Once the propensity scores are estimated, the weight of unit \(i\) is calculated by \(w_{i}=\frac{1-\hat{p_{i}}}{\hat{p_{i}}}\). This is because we define the target population as a reference group and we don't assume the target doesn't include the observed sample (i.e. we don't exclude units from the target based on their appearance in the sample). In this case, borrowing concepts from the causal inference literature [41], the estimation we care about is only the estimation of the average treatment effect for the "control" ("untreated") group (the target population), and hence we use \(\frac{1-\hat{p_{i}}}{\hat{p_{i}}}\) as the weights. One challenge when including many covariates is that the estimation of the propensity scores (and hence, the weights) can have a high variance, which may lead to unnecessary inflation of the variance of the survey estimates4 [42]. In balance we try to mitigate this by applying regularization to the logistic model using LASSO (Least Absolute Shrinkage and Selection Operator) [43]. This either excludes or reduces the magnitude of the covariates' coefficients that are not predictive of the response mechanism in the propensity model. This helps to minimize the variance of the estimated weights, at the potential expense of some consistent (hopefully small) bias in their estimated values. However, this process doesn't exclude covariates that are uncorrelated with the survey outcome itself. These should be excluded by the researcher in order to avoid variance inflation [42]. Another protective measure against variance inflation and extreme weights is weight trimming. balance offers automatic trimming; for details see subsection 4.3. Another weakness of inverse propensity score weighting is that it may strongly depend on the specification of the model of the propensity scores, as shown in a simulation study in [10]. Imai and Ratkovic [10] have suggested the method of Covariate Balancing Propensity Score (CBPS) (described in the next subsection) to overcome this issue. Fitting tree-based methods for the propensity scores has also been shown to be a good alternative [44, 40]. #### 3.3.4 Covariate Balancing Propensity Score (CBPS) Covariate Balancing Propensity Score (CBPS), suggested by Imai and Ratkovic [10], is a method to estimate the propensity score in a way that also results in maximizing the covariate balance. The method is preferable in cases of misspecification of the propensity score model, which may lead to a bias in the estimated weights (and consequently, the estimated survey statistic). CBPS is described in detail in [10] and implemented in the R package [45].
We give here a short summary of the method for completeness of the estimation methods section. The CBPS method is an expansion of the maximization problem of logistic regression. The propensity score of the logistic regression model is modeled by: \[p_{\beta}(X_{i})=\frac{\exp(\beta^{T}X_{i})}{1+\exp(\beta^{T}X_{i})}\quad\forall i\in\mathcal{S},\mathcal{T} \tag{5}\] By the maximum-likelihood approach, \(\beta\) is estimated by maximizing the log-likelihood, which results in: \[\hat{\beta}_{MLE}=\arg\max_{\beta}\sum_{i\in\mathcal{S}}\log(p_{\beta}(X_{i}))+\sum_{i\in\mathcal{T}}\log(1-p_{\beta}(X_{i}))\] At the maximum of the log-likelihood, \(\beta\) satisfies the first-order condition: \[\frac{1}{n}\left[\sum_{i\in\mathcal{S}}\frac{p^{\prime}_{\beta}(X_{i})}{p_{\beta}(X_{i})}+\sum_{i\in\mathcal{T}}\frac{-p^{\prime}_{\beta}(X_{i})}{1-p_{\beta}(X_{i})}\right]=0\] where \(p^{\prime}_{\beta}(X_{i})\) is the derivative of \(p_{\beta}\) with respect to \(\beta^{T}\). This condition can be viewed as a condition that balances a certain function of the covariates, in this case the derivative of the propensity score \(p^{\prime}_{\beta}(X_{i})\). Generally, one can expand the above to hold for any function \(f\) of the covariates \(X\) (\(f(X)\)), depending on the researcher's goals and assumptions: \[\mathbb{E}\left\{\sum_{i\in\mathcal{S}}\frac{f(X_{i})}{p_{\beta}(X_{i})}-\sum_{i\in\mathcal{T}}\frac{f(X_{i})}{1-p_{\beta}(X_{i})}\right\}=0 \tag{6}\] The CBPS method chooses \(f(X)=X\) as the balancing function \(f\) in order to balance the first moment of each covariate in addition to the derivative of the propensity score parametric model. The estimation of the propensity score is then done using the Generalized Method of Moments (GMM) [46] and is described in [10]. ### Evaluation of survey weights #### 3.4.1 Overview As mentioned, survey weights are essential to improve the accuracy of survey estimates, but their reliability and validity hinge on several assumptions and modeling decisions. Survey weights are valuable when: (a) the non-response pattern is sufficiently captured by the measurable covariates, (b) the covariates are accurately represented in the fitted propensity score model, ensuring that the weighted distribution of covariates in the sample closely resembles that in the target population, and (c) the survey weights correlate with the outcome of interest to an extent that justifies the increased variance resulting from the weights [47, 42]. To assess the degree to which survey data empirically complies with the above criteria, diagnostic measures can be applied to each of the main elements: covariates, weights, and outcome. Both covariates and outcomes can be checked before and after applying the weights, allowing for a comprehensive assessment of the weights' influence on the data. Such evaluations help confirm whether the weights have successfully enhanced the representativeness of the sample data in a way that also substantially influences the outcome of interest. Additionally, various diagnostics can be performed on the weights themselves to understand if there are extreme weights which dominate the sample, as well as the overall impact the weights have on the effective sample size. Distributions can be compared using summary statistics and plots, with calculations incorporating the fitted survey weights. The following sections describe various methods for that purpose.
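Before turning to specific diagnostics, the following small sketch (plain numpy, not part of balance; the function name is illustrative) shows the most basic check implied by the covariate-balancing condition of Eq. (6) with \(f(X)=X\): comparing weighted sample covariate means against the target means.
```
import numpy as np

def covariate_mean_balance(X_sample, X_target, weights):
    # X_sample: (n, d) sample covariates; X_target: (N, d) target covariates;
    # weights: (n,) fitted survey weights.
    target_mean = X_target.mean(axis=0)
    unweighted_diff = np.abs(X_sample.mean(axis=0) - target_mean)
    w = weights / weights.sum()
    weighted_diff = np.abs((w[:, None] * X_sample).sum(axis=0) - target_mean)
    # Smaller weighted differences indicate better covariate balance after weighting.
    return unweighted_diff, weighted_diff
```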
#### 3.4.2 Visualizing Distributions Distribution plots are effective tools for visualizing the covariates and outcomes in the data, offering insights that extend beyond basic summary statistics. For numerical variables there are Kernel Density Estimator (KDE) plots (see an example in Fig 2), histograms, and quantile-quantile (QQ) plots. For categorical variables it is common to use bar plots (see an example in Fig 2). These distribution plots enable users to observe the differences between the observed sample and the target population, as well as the influence of the applied weights. The advantage of visualizations lies in their ability to reveal unexpected patterns in the complete range of data, as opposed to looking at summary statistics only. However, scaling these visualizations can be challenging. For instance, while examining KDE plots for each covariate comparing the sample and target population is informative, it is often more efficient for the researcher to have summary statistics that can quickly convey the extent of bias in different features. This is particularly useful when evaluating multiple weighting solutions. The following sections discuss particular summary statistics that help in addressing this need. Figure 2: Examples (from simulated data) of diagnostic plots for covariates #### 3.4.3 Diagnostics for the covariates using ASMD A fundamental statistic for comparing distributions is the first moment, i.e. the mean, of each covariate for the target and the sample (weighted or unweighted). For each covariate, it is insightful to observe how much closer the application of weights brings us to the target mean. The **Absolute Standardized Mean Deviation (ASMD)** can be used to summarize this effect. The Absolute Standardized Mean Deviation (ASMD) is a statistical measure employed to compare the means of two groups (in our case, the sample and the target). It is computed as follows: \[ASMD=\frac{\left|\bar{X}_{Sample}-\bar{X}_{Target}\right|}{SD} \tag{7}\] where \(\bar{X}_{Sample}\) and \(\bar{X}_{Target}\) are the means of the sample and target. The \(SD\) can be either the pooled standard deviation of the sample and target, or the standard deviation of the target population. In balance we use the standard deviation of the target population. The concept of ASMD is derived from the standardized mean difference, which is a measure of effect size used to compare the means of two groups, expressed in standard deviation units. This is often referred to as Cohen's \(d\) [48], a standardized measure of the magnitude of the difference between two means. ASMD values range from 0 to infinity, with larger values indicating greater differences between the means of the two groups.5 Footnote 5: A value of 0 signifies no difference between the means, while a value of 1, for example, indicates that the difference between the means is equal to one standard deviation. ASMD is most easily conceptualized when the distributions being compared are unimodal and symmetric. The ASMD can be calculated using the unweighted and weighted mean of the sample, and these two quantities can be compared. If applying the weights leads to an ASMD value that is closer to 0 than the ASMD of the unadjusted sample, then it is an indication that the weights help to reduce the bias. The level of adjustment can be measured by taking the difference of these two ASMD values.
\[ASMD_{diff}=ASMD_{unadjusted}-ASMD_{weighted} \tag{8}\] The smaller the adjusted ASMD (the ASMD that is based on the weighted mean of the sample) is compared to the unadjusted ASMD (based on the unweighted mean) - i.e., the larger \(ASMD_{diff}\) is - the stronger the indication we have of the potential benefit of the weights for adjusting a bias in the covariates. The magnitude of the difference of the two ASMD values is a measure of the impact of the weights. If \(ASMD_{diff}\) is positive then it means the weights have helped reduce the bias, while a negative value indicates that the weights have potentially increased the bias. Since we often wish to adjust over many covariates, the ASMD difference from each covariate can be summarized by taking the average ASMD (or \(ASMD_{diff}\)) over all covariates. This gives a single summary statistic to measure the level of impact the weights had on reducing the bias of the sample in the covariates. For categorical variables, one possible behavior for the ASMD calculation is to use dummy variables and calculate the ASMD for each of them. The ASMD-per-dummy-variable approach could lead to over-weighting categorical variables with many categories when calculating the mean ASMD. A possible solution is to aggregate these ASMDs per covariate, i.e., calculate a single mean ASMD value for each categorical variable, so that the overall mean ASMD gives each variable the same weight in the final calculation. For some limitations of using ASMD, see Appendix B. #### 3.4.4 Diagnostics for the weights **Kish's design effect** One important aspect to consider when using survey weights is the potential increase in variance of some estimate of interest (e.g., the mean) due to the variability in the weights. This is measured by a quantity known as the _design effect_, which is generally defined as the ratio of the variance of the weighted estimator to the variance expected with a simple random sample (SRS) without replacement [49, 50]. It assesses the potential impact that weights might have on the variance of the estimate. _Kish's design effect_ [51] is a widely known and commonly used design effect measure for the potential impact that weights might have on the variance of estimating the population mean using the weighted mean. Kish's design effect assumes that there is no correlation between the weights and the outcome variable (also known as "haphazard weights"). Its formula is: \[D_{eff}=\frac{n\sum_{i=1}^{n}w_{i}^{2}}{(\sum_{i=1}^{n}w_{i})^{2}}=\frac{\frac{1}{n}\sum_{i=1}^{n}w_{i}^{2}}{\left(\frac{1}{n}\sum_{i=1}^{n}w_{i}\right)^{2}}=\frac{\overline{w^{2}}}{\overline{w}^{2}} \tag{9}\] The _effective sample size proportion (ESSP)_ indicates the effective proportion of the sample size that is kept after applying the weights. It is simply the inverse of \(D_{eff}\): \[ESSP=\frac{1}{D_{eff}} \tag{10}\] The _effective sample size_ is a related measure that takes into account both the design effect and the actual sample size. It can be used to approximate the number of independent observations that would yield the same variance as the weighted sample. The effective sample size is calculated as follows (where \(n\) is the sample size): \[n_{eff}=ESS=\frac{n}{D_{eff}} \tag{11}\] The effective sample size provides a useful way to gauge the impact of the weights on the precision of the estimates.
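As a numerical illustration of Eqs. (7) and (9)-(11), here is a minimal numpy sketch (not the balance implementation; function names are illustrative) of the ASMD of a single covariate and of Kish's design effect and the effective sample size:
```
import numpy as np

def asmd(x_sample, x_target, weights=None):
    # Eq. (7): |mean(sample) - mean(target)| / sd(target),
    # optionally using a weighted sample mean.
    sample_mean = np.average(x_sample, weights=weights)
    return abs(sample_mean - np.mean(x_target)) / np.std(x_target)

def kish_design_effect(weights):
    # Eq. (9): D_eff = mean(w^2) / mean(w)^2.
    w = np.asarray(weights, dtype=float)
    return np.mean(w**2) / np.mean(w) ** 2

def effective_sample_size(weights):
    # Eq. (11): n_eff = n / D_eff.
    return len(weights) / kish_design_effect(weights)

# Equal weights give D_eff = 1; more variable weights shrink the effective sample size.
w = np.array([0.5, 1.0, 1.5, 1.0])
print(kish_design_effect(w), effective_sample_size(w))
```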
A smaller effective sample size indicates that the weights have introduced greater variability in the estimates, potentially requiring a larger actual sample size to achieve the desired precision. Further details on assumptions and proofs are available in appendix C. **Summary Statistics for the Distribution of Weights** While Kish's design effect can be used to estimate an effective sample size as a summary measure for the impact of using weights, it may also be beneficial to examine the distribution of weights using other summary statistics. For instance, extremely large or small weights could indicate potential issues with the weighting process or the presence of outliers in the data used for estimating the weights. Furthermore, the distribution of weights can help determine whether the weights are concentrated on a small number of observations or more evenly distributed across the sample. These observations are often easier to infer from summary statistics than from distribution plots of the weights. Understanding the distribution of the weights can also help to better understand Kish's design effect (and effective sample size) value, which may indicate whether follow-up manipulation of the weights is necessary (such as using an alternative weighting model or weight trimming). For diagnostic purposes, it is often more convenient to examine the weights after they have been normalized so that their sum equals the sample size. i.e.: by dividing each weight in the sample by the average of the weights (\(w_{i}^{*}=w_{i}/\bar{w}\)). When weights are normalized to sum to the sample size, they have the appealing property of being more or less informative as they deviate from 1. A weight smaller than 1 for an observation indicates that the weighting procedure considers this observation less informative than the average observation. Conversely, a weight larger than 1 suggests that this observation is more informative, on average, than other observations. For instance, if we have weights based on gender and find that males have weights smaller than 1 while females have weights larger than 1, we can infer that our sample has an over-representation of males and an under-representation of females - an imbalance that the weights attempt to rectify. It is helpful to look at the distribution of the weights. Looking at the KDE plot can help detect multimodal distribution (which might indicate clusters of users of higher/lower representativeness of the population). It is also helpful to look at basic summary statistics, such as the main quartiles (25%, 50%, and 75%) as well as the proportion of weights above and below certain values (e.g., over 2 and under 0.5, along with other similar quantities). This can help identify which proportions of the responses might be over/under weighted. Such insights could lead to followup changes to the final weighting model. For example, if we find out a handful of users have weights that are extremely large we might decide to look at the skewed features. We might find a need to bucket some classes in a covariate together, remove some features from the weighting model, use weight trimming, or some other post-processing manipulation to the weights. #### 3.4.5 Diagnostics for the outcome The entire procedure of fitting weights and diagnostics is geared towards an impactful change in the outcome (or outcomes) of interest towards reducing the estimation bias. A common population parameter of interest is the mean. 
The statistics used to review it are the sample weighted mean, the variance of the weighted mean, as well as asymptotic confidence intervals. The formula for the weighted mean, using a Horvitz-Thompson estimator [39], is simply:

\[\bar{y}_{w}=\frac{\sum_{i=1}^{n}w_{i}y_{i}}{\sum_{i=1}^{n}w_{i}} \tag{12}\]

The variance of the weighted mean is based on the \(\pi\)-estimator for the ratio-mean [52]:

\[\widehat{V(\bar{y}_{w})}=\frac{1}{(\sum_{i=1}^{n}w_{i})^{2}}\sum_{i=1}^{n}w_{i}^{2}(y_{i}-\bar{y}_{w})^{2} \tag{13}\]

This estimator works for cases when the probabilities of selection for each \(y_{i}\) are not identical, treating the \(y_{i}\) values themselves as fixed.6 See section D for more details.

Footnote 6: The formula presented for the variance of the weighted mean assumes that the weights are known and fixed quantities. Hence, this formula does not account for the uncertainty that is introduced from the estimation of the weights.

The confidence intervals (CI) available use the above formula and are the standard approximate CI based on the central limit theorem:

\[CI(\mu):\bar{y}_{w}\pm z_{\alpha/2}\sqrt{\widehat{V(\bar{y}_{w})}} \tag{14}\]

In an applied setting, it is advisable to calculate the weighted mean and its CI after applying the weights, and also without the weights, and compare the quantities to each other. The difference of the weighted and unweighted means can be thought of as an estimator of the potential bias reduced by using the weights (assuming the general trend of the ASMD calculations on the covariates indicates a positive improvement in their imbalance). This estimated bias can be compared to the effective sample size to allow a rough decision about whether the increase in variance due to the weights is adequately compensated by the reduction in bias.

## 4 The balance workflow

### The workflow

Survey data weighting using balance is achieved with the following three main steps:

1. **Understanding the initial bias in the data relative to a target population**: First, the survey data is loaded for both respondents and the target population. A pandas DataFrame can be created using pandas.read_csv() and converted into a balance Sample class object with Sample.from_frame. A similar step is repeated for the target population's data, and then the two Sample objects can be combined by assigning the target object as the target of the sample object. Once the data is loaded, we can conduct a diagnostic evaluation of the sample-vs-target covariates' distributions to determine if weighting is necessary. These include ASMD and distribution plots such as bar-charts and kernel-density-estimation plots.
2. **Adjusting the sample to the target**: Next, we generate weights for the sample to more accurately represent the target population's distributions. Currently, the package implements the following methods: Inverse Probability Weighting (IPW) using LASSO regression, Covariate Balancing Propensity Score (CBPS), Post-stratification, and raking. These are all available through the adjust method in the Sample class.
3. **Results evaluation**: Once the weights are estimated, their effect is evaluated on the covariate imbalance (again, using ASMD and plots), the effective sample size, and the change in the weighted mean of the outcome as well as the associated confidence intervals.

The next section gives a detailed example for applying this workflow.
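Before walking through the detailed example, the three steps can be summarized as a compact call sequence. This is only a sketch: the file and column names below are hypothetical placeholders, and the calls mirror the worked example of the next section, where each is demonstrated on real data.

```
from balance import Sample
import pandas as pd

# Step 1: load the data and inspect the initial imbalance.
sample_df = pd.read_csv("survey_respondents.csv")   # hypothetical file name
target_df = pd.read_csv("target_population.csv")    # hypothetical file name
sample = Sample.from_frame(sample_df, outcome_columns=["outcome"])
target = Sample.from_frame(target_df)
sample_with_target = sample.set_target(target)
sample_with_target.covars().plot()                  # distribution plots
print(sample_with_target.covars().asmd())           # ASMD diagnostics

# Step 2: adjust the sample to the target (ipw is the default method).
adjusted = sample_with_target.adjust()

# Step 3: evaluate the results on covariates, weights, and outcomes.
print(adjusted.summary())
print(adjusted.weights().summary())
print(adjusted.outcomes().summary())
```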
### An end-to-end example

#### 4.2.1 Understanding the initial bias

##### Loading simulated data

This section presents an example of simulated data extracted from the balance tutorial page [53]. The data set is comprised of two pandas DataFrames: one for the target population and the other for the sample population. Both DataFrames contain an identifier column (id), three covariate columns (gender, age_group, and income), and an outcome variable (happiness).7

Footnote 7: Code for creating the distributions is available here: [https://github.com/facebookresearch/balance/blob/main/balance/datasets/__init__py#L17](https://github.com/facebookresearch/balance/blob/main/balance/datasets/__init__py#L17).

In this particular simulation, we intentionally designed the outcome to be associated with all covariates, ensuring that this relationship remains consistent for both the target and sample populations. It is important to note that in real-world data sets, we generally don't observe the outcome for the target population. However, in this simulated data set we have included it for illustrative purposes. This setup allows us to later demonstrate how weighting methods can mitigate bias and approximate population-level parameters more accurately.

In real-world use-cases the data is often loaded using pandas.read_csv(). Here, we use pre-made DataFrames that can be loaded (and inspected) using the following Python code:

```
from balance import load_data
# INFO (2023-05-14 09:00:15,410) [__init__/<module> (line 52)]: Using balance version 0.9.0

target_df, sample_df = load_data()

print("sample_df:\n", sample_df.head())
```

```
sample_df:
    id  gender age_group     income  happiness
0    0    Male     25-34   6.428659  26.043029
1    1  Female     18-24   9.940280  66.885485
2    2    Male     18-24   2.673623  37.091922
3    3     NaN     18-24  10.550308  49.394050
4    4     NaN     18-24   2.689994  72.304208
# The target_df DataFrame looks similar to sample_df.
```

#### Creating instances of the Sample class with the DataFrames

The main class for our analyses is the Sample class from the balance package. The following illustrates how we incorporate the DataFrames into this class:

```
from balance import Sample

sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])

target = Sample.from_frame(target_df, outcome_columns=["happiness"])
# Usually the code will simply be:
# target = Sample.from_frame(target_df)
# This is since most times we do not have the outcome for the target. In the
# example in this paper we have added it just to validate later that the
# weights indeed help us reduce the bias of the outcome.

# Following this, we associate the Sample object instance of sample with that
# of the target object, enabling us to adjust the sample to match the target.
sample_with_target = sample.set_target(target)
```

The Sample class provides a wide range of attributes, methods, and properties.
For instance, the df property can reveal the DataFrame encapsulated within the instance of the Sample class (e.g.: sample_with_target.df). Invoking the Sample object directly provides a concise summary of its attributes:

```
sample_with_target
```

```
(balance.sample_class.Sample)
balance Sample object with target set
1000 observations x 3 variables: gender,age_group,income
id_column: id, weight_column: weight, outcome_columns: happiness

target:
    balance Sample object
    10000 observations x 3 variables: gender,age_group,income
    id_column: id, weight_column: weight, outcome_columns: happiness

3 common variables: gender,age_group,income
```

### Exploring the imbalances in covariates

We can use methods such as .covars() with .plot(), .mean(), and .asmd() to get some diagnostics about the imbalance. We can use the .plot() method to look at the distributions of covariates in the sample versus the target data.

```
sample_with_target.covars().plot()
```

The output in Figure 3 helps to easily identify imbalance. For example, we can see the sample has many more males than females, as opposed to a 50%-50% split in the target population. And for age_group we can see how the sample is skewed towards younger respondents, as compared to the target population. The package leverages plotly [54] (as the default) to create interactive visualizations, but it also supports static figures using the seaborn package [55] for added flexibility.

The default asmd method uses ASMD to compare the sample (which is unweighted) with the target using dummy variables for categorical variables, and calculates the ASMD for each of them. The aggregate ASMD per covariate can be obtained using the aggregate_by_main_covar = True argument, as described in section 3.4.3.

```
print(sample_with_target.covars().asmd(aggregate_by_main_covar=True).T.round(2))
```

```
source      self
age_group   0.23
gender      0.25
income      0.49
mean(asmd)  0.33
```

The ASMD helps quantify the levels of imbalance in each covariate.

Figure 3: Examples (from simulated data) of diagnostic plots for covariates (unweighted sample vs target)

#### 4.2.2 Fitting survey weights

In order to estimate weights for the sample, the .adjust() method is used on the Sample object. The default is ipw, and other methods can be invoked using the method argument.

```
# Using ipw to fit survey weights
adjusted = sample_with_target.adjust()
```

#### 4.2.3 Evaluating the Results

**Covariates**

We can get a basic summary of the results using the .summary() method:

```
print(adjusted.summary())
```

```
Covar ASMD reduction: 59.7%, design effect: 1.897
Covar ASMD (7 variables): 0.327 -> 0.132
Model performance: Model proportion deviance explained: 0.172
```

It shows that the weights led to an improvement of around 60% reduction in the mean ASMD (from 0.327 to 0.132), and that the price we paid for it is an increase in the variance of the estimator by a factor of 1.897 in comparison to a random sample (as calculated using Kish's design effect, assuming haphazard weights). The same tools used to evaluate the bias before adjustment can be used for evaluating the effect of the weights on the balance after adjustment.

```
adjusted.covars().plot()
```

The output in Figure 4 shows how the weights help mitigate some (though not all) of the bias, for all three covariates (gender, age and income).
We can also see the improvement per covariate (averaged across categories) using the .asmd() method:

```
print(adjusted.covars().asmd(aggregate_by_main_covar=True).T.round(2))
```

```
source      self  unadjusted  unadjusted - self
age_group   0.06        0.23               0.18
gender      0.10        0.25               0.16
income      0.24        0.49               0.25
mean(asmd)  0.13        0.33               0.20
```

Figure 4: Examples (from simulated data) of diagnostic plots for covariates (unweighted and weighted sample vs target)

We can see that while we got improvements in all covariates, there is still some imbalance that remained, especially in the income variable.

**Weights**

Next, we wish to look at the diagnostics of the weights to identify whether there are any extreme weights or signs of issues that require further investigation. This can be done by using the summary method on the .weights() method of the adjusted object.

```
print(adjusted.weights().summary().round(2))
```

```
                             var     val
0                  design_effect    1.90
1    effective_sample_proportion    0.53
2          effective_sample_size  527.04
...
7                   describe_min    0.31
11                  describe_max   11.65
16                     prop(w<1)    0.65
21                   prop(w>=10)    0.00
```

We can see a design effect of 1.9, which corresponds to an effective sample size proportion of 53%. Since the size of the sample was 1000, this means that the effective sample size is 527. We can also see that 65% of the weights are below 1, meaning that we down-sized 65% of our sample. The minimal weight is 0.31 and the maximal weight is 11.65, with almost no weights above 10. A conclusion here is that the weights are not too extreme, and we get some sense of the cost that using the weights would incur on the precision of our estimates.

**Outcome**

The summary method on the outcomes method gives us the weighted means and confidence intervals for the adjusted sample, the target, and the unadjusted sample data. From these results we can see that the real population-level mean of happiness in the simulation was 56.2. In our (unweighted/unadjusted) sample it was 48.5, meaning the bias was roughly 7.7 points. After applying the weights, we got a value of 53.3, reducing the bias to roughly only 2.9 points. Note that this comparison is only possible in a simulated environment and is given here as a proof of concept of the effect of the weights. We can also see that the CIs of the adjusted (self) and unadjusted estimates cover very different ranges, indicating that the weights led to a significant change in the estimated mean. While the model improved the bias, we know it did not fix it completely. This is because the model also did not perfectly fix the covariate imbalance, since it used some regularization in the process.

The output of .plot() is in Figure 5. It shows that we got a relatively symmetrical uni-modal distribution (before and after applying the weights), so we do not observe any strong irregular behavior of the outcome. Note that we are able to compare the outcome with and without the weights in the sample to the real outcome distribution in the target population only because this is simulated data. In real-world cases, we are not expected to have access to the outcome distribution of the target population. Also, it is relatively common to get outcome responses on binary or Likert scales rather than as a continuous variable; the .plot() method would work with these just as well.
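The weighted means and confidence intervals reported by the package correspond to Eqs. (12)-(14) from Section 3.4.5, and can be reproduced outside of balance with a few lines of NumPy/SciPy. This is a standalone sketch that assumes the outcome values and fitted weights are available as arrays; it is not the package's internal code.

```
import numpy as np
from scipy import stats

def weighted_mean_ci(y, w, alpha=0.05):
    # Weighted mean (Eq. 12), its variance estimate (Eq. 13), and the
    # normal-approximation confidence interval (Eq. 14).
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    ybar_w = np.sum(w * y) / np.sum(w)
    var_w = np.sum(w ** 2 * (y - ybar_w) ** 2) / np.sum(w) ** 2
    half_width = stats.norm.ppf(1 - alpha / 2) * np.sqrt(var_w)
    return ybar_w, (ybar_w - half_width, ybar_w + half_width)

# Hypothetical usage, e.g. with the happiness outcome and the fitted weights:
# mean_w, ci = weighted_mean_ci(outcome_values, fitted_weights)
```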
**Downloading data**

Figure 5: Examples (from simulated data) of diagnostic plots for outcome (unweighted and weighted sample vs target)

Once we are settled with the weights we got, we can download them as a csv file, as follows:

```
adjusted.to_download()  # Will create a download link in jupyter

# We can also prepare the data to be exported as csv.
# The following code shows the first 500 characters for simplicity:
adjusted.to_csv()
```

### How does balance implement the adjustment?

**Pre-processing**

Before applying any of the adjustment methods, balance performs a pre-processing step to improve the models' results. The pre-processing step includes a few best practices that make the use of balance easy and automatic for default usage.

**Transformations.** balance applies the following default behaviours:

1. Handling missing values: balance handles missing values automatically by adding a special indicator column to any variable that contains missing values. The advantage of this is that these are then considered as a separate category for the adjustment.
2. Feature engineering: by default, balance applies feature engineering in order to fit the covariate distribution better, and not only the first moment. Specifically, each continuous variable is bucketed into 10 quantile buckets, and rare categories are grouped together to avoid overfitting9.

Footnote 9: The user also has the option to change these default behaviours, by setting different values to the transformations argument of Sample.adjust.

**Model matrix.** The model matrix of the covariates used in balance for the logistic regression in ipw and for the CBPS propensity score is constructed before the fitting is done, using the transformed variables and one-hot encoding for discrete variables. The default behaviour is an additive model including all joint covariates of the target and the observed sample. However, through the formula argument, one can input a formula for a specified relation between the variables. The formulas adopt the notation from the patsy Python package [56], facilitating a range of operations like addition, multiplication (for interaction effects), and power transformations. A detailed example is available in the "balance: transformations and formulas" tutorial [57].

**Adjustment through ipw**

ipw is implemented using LASSO-regularized logistic regression. To avoid non-balanced categories in the logistic regression, balance scales the prevalence of the target population to be similar to the observed sample. The penalty factor \(\lambda\) of the LASSO is chosen through cross-validation. Two methods for choosing the parameter are suggested:

1. Unbounded Design Effect: If one does not want to bound the design effect of the resulting weights (the default behaviour with max_de=None), the penalty is chosen using lambda_1se, which is the largest value of \(\lambda\) such that the cross-validated error is within one standard error of the minimum value.
2. Bounded Design Effect: If one chooses to bound the design effect (e.g., by using max_de=2), a grid search is performed over 10 values of \(\lambda\) that bring the design effect within the bound, and the \(\lambda\) that yields the largest ASMD reduction is chosen.

In addition, a penalty_factor argument can also be used to indicate how much the model should focus on adjusting each term of the formula.
Larger penalty factors mean that the covariate is more likely to be regularized by the LASSO penalty, and as a result the adjustment of this covariate will be smaller, i.e., it will end up less balanced. This feature can be particularly useful when certain components are believed to be more or less responsible for bias in the data, or when the user wants to explore different adjustment scenarios.

**Post-processing**

Weights in balance are scaled to the population size after being estimated, and can be interpreted as the number of people from the target that each sample unit represents. After the adjustment and scaling are done, weight trimming from above is performed. This is done in order to avoid overfitting of the model and unnecessary variance inflation. The weights are trimmed and rescaled in a way that keeps the mean and sum of the weights the same as before trimming, such that the interpretation of how many units in the target each sample unit represents holds after trimming.

## 5 Future directions

The balance package offers benefits for researchers interested in analyzing data with non-response bias in the Python environment by being easy to use, providing an end-to-end workflow, and being released as open-source. While comprehensive, there is still room for improvement and expansion. This section highlights several possible areas for future development in the balance package.

1. **Better Diagnostic Tools for Covariates:** The current metric of ASMD has limitations, especially when applied to a wide range of distributions and for categorical variables. Future versions could include more robust measures like the Common Language Effect Size [58] and better methods for handling categorical variables, such as Kullback-Leibler divergence. There is also room for adding statistical hypothesis tests for the evaluations, as well as more plots. The cobalt R package [18] is a good source of inspiration.
2. **Expanded Estimation and Diagnostic Tools for Outcomes:** Currently, the package primarily provides the weighted mean and its confidence intervals. A helpful improvement would be to directly measure the estimated bias reduction caused by the weights, including a confidence interval for this estimate. Also, the current implementation focuses on the weighted average and the linearization (Taylor) estimator for the variance. Other possible statistics and estimators of the variance exist. The samples package already implements some of these and would be a good source of inspiration [21].
3. **Diagnostics for the bias-variance trade-offs when using weights:** At present, the user is provided with a set of weights but with no easy way to check the bias-variance trade-offs for alternative levels of trimming or tuning of other parameters. A future version of the package could include more diagnostic tools and allow automated functions for weight trimming, such as ones based on empirical-MSE estimation for a given outcome over a range of potential weight trimming values. This could lead to a better balance between the variance induced by the weights and the bias they reduce, and save researchers' time in manual tweaking.
4. **Built-in Model Comparison for Multiple Weights:** Our ultimate goal is to allow the most flexibility to the user by enabling easy comparisons of multiple models and adjustments to the weights in order to choose the model that best fits their data.
5. **Feature Selection for Propensity Score Models:** When given several potential models, it can be challenging to choose the best one.
This choice could depend on various factors, such as the balance between reduced bias and incurred variance, or the impact of different models on different outcomes. Further development in this area could provide useful tools for sensitivity analysis and decision making.

6. **Expansion Beyond Propensity Score Models:** The next step for the package could be to include outcome models and doubly robust models, thus making the package more versatile and comprehensive.

These possible improvements represent exciting opportunities for the future of the balance package, aiming to provide a more robust and user-friendly tool for researchers in the Python environment. We welcome any feedback, suggestions, and opportunities for collaboration.
2305.08670
A Nonlinear Projection-Based Iteration Scheme with Cycles over Multiple Time Steps for Solving Thermal Radiative Transfer Problems
In this paper we present a multilevel projection-based iterative scheme for solving thermal radiative transfer problems that performs iteration cycles on the high-order Boltzmann transport equation (BTE) and low-order moment equations. Fully implicit temporal discretization based on the backward Euler time-integration method is used for all equations. The multilevel iterative scheme is designed to perform iteration cycles over collections of multiple time steps, each of which can be interpreted as a coarse time interval with a subgrid of time steps. This treatment is demonstrated to transform implicit temporal integrators to diagonally-implicit multi-step schemes on the coarse time grid formed with the amalgamated time intervals. A multilevel set of moment equations are formulated by the nonlinear projective approach. The Eddington tensor defined with the BTE solution provides exact closure for the moment equations. During each iteration, a number of chronological time steps are solved with the BTE alone, after which the same collection of time steps is solved with the moment equations and material energy balance. Numerical results are presented to demonstrate the effectiveness of this iterative scheme for simulating evolving radiation and heat waves in 2D geometry.
Joseph M. Coale, Dmitriy Y. Anistratov
2023-05-15T14:25:57Z
http://arxiv.org/abs/2305.08670v1
A Nonlinear Projection-Based Iteration Scheme with Cycles over Multiple Time Steps for Solving Thermal Radiative Transfer Problems ###### Abstract In this paper we present a multilevel projection-based iterative scheme for solving thermal radiative transfer problems that performs iteration cycles on the high-order Boltzmann transport equation (BTE) and low-order moment equations. Fully implicit temporal discretization based on the backward Euler time-integration method is used for all equations. The multilevel iterative scheme is designed to perform iteration cycles over collections of multiple time steps, each of which can be interpreted as a coarse time interval with a subgrid of time steps. This treatment is demonstrated to transform implicit temporal integrators to diagonally-implicit multi-step schemes on the coarse time grid formed with the amalgamated time intervals. A multilevel set of moment equations are formulated by the nonlinear projective approach. The Eddington tensor defined with the BTE solution provides exact closure for the moment equations. During each iteration, a number of chronological time steps are solved with the BTE alone, after which the same collection of time steps is solved with the moment equations and material energy balance. Numerical results are presented to demonstrate the effectiveness of this iterative scheme for simulating evolving radiation and heat waves in 2D geometry. keywords: Boltzmann transport equation, high-energy density physics, thermal radiative transfer, iterative methods, multilevel methods, quasidiffusion method, variable Eddington factor + Footnote †: journal: Journal of Computational Physics ## 1 Introduction We consider the basic thermal radiative transfer (TRT) problem that neglects material motion, heat conduction and the scattering of photons which is formulated with the multigroup Boltzmann transport equation (BTE) describing photons \[\frac{1}{c}\frac{\partial I_{g}}{\partial t}+\mathbf{\Omega}\cdot\mathbf{ \nabla}I_{g}+\varkappa_{g}(T)I_{g}=\varkappa_{g}(T)B_{g}(T), \tag{1a}\] \[I_{g}|_{\mathbf{r}\in\partial\Gamma}=I_{g}^{\text{in}}\ \ \text{for}\ \ \mathbf{\Omega}\cdot\mathbf{n}_{\Gamma}<0,\quad I_{g}|_{t=0}=I_{g}^{0},\] (1b) \[\mathbf{r}\in\Gamma,\quad t\in[0,t^{\text{end}}],\quad\mathbf{\Omega}\in\mathcal{S}, \quad g=1,\ldots,G\] coupled to the material energy balance (MEB) equation that models radiation-matter energy exchange \[\frac{\partial\varepsilon(T)}{\partial t}=\sum_{g=1}^{G}\bigg{(}\int_{4\pi}I_{g}d \Omega-B_{g}(T)\bigg{)}\varkappa_{g}(T),\quad T|_{t=0}=T^{0}, \tag{2}\] where \(\mathbf{r}\) is spatial position, \(\mathbf{\Omega}\) is the direction of particle motion, \(g\) is the photon frequency group index, \(c\) is the speed of light, \(\Gamma\) is the spatial domain, \(\partial\Gamma\) is the domain boundary, \(\mathbf{n}_{\Gamma}\) is the unit normal to \(\partial\Gamma\) and \(\mathcal{S}\) is the unit sphere. \(I_{g}(\mathbf{r},\mathbf{\Omega},t)\) is the group specific intensity of radiation, \(T(\mathbf{r},t)\) is the material temperature, \(\varkappa_{g}(\mathbf{r},t;T)\) is the group photon opacity, \(\varepsilon(\mathbf{r},t;T)\) is the material energy density and \(B_{g}(\mathbf{r},t;T)\) is the group Planck black-body distribution function. The iterative algorithm is based on the multilevel quasidiffusion (MLQD) methodology [1; 2; 3; 4; 5; 6], which is formulated via the nonlinear projective approach. 
Two systems of moment equations of the BTE are constructed by projecting the BTE onto a series of low-order subspaces. The first system of moment equations is the multigroup low-order quasidiffusion (LOQD) equations (aka Variable Eddington Factor equations), which are the first two angular moments of the BTE \[\frac{\partial E_{g}}{\partial t}+\mathbf{\nabla}\cdot\mathbf{F}_{g}+c \varkappa_{g}(T)E_{g}=4\pi\varkappa_{g}(T)B_{g}(T), \tag{3a}\] \[\frac{1}{c}\frac{\partial\mathbf{F}_{g}}{\partial t}+c\mathbf{\nabla} \cdot(\mathfrak{f}_{g}E_{g})+\varkappa_{g}(T)\mathbf{F}_{g}=0, \tag{3b}\] whose unknowns include the group radiation energy density \(E_{g}=\frac{1}{c}\int_{4\pi}I_{g}d\Omega\) and flux \(\mathbf{F}_{g}=\int_{4\pi}\mathbf{\Omega}I_{g}d\Omega\). The second system of moment equations is the effective grey LOQD equations obtained by summing Eqs. (3) over all frequency groups \[\frac{\partial E}{\partial t}+\mathbf{\nabla}\cdot\mathbf{F}+c\langle \varkappa\rangle_{E}E=c\langle\varkappa\rangle_{B}a_{R}T^{4}, \tag{4a}\] \[\frac{1}{c}\frac{\partial\mathbf{F}}{\partial t}+c\mathbf{\nabla}\cdot( \langle\mathfrak{f}\rangle_{E}E)+\langle\varkappa\rangle_{F}\mathbf{F}+\bar{\mathbf{ \eta}}E=0, \tag{4b}\] which solve for the total radiation energy density \(E=\sum_{g=1}^{G}E_{g}\) and flux \(\mathbf{F}=\sum_{g=1}^{G}\mathbf{F}_{g}\). The MEB equation (2) is coupled with the effective grey LOQD equations (4) on the grey scale, reducing the dimensionality of the TRT problem. This is done by reformulating the MEB in effective grey form \[\frac{\partial\varepsilon(T)}{\partial t}=c\langle\varkappa\rangle_{E}E-c \langle\varkappa\rangle_{B}a_{R}T^{4}. \tag{5}\] The Eddington tensor \(\mathfrak{f}_{g}\) computed by the solution of the BTE (1) and spectrum averaged coefficients calculated by the solution of multigroup LOQD equations (3) which define exact closures for the LOQD system are \[\mathfrak{f}_{g}=\frac{\int_{4\pi}\mathbf{\Omega}\otimes\mathbf{\Omega}I_{g}\ d\Omega}{ \int_{4\pi}I_{g}d\Omega}, \tag{6}\] \[\langle u\rangle_{E}=\frac{\sum_{g=1}^{G}u_{g}E_{g}}{\sum_{g=1}^{G}E_{g}}, \quad\langle u\rangle_{F}=\text{diag}(\langle u_{g}\rangle_{F_{x}},\langle u _{g}\rangle_{F_{y}},\langle u_{g}\rangle_{F_{z}}), \tag{7}\] \[\langle u\rangle_{F_{\alpha}}=\frac{\sum_{g=1}^{G}u_{g}|F_{\alpha,g}|}{\sum_{ g=1}^{G}|F_{\alpha,g}|},\quad\bar{\mathbf{\eta}}=\frac{\sum_{g=1}^{G}(\varkappa_{g}- \bar{\mathbf{K}}_{R})\mathbf{F}_{g}}{\sum_{g=1}^{G}E_{g}}\,. \tag{8}\] The MLQD method for TRT problems is defined by the system of equations consisting of the following parts: * the high-order multigroup BTE (Eq. (1)), * the multigroup LOQD equations (Eq. (3)), * the effective grey LOQD equations (Eq. (4)), * the MEB equation in effective grey form (Eq. (5)). We discretized in time the high-order BTE, the hierarchy of the moment equations, and MEB with a fully implicit temporal scheme over the given temporal mesh using the backward Euler (BE) time-integration method. Thus, we apply a one-stage time-step method to the multilevel system of equations. In a standard approach, this system of equations is solved on each \(n^{th}\) time step, denoted as \(\theta^{n}=(t^{n-1},t^{n}]\), iteratively to compute the solution at \(t^{n}\) and then move to the next layer on time at \(t^{n+1}\)[3; 4; 5; 6; 7; 8]. In this paper, we present a new iteration method defined for agglomerated sets of time steps from the original target grid in time. 
These sets of time steps form coarse time intervals with a temporal subgrid defined by the original grid in time. The multilevel system of equations approximated with the BE scheme over time steps \(\theta^{n}\) is solved iteratively over the coarse intervals on the embedded time subgrid. This iterative method involves (i) cycles of solving the high-order BTE over the set of time steps included in a coarse time interval and then (ii) cycles of solving the hierarchy of low-order equations coupled with the MEB equation over the same temporal subgrid. This numerical method can be interpreted as an implicit multi-stage time integration method. The developed iteration method is stable. As the number of time steps included in the coarse time interval increases, the rate of convergence decreases. However, the outer iteration cycles over coarse time intervals still converge rapidly. The characteristic rate of convergence does not exceed 0.2. The analysis of this method enables us to study the effects of decoupling the high-order and low-order parts of the multiphysics system of equations over multiple time steps. The elements of this new method have potential for developing algorithms for parallel computations. The remainder of this paper is organized as follows. In Sec. 2 the multilevel iteration method for TRT problems is described. Numerical results are presented in Sec. 3. We conclude with a brief discussion in Sec. 4. ## 2 Formulation of the Iteration Method Consider that the TRT problem is discretized on a temporal mesh defined by \(N\) time steps \(\{\theta^{n}\}_{n=1}^{N}\) over the interval \(t\in[0,t^{\text{end}}]\) such that \[\Big{\{}\theta^{n}=(t^{n-1},t^{n}]\ |\ n=1,\ldots,N,\ t^{0}=0,\ t^{N}=t^{\text{end}}\Big{\}}. \tag{9}\] The standard strategy is to iterate over the entire hierarchy of equations (1) & (3) - (8) at each time step \(\theta^{n}\) separately. The iterative scheme presented here instead performs iterations over collections of individual time steps together. Let us define a subset of the discrete instants of time \(\{\mathfrak{T}_{b}\}_{b=0}^{B}\subset\{t^{n}\}_{n=0}^{N}\): \[\Big{\{}\mathfrak{T}_{b}=t^{N_{b}}\ |\ b=0,\ldots,B,\quad 0=N_{0}<N_{1}<\cdots<N_{B}=N\Big{\}} \tag{10}\] and split the temporal domain into \(B\) _time blocks_ \(\{\Theta_{b}\}_{b=1}^{B}\) defined by \[\Big{\{}\Theta_{b}=(\mathfrak{T}_{b-1},\mathfrak{T}_{b}]\ |\ b=1,\ldots,B\Big{\}}. \tag{11}\] Each time block is an amalgamated sequence of consecutive time steps \(\Theta_{b}=\bigcup_{n=N_{b-1}+1}^{N_{b}}\theta^{n}\). The number of time steps embedded in \(\Theta_{b}\) is denoted \(\mathfrak{N}_{b}=N_{b}-N_{b-1}\). The iterative algorithm presented here performs cycles over time blocks \(\Theta_{b}\), effectively treating each time block as a coarse time step. The coupled solution of the BTE and LOQD equations is thus iteratively converged on time blocks instead of on each individual time step. On each outer iteration, the high-order BTE (Eq. (1)) is solved over all time steps \(\theta^{n}\in\Theta_{b}\) using the temperatures \(\{T(t^{n})\ |\ n=N_{b-1}+1,\ldots,N_{b}\}\) over this collection \(\Theta_{b}\) estimated on the previous iteration cycle. The Eddington tensor is calculated for each time step involved in this process and stored until completion of the entire time block. Afterwards the collected \(\{\mathfrak{f}_{g}(t^{n})\ |\ n=N_{b-1}+1,\ldots,N_{b}\}\) are used to close the LOQD equations for all time steps within the block \(\Theta_{b}\). The LOQD and MEB Eqs.
(3)-(5) are then solved over \(\theta^{n}\in\Theta_{b}\). This yields the updated material temperature field \(\{T(t^{n})\ |\ n=N_{b-1}+1,\ldots,N_{b}\}\). This iterative process is derived by noting that the BTE can be solved for all time steps in a given block provided the required initial condition \(I_{g}|_{t=\mathfrak{T}_{b-1}}\) and the iterative estimate of temperature field \(\{T(t^{n})\ |\ n=N_{b-1}+1,\ldots,N_{b}\}\). Similarly, the Eqs. (3)-(5) can be solved for all time steps in the same block provided initial conditions for \(E_{g},\mathbf{F}_{g},T\) at \(t=\mathfrak{T}_{b-1}\) and the iterative estimate of Eddington tensor \(\{\mathfrak{f}_{g}(t^{n})\ |\ n=N_{b-1}+1,\ldots,N_{b}\}\). The detailed iterative scheme is presented in Algorithm 1 where \(j\) is the outer-iteration index. The zeroth outer iteration involves solving the low-order system over all time steps in \(\Theta_{b}\) using the initial guess for the Eddington tensor \(\mathfrak{f}_{g}=\frac{1}{3}\mathbb{I}\), where \(\mathbb{I}\) is the identity matrix. During the iteration process, the multilevel system of low-order and MEB equations is solved by means of two nested iteration cycles [6]. Convergence criteria are based on the function \(\xi^{(j)}(E,t^{n})=\|E^{(j)}(t^{n})-E^{(j-1)}(t^{n})\|_{2}\). We also denote \(\bar{E}^{(j)}(t^{n})=\|E^{(j)}(t^{n})\|_{2}\). The specific norm for convergence is defined \(\|\xi^{(j)}(E)\|_{\infty}=\max_{n\in[N_{b-1}+1,N_{b}]}\xi^{(j)}(E,t^{n})\), and \(\|\bar{E}^{(j)}\|_{\infty}=\max_{n\in[N_{b-1}+1,N_{b}]}\bar{E}^{(j)}(t^{n})\). The same notations are used for norms of \(T\). This iterative algorithm can be interpreted as one which solves a multi-step implicit temporal discretization of the TRT problem, similar to diagonally-implicit Runge-Kutta methods. Consider the BTE discretized in time with the backward-Euler scheme on the grid of time steps \(\{\theta^{n}\}_{n=1}^{N}\) \[I_{g}^{n}=\tau^{n}H_{g}^{n}(I_{g}^{n},T^{n})+I_{g}^{n-1}, \tag{12}\] where \(\tau^{n}=c\Delta t^{n}\), \(\Delta t^{n}=t^{n}-t^{n-1}\) and \[H_{g}^{n}(I^{n},T^{n})=\varkappa_{g}(T^{n})B_{g}(T^{n})-\mathbf{\Omega}\cdot\mathbf{ \nabla}I_{g}^{n}-\varkappa_{g}(T^{n})I_{g}^{n}. \tag{13}\] Taking Eq. (12) at the last time step in a given time block (\(n=N_{b}\)), and recursively substituting Eq. (12) for \(\{I_{g}^{n-1}\}_{n=N_{b-1}+1}^{N_{b}}\) yields the following: \[I_{g}^{N_{b}}=I_{g}^{N_{b-1}}+\sum_{n=N_{b-1}+1}^{N_{b}}\tau^{n}H_{g}^{n}(I_{g }^{n},T^{n}). \tag{14}\] Let us define the coarse time-block-step \(\Delta\mathfrak{T}_{b}=\mathfrak{T}_{b}-\mathfrak{T}_{b-1}\). Eq. (14) can be rewritten in the form of a multi-step method with step length \(\Delta\mathfrak{T}_{b}\) as \[I_{g}^{N_{b}}=I_{g}^{N_{b-1}}+\Delta\mathfrak{T}_{b}\sum_{n=N_{b-1}+1}^{N_{b}} \frac{\tau^{n}}{\Delta\mathfrak{T}_{b}}H_{g}^{n}(I_{g}^{n},T^{n}). \tag{15}\] The same process can be used to derive this multi-step temporal discretization over time blocks for the LOQD and MEB equations. ## 3 Numerical Results The iterative scheme is analyzed with numerical testing on the classical Fleck-Cummings (F-C) test [9] in 2D Cartesian geometry. Standard formulation of the test is used. The test domain is a \(6\times 6\) cm homogeneous slab with spectral opacity \(\varkappa_{\nu}=\frac{27}{\nu^{3}}(1-e^{-\nu/T})\). The left boundary is subject to incoming radiation at a temperature of \(T^{\rm in}=1\) KeV at black-body spectrum. All other boundaries are vacuum. The initial temperature of the slab is \(T^{0}=1\) eV. 
The material energy density of the slab is a linear function of temperature, \(\varepsilon=c_{v}T\), where \(c_{v}=0.5917a_{R}T^{\rm in}\) and \(a_{R}\) is Stefan's constant. We consider a time interval of \(t\in[0,6\,\mathrm{ns}]\), discretized into 300 uniform time steps \(\Delta t=2\times 10^{-2}\) ns. The phase space is discretized using a \(20\times 20\) uniform orthogonal spatial grid, 17 frequency groups [7], and 144 discrete directions. The Abu-Shumays angular quadrature set q461214 is used [10]. The implicit backward-Euler time integration scheme is used to discretize all equations in time. The BTE is discretized in space with the method of long characteristics, and all low-order equations use a second-order finite-volumes scheme [11]. We consider cases with time blocks of lengths \(\Delta\mathfrak{T}_{b}=0.02,\;0.04,\;0.1,\;0.2,\;0.5,\;1,\;2,\;3,\;6\;\mathrm{ns}\) (i.e. \(\mathfrak{N}_{b}=1,2,5,10,25,50,100,150,300\), respectively). Note that when \(\Delta\mathfrak{T}_{b}=0.02\;\mathrm{ns}\), each time block is simply one time step (\(\mathfrak{N}_{b}=1\)); this is the case of the standard iteration scheme with an outer iteration cycle over each given time step. In the case \(\Delta\mathfrak{T}_{b}=6\;\mathrm{ns}\), on each outer iteration cycle the high-order BTE and low-order equations are solved over the whole time range of the problem, i.e. \(\Theta_{1}=[0,t^{\mathrm{end}}]\). The F-C test is solved using the presented iterative scheme and compared against the standard iteration scheme [6]. We use a very tight convergence criterion of \(\epsilon=10^{-14}\). This enables us to analyze the convergence behavior of the proposed iterative scheme over a large range of iterative errors, down to very small orders of magnitude, and to avoid missing iterative stagnation and noise effects. Figure 1 plots the iteration counts per block \(b\) to reach the required convergence level. For sufficiently long time blocks, markers are placed at the end of each block interval. As time blocks become larger, the iteration counts tend to increase. The iteration counts for \(\Delta\mathfrak{T}_{b}=0.02\;\mathrm{ns}\) are the same as for the standard iteration scheme, and therefore the presented scheme requires more iterations (per block) than the standard scheme for all \(\Delta\mathfrak{T}_{b}>0.02\;\mathrm{ns}\). These effects stem from the fact that the high-order and low-order problems are coupled over entire time-block cycles, instead of over each time step. Figure 1: Iterations (\(j\)) taken at each time block to reach a relative convergence level of \(\epsilon=10^{-14}\). Blocks are designated by markers for sufficiently long block times. Figure 2 plots the iterative error in the total radiation energy density \(E\) at each time step and iteration for several cases, including \(\Delta\mathfrak{T}_{b}=0.02,\;0.1,\;0.5,\;1,\;2,\;3\;\mathrm{ns}\). The standard case with \(\mathfrak{N}_{b}=1\) is included for reference. Note that errors in the solution at each iteration (\(j\)) are calculated with respect to the reference solution computed by means of the standard scheme (i.e. with \(\Delta\mathfrak{T}_{b}=0.02\,\mathrm{ns}\)), denoted as \(\hat{E}\). The developed iterative scheme's solution is shown to converge to the reference solution. Note that similar results are obtained for errors in \(T\), and for all other tested values of \(\Delta\mathfrak{T}_{b}\). Errors are seen to converge uniformly at each iteration. The errors in each block sharply increase over the first several time steps, then level off.
Figure 2: Relative error in \(E\) computed with several \(\Delta\mathfrak{T}_{b}\) w.r.t. the reference solution. Figure 3 plots the iterative error in \(E\) using \(\Delta\mathfrak{T}_{b}=0.04,\ 0.1,\ 0.5,\ 1\ \mathrm{ns}\) w.r.t. the reference solution computed by the standard iterative scheme. The plot is formatted so that results are graphed against the iteration count, and each line corresponds to a different time block. Errors are calculated in the norm \(\|\cdot\|_{2}^{t}\), which is the 2-norm over space and time for the temporal interval of a given time block. The blocks chosen on the graphs are representative of the overall behavior. Table 1 displays the average rate of convergence of iterations for both \(E\) and \(T\) estimated by averaging over all time intervals for each tested \(\Delta\mathfrak{T}_{b}\). The estimated average rate of convergence for the \(j^{\mathrm{th}}\) iteration of a given time interval is calculated as \(\rho_{E}^{(j)}=\|\hat{E}-E^{(j+1)}\|_{2}^{t}/\|\hat{E}-E^{(j)}\|_{2}^{t}\). The values shown in Table 1 have been averaged over all iterations and time blocks. The average rate of convergence is observed to increase with \(\Delta\mathfrak{T}_{b}\) and eventually level off at around \(\Delta\mathfrak{T}_{b}=1.00\ \mathrm{ns}\). ## 4 Conclusions In this paper, a new projective-iterative scheme for TRT problems is presented. The algorithm performs iteration cycles which solve (i) high-order equations and (ii) low-order equations over collections of multiple time steps. This is achieved by decoupling the high- and low-order systems in time. The numerical results show this iteration method to be stable in solving the classical TRT problem with the radiation Marshak wave for a very large range of lengths of time blocks. The method converges even with a cycle over a single time block that covers the whole time interval of the problem. As the length of the time blocks is increased, the number of outer iterations (cycles) increases. A potential advantage of this iterative scheme is the possibility for parallelization in time. Since the high- and low-order problems are solved separately from one another during the iterations of a given time block, they could be solved in parallel to one another. Further development and analysis of the methodology is required to fully investigate this possibility. Some of the primary questions to answer include how often the high- and low-order equations should communicate the information they require, and the optimal sizing of the coarse time blocks. The sizing of the time blocks will affect the amount of memory required to store data between communications, how often communication must be performed, and how much computational load can be distributed among the different processes. ## Acknowledgements Los Alamos Report LA-UR-22-31465. The work of the first author (JMC) was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001).
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\Delta\mathfrak{T}_{b}\) (ns) & \(\mathfrak{N}_{b}\) & \(\rho_{E}\) & \(\rho_{T}\) \\ \hline 0.02 & 1 & 0.042 & 0.035 \\ \hline 0.04 & 2 & 0.068 & 0.049 \\ \hline 0.10 & 5 & 0.067 & 0.058 \\ \hline 0.20 & 10 & 0.128 & 0.100 \\ \hline 0.50 & 25 & 0.158 & 0.136 \\ \hline 1.00 & 50 & 0.171 & 0.154 \\ \hline 2.00 & 100 & 0.194 & 0.178 \\ \hline 3.00 & 150 & 0.177 & 0.167 \\ \hline 6.00 & 300 & 0.159 & 0.156 \\ \hline \end{tabular} \end{table} Table 1: Estimated average rate of convergence of iterations for both \(E\) and \(T\) averaged over all time intervals
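To make the structure of the cycles described in Section 2 concrete, the following schematic Python sketch outlines one outer iteration over a single time block \(\Theta_{b}\). The solver routines solve_bte_step and solve_low_order_step, the state containers, and the convergence test are hypothetical placeholders standing in for the actual discretized BTE and LOQD/MEB operators; this illustrates the control flow only, not the paper's implementation.

```
import numpy as np

def iterate_time_block(steps, I_init, low_init, T_guess, solve_bte_step,
                       solve_low_order_step, eps=1e-14, max_iters=100):
    # steps: indices n of the time steps contained in the block Theta_b
    # I_init, low_init: high-/low-order initial conditions at the start of the block
    # T_guess: dict of initial temperature estimates {n: T(t^n)}
    T = dict(T_guess)
    E_prev = {n: None for n in steps}
    for j in range(max_iters):
        # (i) sweep the high-order BTE over all steps of the block and
        #     store the Eddington tensors f_g(t^n) for every step.
        I, eddington = I_init, {}
        for n in steps:
            I, eddington[n] = solve_bte_step(I, T[n])
        # (ii) solve the low-order LOQD + MEB system over the same steps,
        #      closed with the stored Eddington tensors; update E and T.
        E, state = {}, low_init
        for n in steps:
            E[n], T[n], state = solve_low_order_step(state, eddington[n])
        # convergence test on the change of E between outer iterations
        if all(e is not None for e in E_prev.values()):
            err = max(np.linalg.norm(E[n] - E_prev[n]) for n in steps)
            ref = max(np.linalg.norm(E[n]) for n in steps)
            if err <= eps * ref:
                return I, E, T
        E_prev = E
    return I, E, T
```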
2310.03593
Where are the Water Worlds? Identifying the Exo-water-worlds Using Models of Planet Formation and Atmospheric Evolution
Planet formation models suggest that the small exoplanets that migrate from beyond the snowline of the protoplanetary disk likely contain water-ice-rich cores ($\sim 50\%$ by mass), also known as the water worlds. While the observed radius valley of the Kepler planets is well explained with the atmospheric dichotomy of the rocky planets, precise measurements of mass and radius of the transiting planets hint at the existence of these water worlds. However, observations cannot confirm the core compositions of those planets owing to the degeneracy between the density of a bare water-ice-rich planet and the bulk density of a rocky planet with a thin atmosphere. We combine different formation models from the Genesis library with atmospheric escape models, such as photo-evaporation and impact stripping, to simulate planetary systems consistent with the observed radius valley. We then explore the possibility of water worlds being present in the currently observed sample by comparing them with the simulated planets in the mass-radius-orbital period space. We find that the migration models suggest $\gtrsim 10\%$ and $\gtrsim 20\%$ of the bare planets, i.e. planets without primordial H/He atmospheres, to be water-ice-rich around G- and M-type host stars respectively, consistent with the mass-radius distributions of the observed planets. However, most of the water worlds are predicted to be outside a period of 10 days. A unique identification of water worlds through radial velocity and transmission spectroscopy is likely to be more successful when targeting such planets with longer orbital periods.
Aritra Chakrabarty, Gijs D. Mulders
2023-10-05T15:15:27Z
http://arxiv.org/abs/2310.03593v2
Where are the Water Worlds? Identifying the Exo-water-worlds Using Models of Planet Formation and Atmospheric Evolution ###### Abstract Planet formation models suggest that the small exoplanets that migrate from beyond the snowline of the protoplanetary disk likely contain water-ice-rich cores (\(\sim 50\%\) by mass), also known as the water worlds. While the observed radius valley of the Kepler planets is well explained with the atmospheric dichotomy of the rocky planets, precise measurements of mass and radius of the transiting planets hint at the existence of these water worlds. However, observations cannot confirm the core compositions of those planets owing to the degeneracy between the density of a bare water-ice-rich planet and the bulk density of a rocky planet with a thin atmosphere. We combine different formation models from the Genesis library with atmospheric escape models, such as photo-evaporation and impact stripping, to simulate planetary systems consistent with the observed radius valley. We then explore the possibility of water worlds being present in the currently observed sample by comparing them with the simulated planets in the mass-radius-orbital period space. We find that the migration models suggest \(\gtrsim 10\%\) and \(\gtrsim 20\%\) of the bare planets, i.e. planets without primordial H/He atmospheres, to be water-ice-rich around G- and M-type host stars respectively, consistent with the mass-radius distributions of the observed planets. However, most of the water worlds are predicted to be outside a period of 10 days. A unique identification of water worlds through radial velocity and transmission spectroscopy is likely to be more successful when targeting such planets with longer orbital periods. 0000-0002-4880-7880]Aritra Chakrabarty 0000-0002-1888-7070]Gijs D. Mulders ## 1 Introduction Kepler observations have shown that the small exoplanets with radius between 1 \(R_{\oplus}\) and 4 \(R_{\oplus}\) are extremely common among the planets with orbital periods shorter than 100 days (e.g., Lissauer et al., 2011; Mulders et al., 2015, 2018; He et al., 2021). Mass and radius measurements of such planets show evidence of two populations of planets, rocky and gaseous planets (Lopez & Fortney, 2014; Rogers, 2015), from the break in the mass-radius relation (e.g., Wolfgang et al., 2016; Chen & Kipping, 2017). Futher studies with improved precision in stellar radii from the follow-up surveys like the California-Kepler survey (CKS) and with the help of Gaia improved parallaxes (Johnson et al., 2017; Van Eylen et al., 2018) showed that the size distribution of the small exoplanets is bimodal with a radius gap, also known as the "radius valley", at \(\sim\)1.8-2 \(R_{\oplus}\)(Fulton et al., 2017; Fulton & Petigura, 2018; Ho & Van Eylen, 2023). This suggests a clear bifurcation between the "super-Earth" population at the lower peak and the "sub-Neptune" population at the higher peak. The two leading theories that have emerged to explain these two populations of exoplanets are atmospheric dichotomy and compositional dichotomy. The first theory suggests that sub-Neptunes are the planets with primordial H/He dominated atmospheres and super-Earths are the planets that have lost their primordial atmospheres. 
Mass-loss mechanisms that can cause small planets to lose their primordial atmospheres entirely include photo-evaporation loss due to high energy stellar flux (e.g., Owen & Wu, 2017; Owen & Campos Estrada, 2020; Rogers & Owen, 2021; Rogers et al., 2023), core-powered mass loss (e.g., Ginzburg et al., 2018; Gupta & Schlichting, 2019), giant impact loss (e.g., Inamdar & Schlichting, 2016; Biersteker & Schlichting, 2019; Chance et al., 2022), among others. The other theory suggests that the size bimodality is a manifestation of the two types of core compositions of the small planets. Super-Earths are the planets with Earth-like rocky core composition and sub-Neptunes are the planets with water-ice-rich cores with larger radii due to lower density (e.g., Mordasini et al., 2009; Raymond et al., 2018; Zeng et al., 2021; Venturini et al., 2020). These water-ice-rich planets, also known as "water worlds", are the planets that form exterior to the snowline of the protoplanetary disks and migrate inward through type-I migration to the location where we observe today (e.g., Izidoro et al., 2017; Raymond et al., 2018, 2020). Such plantes form with a silicate-to-ice ratio of 1:1 (e.g., Lodders, 2003; Lopez, 2017; Zeng et al., 2019; Aguichine et al., 2021; Mousis et al., 2020) beyond the snowline. These are analogous to the water-rich minor planets and moons (e.g., Enceladus, Pluto, etc.) of the solar system and the water-rich cores of Uranus and Neptune (Mousis et al., 2018). However, the existence of such planets around other stars at short orbital distances remains elusive to date. Luque & Palle (2022) presented a sample list of small (size \(<\) 4 \(R_{\oplus}\)) transiting planets around M-dwarfs with refined mass and size estimation which strongly hints at a distinct population of these water worlds in the mass-radius and density space. However, the observed mass-radius distributions can also be explained with a rocky population of planets formed in-situ with varying atmospheric content (Rogers et al., 2023). Moreover, in-situ models have been successful at explaining the broad distributions of orbital periods, eccentricities, and radii of the Kepler planets (e.g., Hansen & Murray, 2013; Ogihara et al., 2015; MacDonald et al., 2020). So the presence of water worlds is not a foregone conclusion from current observations. However, in-situ planets start with excess mass in the inner region of the disks, and therefore need either pebble drift (e.g., Boley et al., 2014; Chatterjee & Tan, 2014, 2015) or planet migration (e.g., Izidoro et al., 2017; Raymond et al., 2018, 2020), which would also bring in water worlds from beyond the snowline to the inner region. The latter processes also include planet-disk gravitational interaction that in-situ models ignore (Izidoro et al., 2017). Moreover, migration models too can explain most features of the Kepler systems, including the radii and mutual inclinations (coplanarity) of the planets, and the period-ratio distribution of the systems in mean-motion resonance (e.g., Trappist-1 system). However, most (90-95%) Kepler systems are found to be somewhat offset from the first-order mean motion resonance (Lissauer et al., 2011; Fabrycky et al., 2014) which would require other physical processes, such as dynamical instabilities and collisions (Izidoro et al., 2017, 2022) among others, to break the resonance chains after the migration phase. 
While the planets containing rocky cores have been shown to reproduce the Kepler size bimodality with the help of atmospheric escape models, the possible fate of the hypothetical water worlds after migrating inward has been somewhat contentious. Many of the early models on compositional dichotomy do not provide any explanation to why the water worlds would not accrete H/He dominated atmospheres (Izidoro et al., 2017; Raymond et al., 2018). Izidoro et al. (2022) present their pebble accretion model of planet formation to show that giant impacts on the migrating planets by planetesimals can wipe away the primordial atmospheres of most of the planets, unveiling the two kinds of core compositions for a certain initial condition. In such a case, giant impacts are shown to be dominating over other atmospheric mass-loss mechanisms, such as photo-evaporation, responsible for atmospheric dichotomy. On the other hand, Mordasini (2018) discusses the presence of water worlds having atmospheres by using the Bern model but most of such planets turn out to be Neptune-sized and hence, not useful in reproducing the Kepler size bimodality. Hence, the impact of the atmospheric evolution and escape of the water-rich sub-Neptunes with atmospheres on the overall demographics of small exoplanets remains a subject of interest. This is further motivated by the recent developments in the model of internal structures of water worlds with H/He atmospheres (Lopez, 2017; Zeng et al., 2019). In this paper, we provide an agnostic overview of the effect of atmospheric evolution and mass-loss processes such as photo-evaporation and giant impacts on the planets simulated through two suites of formation models: migration and in-situ. We use the planetesimal accretion models from the Genesis library developed by Mulders et al. (2020) for that purpose. We compare the outcomes of our models with both the size distribution from Kepler and mass-radius distributions of the transiting planets that have been followed up by radial velocity (RV) observations (e.g., LP22). We further use these comparisons to benchmark our migration models from which we predict the occurrences of water worlds as a function of their mass, radius and orbital period. In Section 2, we explain the general methodology: the combinations of formation models, host stars, and atmospheric mass-loss processes and how we implement them to reproduce the Kepler size bimodality. We define the possible water worlds from observational perspective and discuss the occurrences of such water worlds from observations (as upper limits) and from models in Section 3. In Section 4, we elaborate on how we calculate the probabilities of finding water worlds from their kernel density estimations and use them to guide future search for water worlds. We conclude the key points in Section 5. ## 2 Methods Here we study the different processes of atmospheric evolution of the small planets in tandem with their formation theories. Instead of adopting some initial random distributions for the properties of the planetary cores in our study, we use the outcomes of the planetesimal accretion models available from the Genesis database (see Section 2.1). This combined study provides an avenue to analyze the possible bulk compositions of the observed planets from their bulk density and orbital period distributions. 
This study also allows us to verify which of the contesting theories of atmospheric evolution of the small planets effectively reproducing the Kepler size bimodality are consistent with the formation theories. Moreover, the Genesis models include migrating planets in addition to planets forming in-situ and also simulate the giant impact phases required to study the effect of impact loss. We produce different models for the final architectures of the simulated planets based on their formation process (migration or in-situ), type of the host star (G or M), atmospheric loss mechanisms (photo-evaporation or impact), and bulk composition (rocky or water-ice-rich). ### The Genesis Database of Models of Planet Formation The Genesis database1 is a library of N-body simulations of formation of the terrestrial planets developed by Mulders et al. (2020). These models use the simulations of Hansen and Murray (2013) to model the Kepler super-Earths as a starting point. Based on the distribution pattern of the solids around the host stars, the annular range of distribution of the solids, and the ratio of the number of embryos to the number of planetesimals, Mulders et al. (2020) presented 15 models with different initial conditions, each having 50 runs of simulations. 12 of such models include planets growing in-situ and the other 3 include migration. Each model shows a different distribution of core mass and orbital period (see Figure 3 of Mulders et al., 2020). We combine these models so that the final suites of models exhibit lognormal-like mass distributions ranging between \(\sim 1M_{\oplus}\) and \(\sim 25M_{\oplus}\) and period distributions closest to the Kepler period distribution ranging between \(\sim 1\) day and \(\sim 100\) days. We arrive at 3 suites of such models: Footnote 1: [https://eos-nexus.org/genesis-database/](https://eos-nexus.org/genesis-database/) * In-situ: combining all 12 in-situ models from Genesis. * Migration-A: combining the migration models Gen-M-s22 and Gen-M-s50 from Genesis. * Migration-B: combining the migration models Gen-M-s10 and Gen-M-s50 from Genesis. The distributions of the core masses for the 3 suites of models are shown in Figure 3. The observed period ratio distribution of small Kepler exoplanets (size \(<4\)\(R_{\oplus}\)) suggests that majority of the resonant chains break after the gas disk dispersal (e.g., Izidoro et al., 2022, 2021, 2017). To match our migration models with this feature, we choose 95% of the runs with broken resonant chain and 5% of the stable runs for each model, following Izidoro et al. (2021, 2022). This step only provides us with the mass and orbital period distributions of the planetary cores. To determine the bulk compositions of the cores we need to define the location of the snowline which in turn depends on the spectral type of the central host star. The choice of host stars is explained in the following subsection. \begin{table} \begin{tabular}{c c c c} \hline \hline Host star & Mass (M\({}_{\oplus}\)) & T\({}_{\rm eq}\) at 1 AU (K) & Snowline location (AU) \\ \hline G & 1.0 & 279 & 2.2 \\ M & 0.35 & 108 & 0.8 \\ \hline \end{tabular} \end{table} Table 1: Fiducial values of the properties of the host stars and the disks chosen in our models. ### Host Stars The final architectures of the planets depend on the central star. We consider two types of hosts stars for our models: G-type and M-type. 
We use the same distribution of planetary cores adopted from the Genesis database, albeit simulated for a Sun-like star, for both the choices of the host stars. This is motivated by the fact that the overall size distribution of Kepler planets (Gupta & Schlichting, 2019; Ho & Van Eylen, 2023) as well as the total mass budget of the super-Earths and sub-Neptunes (see discussions in Mulders, 2018) does not alter significantly with the change in the host star mass, although the disk masses are known to scale with the stellar mass (Pascucci et al., 2016). The parameters that change with the host stars are the location of snowline of the disks which dictates the water-ice content of the cores, the luminosity of the star which dictates the equilibrium temperature (\(T_{eq}\)) of the planets, and the high energy XUV flux from the stars which dictate the photo-evaporation loss rate of the atmospheres of the planets. The fiducial values of the parameters for the two types of stars chosen in our models are shown in Table-1. For the G-type host star, we have chosen the same values as that of the Sun and for the M-type host star, we have chosen the median values of the parameters of the host stars in the target list of Luque & Palle (2022). #### 2.2.1 Location of Snowline We consider the location of the snowline at the time of planetesimal formation as this is the time when the water/ice gets trapped into the planetesimals beyond the snowline. They do not further sublimate on migrating inward unlike the pebbles (Mulders et al., 2015) but can only lose water partially by late collisions after the gas-disk dispersal or from \({}^{26}\)Al heating (Izidoro et al., 2017, 2022; Raymond et al., 2018; Monteux et al., 2018). The location of the snowline in the solar system at the time of planetesimal formation has been inferred from the transition between hydrous and anhydrous asteroids to be \(\sim 2.5\) AU (Abe et al., 2000). However, this estimated location of snowline could change significantly if we consider that the asteroids have been scattered into the current location (Walsh et al., 2012; DeMeo & Carry, 2014). Moreover, Mulders et al. (2015) calculate the \(1\sigma\) range of locations of the snowlines for a solar-mass star to be between \(\sim 2\) AU and \(\sim 5\) AU for a disk made up of small grains and between \(\sim 1.1\) AU and \(\sim 2.4\) AU for a disk made up of large grains. Keeping this uncertainty in mind, we choose the fiducial location of the snowlines for the G-type stars at 2.2 AU. Following the scaling relation of Mulders et al. (2015) and keeping in mind the associated uncertainties, we select the fiducial location of the snowlines for M-type stars as 0.8 AU. However, the effect of other values of snowline locations is described in Section 3.3. ### The Cores and H/He Atmospheres of the Genesis Planets The final states of simulations of the Genesis planets provide us with the distributions of mass and semi-major axis of the planetary cores. We choose the cores with orbital period (\(P\)) \(<100\) days and size within 1 \(R_{\oplus}\) and 4 \(R_{\oplus}\) as we are interested in the super-Earth and sub-Neptune populations. We consider two kinds of bulk compositions of the cores: * Earth-like rocky composition with 32.5% Fe and 67.5% MgSiO\({}_{3}\) by mass, as the detailed works on mass-radius-relations of the observed terrestrial planets indicate the dominant presence of such cores (Zeng et al., 2016; Dressing et al., 2015; Carter et al., 2012, etc.). 
* Earth-like rocky composition (50-100% by mass) + a layer of H\({}_{2}\)O (0-50% by mass). To calculate the water/ice mass-fraction (WMF) of the cores, we assume that, in the beginning of the simulations, the embryos/planetesimals within the snowlines are fully rocky (100% Earth-like + 0% H\({}_{2}\)O) and beyond the snowlines are water-ice rich (50% Earth-like + 50% H\({}_{2}\)O) (Raymond et al., 2018; Izidoro et al., 2021). Over the course of the simulations, the water mass-fractions are evolved in post-process by updating them after each collision according to their mass-ratios and pre-collision water mass-fractions. The evolution of the water mass-fractions for a migration model (Gen-M-s22) and an in-situ model (Gen-O-s22) from the Genesis database (Mulders et al., 2020) is shown in Figure 1. Figure 1: Water mass-fractions of the embryos of a migration suite and an in-situ suite of simulations from the Genesis database evolving over the course of simulation, considering a G-type host star. Migration models can explain the transport of water-rich cores to the observable region i.e., within orbital periods of 100 days, while the in-situ models cannot. Also, in the migration model, the planets are found to be trapped in resonance within \(\sim\)1 Myr after which their semi-major axes do not change. Figure 2: Distributions of water mass-fraction (WMF) of the cores produced by all the migration models combined and the in-situ models combined from the Genesis database. Note that, the cores are predominantly rocky or within high water mass-fraction (\(>0.3\)). We identify the planets with water mass-fraction \(>0.3\) as a water-rich planet which is primarily motivated from observations as explained in Section 3.2. The relative abundance of the rocky (WMF \(<0.1\)), water-rich (WMF \(>0.3\)) and intermediate planets with water-rock composition (\(0.1<\) WMF \(<0.3\)) are shown in Figure 2. Evidently, the low abundance of planets with intermediate water-rock composition from the migration models is consistent with our definition. Also, the fraction of water-ice-rich cores is much higher around M-type stars than G-type stars due to the fact that the snowline is much closer in the case of M-type stars (see Table 1). Figure 1 and 2 re-confirm that only migration models can explain the efficient transport of high mass-fraction of H\({}_{2}\)O to the planets within orbital periods of 100 days. We consider that all the planets accrete H/He atmospheres from the disk and retain that after the gas-disk dispersal. We consider a log-uniform distribution for the initial atmospheric mass-fraction (\(X_{i}\)) of the planets following Rogers et al. (2023) as: \[\log X_{i}=\mathscr{U}(10^{-3},0.1) \tag{1}\] The upper limit is consistent with the estimations of the atmospheric mass-fractions (\(X\)) of the observed sub-Neptunes from their mass and radius measurements (e.g., Lopez and Fortney, 2014; Wolfgang and Lopez, 2015). It also covers the range of atmospheric mass-fractions possible after the "boil-off" stage (Rogers et al., 2023; Misener and Schlichting, 2021). We consider any planet with \(X\lesssim 10^{-4}\) as a stripped super-Earth. Figure 3: Distributions of the core mass of the planets we adopted from the Genesis database segregated as rocky cores and water-ice-rich cores by considering a G-type host star. 
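As a concrete illustration of the post-processing described above, the minimal Python sketch below shows (i) a mass-weighted update of the water mass-fraction after a collision and (ii) a log-uniform draw of the initial atmospheric mass-fraction of Equation 1. It is illustrative only: the function names are hypothetical, and the simple mass-weighted mixing rule is an assumption based on the statement that the water mass-fractions are updated "according to their mass-ratios and pre-collision water mass-fractions", not the exact Genesis post-processing code.

```python
import numpy as np

rng = np.random.default_rng(42)

def post_collision_wmf(m1, wmf1, m2, wmf2):
    """Mass-weighted water mass-fraction of the merged body.

    Assumes perfect merging with no water loss; m1, m2 are the
    pre-collision masses and wmf1, wmf2 their water mass-fractions.
    """
    return (m1 * wmf1 + m2 * wmf2) / (m1 + m2)

def initial_wmf(a_au, snowline_au):
    """Initial composition: fully rocky inside the snowline,
    50% Earth-like + 50% H2O beyond it."""
    return 0.0 if a_au < snowline_au else 0.5

def draw_initial_x(n):
    """Log-uniform initial atmospheric mass-fraction, X_i between 1e-3 and 0.1 (Eq. 1)."""
    return 10.0 ** rng.uniform(np.log10(1e-3), np.log10(0.1), size=n)

# Example: a 2 M_E rocky embryo (WMF=0) accretes a 1 M_E icy embryo (WMF=0.5)
print(post_collision_wmf(2.0, 0.0, 1.0, 0.5))   # -> 0.1666...
print(draw_initial_x(3))
```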
Note that, in our migration models, there is no clear separation in mass between the rocky cores and the water-rich cores which shows why compositional dichotomy is unlikely to work in our models. ### Converting mass to radius The size of the planetary cores are calculated by using the model of Zeng et al. (2019), as: \[r_{c}=f(\mathrm{WMF})\ m_{c}^{1/3.7}, \tag{2}\] where \(r_{c}\) is the core radius in \(R_{\oplus}\), \(m_{c}\) is the core mass in \(M_{\oplus}\), and \(f\) is the mass-radius constant. \(f\) is a function of the water mass-fraction (WMF) of the cores and is calculated by following Zeng et al. (2019). In the case of water-ice-rich planets, Zeng et al. (2019) assume that the bulk of H\({}_{2}\)O exists in the deep interior in solid phase along the liquid-solid phase boundary. This model also includes a thin isothermal vapor/liquid/super-critical fluid envelope on top of the ice layer. Equation 2 provides an approximate expression for the overall structure. The distributions of the core masses of the Genesis models are shown in Figure 3. The existing models for calculating the size of a planet having an atmosphere for a given value of \(X\) and equilibrium temperature (\(T_{eq}\)) are discussed by Rogers et al. (2023). For a planet with age \(>1\) Gyr, we found that the models by Lopez and Fortney (2014) and from the publicly available code _evapmass_ by Owen and Campos Estrada (2020) are in good agreement and both account for the contraction of the atmospheres of the planets due to cooling over time. As pointed out by Rogers et al. (2023), the model by Zeng et al. (2019) considers a temperature parameter which is the temperature at a pressure of 100 bar and can be interpreted as the equilibrium temperature only when the planets are old enough (age \(>1\) Gyr). Thus the model by Zeng et al. (2019) may be used to calculate only the size of the young planets. As we focus on the exoplanets with age \(>1\) Gyr in all our calculations, we consider the model by Lopez and Fortney (2014) to compute the size of atmospheres. Also, the size of an atmosphere does not depend on its metallicity as Lopez and Fortney (2014) showed that any difference between the solar-metallicity model and the enhanced-opacity model gets erased after several Gyr. The atmospheric mass-fraction of the planets and hence, their sizes further change as they lose their atmospheres over time through different mechanisms explained in the next subsection. These loss-mechanisms shape the final architectures of the planets which can be compared with the observed planetary statistics. ### Loss of Atmosphere The terrestrial planets can lose their atmospheres due to Parker-type winds. We discuss here two of the leading theories of such mass-loss mechanisms: photo-evaporation loss and impact loss. The effect of core-powered mass loss on the size distribution of small exoplanets across orbital periods is essentially similar to that of photo-evaporation (e.., Rogers et al., 2021) and hence, not included in this work. The photo-evaporation loss mechanism has been widely studied (Owen and Campos Estrada, 2020; Owen and Wu, 2017) and is found to robustly reproduce the bimodal size distribution of the Kepler planets through simulation. Depending on the density and distance from the stars, the planets could entirely or partially lose their atmospheres, resulting in an overall atmospheric dichotomy. 
On the other hand, the impact theory suggests that the planets undergoing giant impacts (mass-ratio \(\gtrsim 0.1\)) can completely lose their primordial atmospheres regardless of their mass and distance from the host stars (Izidoro et al., 2022; Biersteker and Schlichting, 2019), exposing their compositional diversity. Here, we assess if the Genesis planets undergo adequate number of giant impacts to reflect their compositional dichotomy, i.e. rocky versus water-ice-rich composition, on their size distribution. We subject the 3 different models explained in Section 2.1 to these atmospheric mass-loss processes to assess which combinations of models and mass-loss mechanisms turn out to be consistent with the Kepler size distribution. Figure 4: The radius of the Genesis planets versus their orbital period without any atmospheric loss (left) and with photo-evaporation loss (middle) and the radius distributions of the planets of all periods after photo-evaporation loss (right). The black dashed lines in the first two columns show the observed slope of radius valley over orbital period (Izidoro et al., 2022; Van Eylen et al., 2019; Gupta and Schlichting, 2019). The observed radius distribution for the Sun-like (G-type) stars is taken from Fulton and Petigura (2018). #### 2.5.1 Photo-evaporation Loss We calculate the photo-evaporation mass-loss rate and timescale by following the formalism of Owen and Wu (2017). The mass-loss rate depends on the high energy luminosity (\(L_{HE}\)) of the host stars and an efficiency parameter \(\eta\) denoting the efficiency of these high energy photons for mass-removal (Owen and Wu, 2017). We calculate \(L_{HE}\) as a function of mass and age of the host stars by following Owen and Wu (2017) who assume a linear dependence of \(L_{HE}\) on the stellar mass. We numerically solve the mass-loss equations and take the size of the planets after an age of 5 Gyr as the final sizes of the planets since they do not change significantly afterwards. Following Rogers and Owen (2021), we calculate the photo-evaporation efficiency, considering the energy-limited scenario, as a function of the escape velocity of the planets (\(v_{esc}\)) as: \[\eta=\eta_{0}\left(\frac{v_{esc}}{25\ km/s}\right)^{\alpha} \tag{3}\] \(\eta_{0}\) and \(\alpha\) are free parameters of our models and hence, we run our simulations over a grid of values for \(\eta_{0}\) and \(\alpha\). For all the model-suites with G-type host stars, we find that the size distribution of the simulated planets, especially the location of the radius valley, turns out to be consistent with that from Kepler when we adopt \(\eta_{0}<0.06\) and \(0<\alpha<0.5\). Since, we do not have an observed size distribution for the planets around late M-dwarfs to date to benchmark with, we use the same values of \(\eta_{0}\) and \(\alpha\) for the case of M-type host stars and still find a bimodal size distribution of the simulated planets as speculated (Cloutier and Menou, 2020). Figure 4 shows that the migration-A model is found to provide the best match with the Kepler size distribution. However, photo-evaporation can produce the "evaporation valley" with all the three suites of models and for both the types of host stars. It's the fractional occurrence of water-ice-rich planets that distinguishes the three models. 
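To make the scaling of Equation 3 concrete, the sketch below evaluates \(\eta\) from the escape velocity and inserts it into a standard energy-limited mass-loss rate, \(\dot{M}=\eta L_{HE}R_{p}^{3}/(4a^{2}GM_{p})\). This is a simplified illustration under stated assumptions: the energy-limited form shown here is a textbook stand-in for the full Owen and Wu (2017) prescription that is solved numerically above, and the values of \(\eta_{0}\), \(\alpha\), and \(L_{HE}\) in the example are placeholders chosen within the ranges quoted in this subsection.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m
AU = 1.496e11          # m

def escape_velocity(m_p, r_p):
    """Escape velocity in m/s for mass in kg and radius in m."""
    return np.sqrt(2.0 * G * m_p / r_p)

def pe_efficiency(v_esc, eta0=0.05, alpha=0.3):
    """Equation 3: eta = eta0 * (v_esc / 25 km/s)^alpha.
    eta0 < 0.06 and 0 < alpha < 0.5 reproduce the Kepler radius valley here."""
    return eta0 * (v_esc / 25e3) ** alpha

def energy_limited_mdot(m_p, r_p, a, l_he, eta):
    """Assumed energy-limited mass-loss rate (kg/s):
    Mdot = eta * L_HE * R_p^3 / (4 a^2 G M_p)."""
    return eta * l_he * r_p**3 / (4.0 * a**2 * G * m_p)

# Example: a 5 M_E, 2.5 R_E sub-Neptune at 0.1 AU; L_HE = 1e22 W is illustrative only
m_p, r_p, a = 5 * M_EARTH, 2.5 * R_EARTH, 0.1 * AU
eta = pe_efficiency(escape_velocity(m_p, r_p))
print(energy_limited_mdot(m_p, r_p, a, 1e22, eta))  # kg/s
```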
#### 2.5.2 Impact Loss

Giant impacts during planetary accretion can deliver significant energy to the planet, thereby heating the core and the envelope, which can cause hydrodynamic escape of the H/He envelope leading to rapid mass loss. The shockwave generated by a giant impact can also eject a fraction of the envelope, but that effect is less significant than the thermal effect (Biersteker and Schlichting, 2019) and hence, we only consider the latter in this work. Following Izidoro et al. (2022) and Biersteker and Schlichting (2019), we assume that, after the gas-disk dispersal, the thermal effect from a giant impact on a planet (where the mass of the impacting body is \(\geq 0.1\) times the planet mass) can completely strip the primordial atmosphere. In the absence of the gas disk, this loss cannot be replenished. The in-situ simulations are performed without any gas disk throughout, whereas in the case of the migration models, the gas disk is dispersed 5 Myr after the start of the simulations. We assume that an impact by a less massive planetesimal does not change the atmospheric mass-fraction of the planet. Figure 5 shows that the impact stripping process can somewhat reproduce the size bimodality with the migration-B model, whereas it fails to produce the radius valley with the migration-A and in-situ models. Moreover, in the case of migration-B, it is not the dichotomy in the bulk composition but rather the dichotomy of atmospheres (presence versus absence of atmospheres) that causes the bimodality of the size distribution in our model, similar to the case of photo-evaporation loss, as evident from Figure 5.

Figure 5: Same as Figure 4 but considering only impact loss. Evidently, impact loss of H/He atmospheres is less efficient than photo-evaporation loss in creating the radius valley in our models.

### Discussions on Reproducing the Size Bimodality

Our study shows that the current suites of migration models of the Genesis database perform significantly better than the in-situ models in reproducing the Kepler size bimodality. While photo-evaporation is found to robustly explain the size bimodality with different distributions of core mass and location, the impact stripping process appears to do so only with certain distributions. The primary aim of studying the impact stripping process was to test the hypothesis of compositional dichotomy through impact stripping of migrating planets, as proposed by Izidoro et al. (2022). However, as we have a different model setup, we find that impact stripping of the planets from our migration models does not result in compositional dichotomy, but rather accounts for the size bimodality with atmospheric dichotomy. This is primarily because the masses of the rocky and water-rich cores from the migration models do not show any bimodality (see Figure 3), unlike the planets of Izidoro et al. (2022), thereby lacking the inherent bimodality essential for compositional dichotomy to work. Also, our migration models produce a significant number of bare water-ice-rich planets adding to the super-Earth population, in contrast to Izidoro et al. (2022). This happens because no small water-ice-rich planet goes through any giant impact in the models of Izidoro et al. (2022), whereas such planets are found to lose their atmospheres by both giant impacts and photo-evaporation in our migration models. We use the occurrence of these bare water worlds as a diagnostic in our analysis, as explained in the following section.
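For reference, the impact-stripping criterion that drives the migration-B results above reduces to a few lines of bookkeeping per planet, sketched below. The collision-record format and function name are hypothetical and used only for illustration; the criterion itself (a post-dispersal impactor-to-planet mass ratio \(\geq 0.1\) completely strips the atmosphere, smaller impacts leave it unchanged) follows the description in Section 2.5.2.

```python
def apply_impact_stripping(x_atm, collisions, t_disk_myr=5.0):
    """Update an atmospheric mass-fraction given a planet's collision history.

    x_atm       : atmospheric mass-fraction after the boil-off stage
    collisions  : list of (time_myr, impactor_mass, planet_mass) tuples
    t_disk_myr  : gas-disk lifetime; only impacts after dispersal count,
                  since earlier losses can be replenished from the disk
                  (0 for the in-situ runs, 5 Myr for the migration runs)
    """
    for t, m_imp, m_pl in collisions:
        if t >= t_disk_myr and m_imp / m_pl >= 0.1:
            return 0.0          # giant impact: atmosphere completely stripped
    return x_atm                # smaller impacts leave X unchanged

# Example: one early giant impact (inside the disk lifetime) and one late small impact
history = [(2.0, 0.5, 3.0), (30.0, 0.1, 3.6)]
print(apply_impact_stripping(0.02, history))   # -> 0.02 (atmosphere retained)
```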
Since the in-situ models do not favor the water-world hypothesis, we assess the possibility of the water worlds in the following sections with the help of our migration models. ## 3 The water worlds: from models and observations As migration models suggest the possibility of water-ice-rich super-Earths and sub-Neptunes, we look for similar patterns from observations. We consider the planets with both mass and radius measurements for this. Such planets, albeit currently small in sample size, allows us to look into the mass-radius-orbital period distributions of the planets that can be compared with model outcomes. ### The Sample List of Luque and Palle 2022 and the TEPCat Catalogue Luque & Palle (2022) (hereafter, LP22) suggested three populations of planets: "rocky", "water-rich", and "gas-rich" from the mass-radius relations and the density distribution of the planets around M-dwarfs that they studied. Although the density of a "water-rich" planet can also be explained with a rocky planet having a thin layer of atmosphere, it is the distinct clustering of the "water-rich" planets in the mass-radius-density space that interests us as we find similar planets from our migration models. Even after updating the sample list by including planets from the latest updated Transiting Extrasolar Planets catalogue2(TEPCat, Southworth (2011)) around M-dwarfs, we find a similar pattern. Although a few planets have now appeared with intermediate densities (see the green points on Figure 6), the fraction of such "water-rich" planets is found to be almost unchanged which motivates us to use this sample list to define the water worlds. Footnote 2: [https://www.astro.keele.ac.uk/jkt/tepcat/](https://www.astro.keele.ac.uk/jkt/tepcat/) ### Revised Definition of Water Planets We assume that the "water-rich" planets identified by LP22 are truly water planets and verify if their fractional occurrence is consistent with our model predictions. This would imply that these planets have significantly lost their primordial H/He atmospheres and hence, we call them bare water planets (BWP). On the other hand, our migration models suggest that a fraction of the gas-rich planets could also be made up of water-ice-rich cores which we call the gas-rich water planets (GWP). Following LP22, we define the BWPs as the planets strictly falling on or around the 50% H\({}_{2}\)O line in the mass-radius diagram. Keeping in mind the uncertainty in the models of the mass-radius relations, we allow a small range of water mass-fractions around 0.5 for the definition of BWPs. To accommodate the "water-rich" planets of LP22 into our definition, we set this range between the 40% H\({}_{2}\)O line and the 70% H\({}_{2}\)O line in the mass-radius diagram (see Figure 6). To ensure this, we introduce a new parameter \(\mathscr{F}\) which is somewhat equivalent to the mass-radius constant (\(f\)) in Equation 2 as: \[\mathscr{F}=\frac{r_{p}}{m_{p}^{1/3.7}}, \tag{4}\] where \(m_{p}\) and \(r_{p}\) are the observed mass and radius of the planets in Earth units. For a bare planet, \(\mathscr{F}\) becomes exactly equal to \(f\). Thus for a bare Earth-like rocky planet, \(\mathscr{F}\)=\(f\)(0)=1, and according to our definition, BWPs are the planets with \(\mathscr{F}\) between \(f\)(0.4)=1.1976 and \(f\)(0.7)=1.3164. Following this, we identify the planets with \(F\geq f\)(0.7) as gas-rich planets. Accordingly, we also identify the planets from our models as water planets (bare or gas-rich) if their water fractions are \(\geq 0.4\). 
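The classification just described can be written compactly. The sketch below computes \(\mathscr{F}=r_{p}/m_{p}^{1/3.7}\) from a measured mass and radius (Equation 4) and bins a planet using the thresholds \(f(0.4)=1.1976\) and \(f(0.7)=1.3164\) quoted above; the function names are hypothetical and serve only as an illustration of the definition.

```python
F_040 = 1.1976   # f(WMF=0.4), mass-radius constant following Zeng et al. (2019)
F_070 = 1.3164   # f(WMF=0.7)

def script_f(m_p, r_p):
    """Equation 4: F = r_p / m_p^(1/3.7), with mass and radius in Earth units."""
    return r_p / m_p ** (1.0 / 3.7)

def classify(m_p, r_p):
    """Bin an observed planet by its F value following Section 3.2."""
    f_val = script_f(m_p, r_p)
    if f_val >= F_070:
        return "gas-rich"
    if f_val >= F_040:
        return "bare water planet (BWP)"
    return "bare rocky (or rocky with a thin atmosphere)"

# Example: Earth (1 M_E, 1 R_E) and a 5 M_E, 2.0 R_E planet
print(script_f(1.0, 1.0), classify(1.0, 1.0))   # 1.0, bare rocky
print(script_f(5.0, 2.0), classify(5.0, 2.0))   # ~1.29, bare water planet
```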
This constraint is consistent with the models as the water fractions of most of the planets from our models are either 0-0.1 or 0.3-0.5 (see Figure 2). This is also justified from the observational perspective as water planets with much lower water fractions would be difficult to detect in the initial attempts. The upper limit of 0.7 on the water fraction is also consistent with our models as none of the planets from our models have a water fraction \(>0.5\) and most of the gas-rich planets are found with \(\mathscr{F}>f\)(0.7). We use this definition to identify the bare rocky, bare water, and gas-rich planets from observations and from our models. For observed planets around M-dwarfs, we use our updated list mentioned in Section 3.1, and in the case of G-dwarfs, we use the planets from the latest version of TEPCat. We apply the same constraint on the precision of the mass and radius of the planets as LP22 in all cases, i.e., 8% on radius and 25% on mass, with an exception in the case of Kepler-138 d. We include this planet (orbiting an M-type star) in our sample list, albeit having a mass uncertainty of \(\sim\)33%, as it is a strong water-world candidate suggested by previous studies (e.g., Piaulet et al., 2023). We then compare the fractional occurrence of the bare water planets predicted by our models with that from observations.

Figure 6: The mass-radius relations of the planets from LP22 and the latest updated TEPCat (Southworth, 2011).

### Mass-Radius-Period Distributions and Occurrence of Water Planets

Both migration and in-situ models are found to explain the observed mass-radius distributions for both types of host stars, as evident from Figure 7. This is also on par with the arguments of Rogers et al. (2023). Thus, although current observations cannot be used to draw any inference about the water worlds, we leverage our migration models to prescribe a systematic set of diagnostics that can motivate future observations. We calculate the maximum possible occurrence of the bare water planets as a fraction of the bare planets from current observations by using the definition described in Section 3.2, considering the error bars of the masses and radii of those planets. Note that these values only represent the upper limits of the occurrence of bare water planets as the possibility of them being rocky planets with thin atmospheres cannot be ruled out. Table 2 shows the percentage occurrence of the BWPs among bare planets calculated from observations (upper limits) and from our migration models for both G- and M-type host stars. For that, we choose 3 of the combinations of migration models and mass-loss mechanisms mentioned in Section 2.5: migration-A + photo-evaporation, migration-B + photo-evaporation, and migration-B + impact stripping. To predict the occurrences of the water planets from our models, we calculate the probability density functions (PDF) from the kernel density estimations (KDE) of the \(\mathscr{F}\)-period distributions of the simulated planets and then integrate those PDFs over the range of \(\mathscr{F}\) values and orbital periods that the samples of observed planets span to get the total probabilities.

Figure 7: The mass-radius relations of the simulated planets from our migration and in-situ models, both of which are found to explain the mass-radius relations of the observed planets. However, they predict completely different bulk compositions, which can be reflected in the atmospheric composition.
We calculate KDEs for each of the four types of compositions, viz., bare rocky, bare water, gas-rich rocky, and gas-rich water, as shown in Figure 8.

Figure 8: The kernel density estimation (KDE) of the occurrences of the simulated planets in the period-\(\mathscr{F}\) plane calculated separately for the bare rocky, bare water, gas-rich rocky, and gas-rich water planets. Likely natures of the observed planets identified only from their \(\mathscr{F}\) values are found to be consistent with the individual KDEs. These KDEs can be used to calculate the conditional probability of detecting a planet of a given nature over the entire period-\(\mathscr{F}\) plane, even when the simulated data points do not cover the entire plane. This information is then leveraged to compute the likelihood of a detected planet of given period and \(\mathscr{F}\) value being either a bare water planet or possessing a water-rich core in general, using a Bayesian approach.

Evidently, the KDEs calculated from our models are found to be consistent with our classification of the observed planets. Table 2 shows the occurrences from our models for the fiducial values of the locations of the snowline of the disks (see Table 1).

Figure 9: Variation of the percentage occurrence of the bare water planets among the bare planets (BWP/BP) calculated from our models by using their period-\(\mathscr{F}\) KDEs with the location of the snowline of the disk. The dark- and light-shaded regions denote the \(1\sigma\) and \(3\sigma\) uncertainties, respectively. The blue errorbar denotes the \(1\sigma\) uncertainty range of the BWP/BP occurrence computed from observations. The cyan vertical lines denote the possible range of snowline locations that can explain the upper limit of the occurrence of the bare water planets derived from observations.

Impact loss is very efficient at stripping H/He atmospheres, thereby creating a lot of bare water planets. As observations indicate a much lower occurrence (upper limit) of bare water worlds (see Table 2), this alternatively requires the snowline to be further away. We also calculate the occurrences for other values of the location of the snowline. Figure 9 shows which range of snowline locations is consistent with observations. Clearly, the fraction of bare water planets from the photo-evaporation models is consistent with the snowline locations suggested by the disk models (Raymond et al., 2007; Mulders et al., 2015). Conversely, the fraction produced by the impact models is found to be inconsistent with the upper limits obtained from observations, which would otherwise require the snowline locations to be unusually far from the host stars (\(>3\) AU for G-type hosts and \(\gtrsim 1.6\) AU for M-type hosts).

### Discussions on the Possibility of Water Worlds

Our study shows that the migration + photo-evaporation models strongly favor the water world hypothesis. The impact stripping mechanism is found to produce a larger number of water planets than photo-evaporation when subjected to the same distribution of planets. Overall, both migration models can explain well the upper limits set on the occurrence of the bare water planets derived from observations with some adjustments to the models and the locations of the snowline. Moreover, Figure 8 shows that the KDEs can be used to estimate the likelihood of a gas-rich planet having a water-rich core, which otherwise cannot be inferred directly from observations.
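The occurrence and probability calculations described in this subsection can be sketched with a Gaussian KDE, as below. The snippet builds one KDE per composition class in the (log period, \(\mathscr{F}\)) plane and evaluates the probability that a detected planet with a given period and \(\mathscr{F}\) is water-rich via Bayes' rule; the mock samples, class weights, and parameter ranges are illustrative assumptions, and the actual integration over the observed sample ranges described above is more involved.

```python
import numpy as np
from scipy.stats import gaussian_kde

def build_kdes(samples):
    """samples: dict mapping class name -> (periods_days, F_values) arrays.
    Returns one KDE per class in the (log10 P, F) plane plus class fractions."""
    kdes, weights = {}, {}
    n_tot = sum(len(p) for p, _ in samples.values())
    for name, (periods, f_vals) in samples.items():
        pts = np.vstack([np.log10(periods), f_vals])
        kdes[name] = gaussian_kde(pts)
        weights[name] = len(periods) / n_tot
    return kdes, weights

def water_world_probability(period, f_val, kdes, weights,
                            water_classes=("bare water", "gas-rich water")):
    """P(water-rich | P, F) from the class-conditional KDEs via Bayes' rule."""
    x = np.array([[np.log10(period)], [f_val]])
    dens = {k: float(kdes[k](x)) * weights[k] for k in kdes}
    total = sum(dens.values())
    return sum(dens[k] for k in water_classes) / total if total > 0 else np.nan

# Toy usage with mock model output (replace with the simulated planet tables)
rng = np.random.default_rng(0)
samples = {
    "bare rocky":     (10 ** rng.uniform(0, 1.5, 300), rng.normal(1.00, 0.05, 300)),
    "bare water":     (10 ** rng.uniform(0.8, 2, 200), rng.normal(1.26, 0.04, 200)),
    "gas-rich rocky": (10 ** rng.uniform(0.5, 2, 250), rng.normal(1.60, 0.15, 250)),
    "gas-rich water": (10 ** rng.uniform(1, 2, 250),   rng.normal(1.70, 0.15, 250)),
}
kdes, w = build_kdes(samples)
print(water_world_probability(30.0, 1.28, kdes, w))
```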
Table 2 shows that our migration models tend to produce more fraction of bare water planets around M-dwarfs than around G-dwarfs. In contrary, observations suggest more of a similar BWP occurrence for both the types of host stars. This questions the validity of the water world hypothesis. Alternatively, this discrepancy might arise because previous spectroscopic missions did not look where migration models suggest the highest concentration of water worlds. ## 4 Guidance to Future Search for Water Worlds While the predicted fraction of bare water planets is consistent with current observed exoplanets, these water worlds are not equally distributed across the parameter space. We calculate the likelihood of a detected planet having a water-rich core from our migration models by integrating the 2D PDFs from the KDEs mentioned in Section 3.3 over a rectangular grid of orbital periods and \(\mathscr{F}\) values. Figure 10 shows the corresponding map of likelihood of occurrence of a water-rich planet (either bare or gas-rich) over the \(\mathscr{F}\)-period plane. Observed planets are overplotted in the figure to show the \begin{table} \begin{tabular}{c c c c c c} \hline \hline Host & From & Migration-A & Migration-B & Migration-B & In-situ \\ star & observation & + photo-evap & + photo-evap & + impact & \\ \hline G & \(<24.2\pm 6.1\%\) & \(14.8\pm 1.8\%\) & \(30.1\pm 6.8\) & \(46.3\pm 0\%\) & 0\% \\ M & \(<23.3\pm 5.7\%\) & \(25.5\pm 5.3\%\) & \(34.9\pm 7.6\) & \(54.4\pm 0\%\) & 0\% \\ \hline \end{tabular} Note. – The uncertainties in the upper limits obtained from observation appear from the uncertainties in the mass and radius of the planets. The uncertainties in the model values appear from the random distribution of \(X_{i}\) and also from the range of values adopted for \(\eta\) in the case of photo-evaporation. The snowline locations are chosen at 2.2 AU and 0.8 AU for the G-type and M-type stars respectively. \end{table} Table 2: The maximum possible occurrence of bare water planets as a fraction of bare planets (BWP/BP) from observation and model. probabilities of the bare and gas-rich planets identified from observations being water worlds. Our study shows a strong dependence of the likelihood of water planets on the orbital periods. We show the mass-radius distributions of the simulated planets from the migration-A + photo-evaporation model around M-type host stars in Figure 11 for two different ranges of orbital periods: \(P<10\) days and \(P>10\) days. Clearly, the water worlds are more abundant beyond orbital periods of 10 days, implying that the snowline has effectively moved from \(\sim\)0.8 AU to \(\sim\)0.06 AU (10 days). This is slightly longer than the range over which most planets have been followed up by spectroscopic observations to date. We find a similar pattern around the G-type hosts stars. The period-dependence of the occurrence of water worlds is calculated by computing the 1D PDF of the occurrence of the bare and gas-rich water planets as a function of orbital period which is shown in Figure 12. As evident from the figure, the PDFs for both the bare and gas-rich water planets peak beyond orbital periods of 10 days. The total likelihood (area under the curve) of finding a bare or gas-rich water planet is also high beyond orbital periods of 10 days. The occurrence of bare water planets drops at \(\gtrsim 50\) days due to the over-abundance of gas-rich planets. 
Hence, future radial velocity (RV) follow-up efforts could prioritize longer orbital periods (10-50 days) to increase their odds of finding bare water planets. Also, since the absence of an atmosphere on such a planet could strongly indicate a water-rich core, this is the range of orbital periods where spectroscopic or phase-curve studies (Kempton et al., 2023) with JWST and future space-bound and ground-based missions would have a higher probability of identifying water worlds. Figure 10: Color-map of probability of a planet with a given orbital period and \(\mathscr{F}\) value being a water world as predicted by our migration+photo-evaporation models. The overplotted error-bars denote the observed planets from our sample lists and their colors denote the likely nature based on their \(\mathscr{F}\) values only. It shows that while the rocky and gas-rich planets can be identified with high confidence, the same for the bare water worlds (i.e., planets with intermediate \(\mathscr{F}\) values) is challenging, calling for better precision in their mass and radius measurements. Also, this map helps us identify which of gas-rich planets are likely to have water underneath the H/He layer (e.g., a hycean planet). Evidently, the planets around M-dwarfs are more likely to be water planets than those around G-dwarfs, especially at longer orbital periods. On the other hand, we find from our models that the likelihood of finding a gas-rich water planet increases as we go further away from the central star. Thus JWST and future atmospheric survey missions should look beyond the orbital periods of 10 days to verify the existence of the water-rich sub-Neptunes. Atmospheric features that can be used as traits for their water-ice-rich bulk composition are high mean molecular weight or high metallicity (Kempton et al., 2023) of the atmospheres, and high abundance of water vapor in the atmospheres, among others. Table 3 shows a list of shortlisted planets with known mass and radius for which the calculated probability of being water worlds from our migration models is \(\gtrsim\)50%. The table also shows the values of their transmission spectroscopic metric (TSM) for observation using the JWST/NIRISS instrument (Kempton et al., 2018). The uncertainties in the probabilities include the effect of the uncertainties in the model parameters (e.g., photo-evaporation efficiency) but are dominated by the uncertainties in the measured mass and radius of the planets, especially for the planets with intermediate \(\mathscr{F}\) values (likely bare water planets). While it is currently difficult to improve the precision of the planetary radii owing to the limits posed by the precision in the stellar radii, future RV studies could focus on improving the precision in their mass measurements. ### Discussions on the Water-World Candidates Table 3 shows the list of potential water-world candidates which could be followed up by JWST to find atmospheric tracers of their volatile contents. This list contains some of the targets that are already suspected to be water worlds on the basis of observations with JWST or HST. For example, K2-18 b, which is suggested to be a gas-rich water world by our models, is suspected to be a hycean (hydrogen + ocean) planet from recent observation with JWST (Madhusudhan et al., 2023). The observed spectra suggest the presence of a shallow H\({}_{2}\) atmosphere which then requires a H\({}_{2}\)O layer (most likely, in an ocean form) that can explain the bulk density (Madhusudhan et al., 2023). 
Again, Piaulet et al. (2023) claim Kepler-138 d to be a bare water world (\(\sim\)51% water by volume) from their interior modeling of the planet which is in strong agreement with our model prediction. Their calculations were further supported by the flat optical/IR transmission spectrum that they obtained from HST, making it a prime target for JWST. Conversely, our models suggest that TOI 1695 b, previously thought to be a water world based on its mass and radius measurements (Cherubim et al., 2023), is less likely to be a water world due to its relatively high density accompanied by associated uncertainty and short orbital period. However, that does not rule out the possibility of having a low H\({}_{2}\)O content as we have restricted our definition of water worlds to a lower limit of 30% of water mass-fraction. Again, TESS and HST observations of TOI-270 d (Mikal-Evans et al., 2023) tend to support a H\({}_{2}\)-atmosphere with a strong signature of absorption by H\({}_{2}\)O. Although, this is in disagreement with our models suggesting it to be a bare water planet, it could still be a gas-rich water planet which requires further follow-up with JWST. ## 5 Conclusion We combined different in-situ and migration models from the Genesis database and subjected the simulated planets to atmospheric mass-loss mechanisms including photo-evaporation and impact stripping. By comparing the model outcomes with the Kepler size distribution and to the observed sample of planets with precise masses and radii, we find that: * Both in-situ and migration models are consistent with the radius valley and the mass-radius relations of the observed planets, but the migration models predict a significant number of \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Planet & Host type & Nature & Period (days) & ww prob (\%)\({}^{\lx@paragraphsign}\) & TSM & K (m/s) \\ \hline Kepler-1705 c & G & bare & \(11.28\pm 0.0010\) & \(46.0\pm 29.2\) & 2.8 & – \\ Kepler-138 d & M & bare & \(23.09\pm 0.0006\) & \(60.1\pm 28.0\) & 22.0 & \(0.395\pm 0.09\) \\ TOI-270 d & M & bare & \(11.38\pm 4.61e-05\) & \(49.5\pm 23.0\) & 87.1 & \(2.56\pm 0.23\) \\ TOI-1468 c & M & bare & \(15.53\pm 3.4e-05\) & \(54.7\pm 29.8\) & 64.6 & \(3.48\pm 0.35\) \\ K2-314 d & G & gas-rich & \(35.75\pm 0.0050\) & \(61.8\pm 9.8\) & 18.1 & \(1.97^{+0.54}_{-0.47}\) \\ Kepler-33 e & G & gas-rich & \(31.79\pm 0.0002\) & \(61.3\pm 8.8\) & 9.5 & – \\ Kepler-33 f & G & gas-rich & \(41.03\pm 0.0002\) & \(64.0\pm 8.4\) & 9.8 & – \\ Kepler-289 c & G & gas-rich & \(66.03\pm 0.0008\) & \(66.3\pm 6.0\) & 12.7 & – \\ K2-18 b & M & gas-rich & \(32.94\pm 0.0011\) & \(79.2\pm 7.9\) & 41.8 & \(3.36\pm 0.64\) \\ \hline \end{tabular} \({}^{\dagger}\)The uncertainties in ww prob predominantly source from the uncertainties in the radius and mass of the planets. Note—Probabilities of being water worlds (ww prob) are calculated with the help of our migration-A + photo-evapoartion models. The transmission spectroscopic metric (TSM) values are calculated by following Kempton et al. (2018). (This table is available in its entirety in machine-readable form.) \end{table} Table 3: List of planets with high mean probability (ww prob) of being a water world (\(>40\%\) for bare and \(>60\%\) for gas-rich) Figure 11: Mass-radius distributions of the simulated planets from our migration + photo-evaporation model around G-type host stars showcasing that the water-rich planets are abundant beyond orbital period of 10 days. The water-rich planets are enlarged for clarity. 
water-rich planets within the orbital periods of 100 days while the in-situ models dictates that all of them are rocky. * Impact stripping alone can explain the size bimodality only with the migrating planets from the Genesis database, for a certain distribution of core mass and position. In such a case, impact stripping would result in a very high number of bare water planets which would be an indicator of this process. * Migration + photo-evaporation models, on the other hand, predict an intermediate fraction of bare planets to be water-rich (\(\sim 10\)-30 % around G-dwarfs and \(\sim 20\)-35 % around M-dwarfs) that are consistent with maximum possible fraction of water worlds from current observed sample of planets with precise mass and radius measurements (\(24.2\pm 6.1\) % around G-dwarfs and \(23.3\pm 5.7\) % around M-dwarfs). * However most bare water worlds are predicted from the migration models at orbital periods of 10-50 days where few RV follow-up efforts are concentrated. Similarly, the fraction of water worlds among the sub-Neptunes also increases significantly outside of \(\sim 10\) days. We propose that follow-up radial velocity and spectroscopic surveys target planets at longer orbital periods to test the water world hypothesis, and provide a list of probable targets with high TSM and RV semi-amplitudes for follow-up. The code, Genesis Population Synthesis (_GPS_), that can be used to develop population synthesis models based on the Genesis database of formation models and to reproduce the results of this paper can be found at: [https://github.com/arcunique/GPS](https://github.com/arcunique/GPS). A.C. acknowledges support from ANID - Millennium Science Initiative - ICN12_009 - Data Observatory Foundation. G.D.M. acknowledges support from FONDECYT project 11221206, from ANID Figure 12: Fractional occurrence of the water worlds as functions of the orbital period predicted by our migration + photo-evaporation model for the G-type host star. While the bare and gas-rich water planets are likely to be concentrated beyond the orbital periods of 10 days, it is more likely to find a water planet around an M-dwarf than around a G-dwarf. -- Millennium Science Initiative -- ICN12_009, and the ANID BASAL project FB210003. The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate and project "Alien Earths" funded under Agreement No. 80NSSC21K0593. This work also benefited from the 2023 Exoplanet Summer Program in the Other Worlds Laboratory (OWL) at the University of California, Santa Cruz, a program funded by the Heising-Simons Foundation and NASA.
2307.11558
Advancing Visual Grounding with Scene Knowledge: Benchmark and Method
Visual grounding (VG) aims to establish fine-grained alignment between vision and language. Ideally, it can be a testbed for vision-and-language models to evaluate their understanding of the images and texts and their reasoning abilities over their joint space. However, most existing VG datasets are constructed using simple description texts, which do not require sufficient reasoning over the images and texts. This has been demonstrated in a recent study~\cite{luo2022goes}, where a simple LSTM-based text encoder without pretraining can achieve state-of-the-art performance on mainstream VG datasets. Therefore, in this paper, we propose a novel benchmark of \underline{S}cene \underline{K}nowledge-guided \underline{V}isual \underline{G}rounding (SK-VG), where the image content and referring expressions are not sufficient to ground the target objects, forcing the models to have a reasoning ability on the long-form scene knowledge. To perform this task, we propose two approaches to accept the triple-type input, where the former embeds knowledge into the image features before the image-query interaction; the latter leverages linguistic structure to assist in computing the image-text matching. We conduct extensive experiments to analyze the above methods and show that the proposed approaches achieve promising results but still leave room for improvement, including performance and interpretability. The dataset and code are available at \url{https://github.com/zhjohnchan/SK-VG}.
Zhihong Chen, Ruifei Zhang, Yibing Song, Xiang Wan, Guanbin Li
2023-07-21T13:06:02Z
http://arxiv.org/abs/2307.11558v1
# Advancing Visual Grounding with Scene Knowledge: Benchmark and Method ###### Abstract Visual grounding (VG) aims to establish fine-grained alignment between vision and language. Ideally, it can be a testbed for vision-and-language models to evaluate their understanding of the images and texts and their reasoning abilities over their joint space. However, most existing VG datasets are constructed using simple description texts, which do not require sufficient reasoning over the images and texts. This has been demonstrated in a recent study [27], where a simple LSTM-based text encoder without pretraining can achieve state-of-the-art performance on mainstream VG datasets. Therefore, in this paper, we propose a novel benchmark of Scene Knowledge-guided Visual Grounding (SK-VG), where the image content and referring expressions are not sufficient to ground the target objects, forcing the models to have a reasoning ability on the long-form scene knowledge. To perform this task, we propose two approaches to accept the triple-type input, where the former embeds knowledge into the image features before the image-query interaction; the latter leverages linguistic structure to assist in computing the image-text matching. We conduct extensive experiments to analyze the above methods and show that the proposed approaches achieve promising results but still leave room for improvement, including performance and interpretability. The dataset and code are available at [https://github.com/zhjohnchan/SK-VG](https://github.com/zhjohnchan/SK-VG). ## 1 Introduction Visual grounding (VG), aiming to locate an object referred to by a description phrase/text in an image, has emerged as a prominent attractive research direction. It can be applied to various tasks (e.g., visual question answering [4, 13, 38, 51] and vision-and-language navigation [1, 11, 35]) and also be treated as a proxy to evaluate machines for open-ended scene recognition. Typically, VG requires models to reason over vision and language and build connections through single-modal understanding and cross-modal matching. Yet, current VG benchmarks (e.g., RefCOCO [47], RefCOCO+ [47], RefCOCOg [29], ReferItGame [17], and CLEVR-Ref+ [23]) can not serve as a good test bed to evaluate the reasoning ability since they only focus on simple vision-language alignment. In addition to the simple nature of constructed referring expressions, this can be reflected in the recent state-of-the-art study [27], where they showed that _VG models are less affected by language modeling through extensive empirical analyses_. In this paper, we believe that the intrinsic difficulty of VG lies in the difference between perceptual representations of images and cognitive representations of texts. Specifically, visual features are obtained through perceptual learning, which Figure 1: An example from the proposed SK-VG dataset for scene knowledge-guided visual grounding. The task requires a model to reason over the (image, scene knowledge, query) triple to locate the target object referred to by the query. only maps visual appearances in images to semantic concepts. However, open-ended queries might require VG models to understand the whole scene knowledge before performing reasoning to locate the target object. As shown in Figure 1, the perceptual features can encode the information about "_a wine glass_", but it would struggle to locate "_Jake's wine glass_" without the scene knowledge about "_who is Jake?_". 
This is a challenging task owing to two facts: (i) From the dataset perspective, there are no relevant benchmarks for the VG researchers to evaluate their models; (ii) From the model/algorithm perspective, it is not easy to design models to perform reasoning among images, scene knowledge, and open-ended querying texts. Therefore, we propose to break this limitation of current VG research and construct a new benchmark requiring VG to perform reasoning over Scene Knowledge (i.e., text-based stories). The benchmark named SK-VG contains \(\sim\)40,000 referring expressions and 8,000 scene stories from 4,000 images, where each image contains 2 scene stories with 5 referring expressions for each story. Moreover, to evaluate the difficulty levels of queries, we curate the test set by splitting the samples into easy/medium/hard categories to provide a detailed evaluation of the vision-language models. Under this new setting, we develop a one-stage approach (i.e., Knowledge-embedded Vision-Language Interaction (KeViLI)) and a two-stage approach (i.e., Linguistic-enhanced Vision-Language Matching (LeViLM)). In KeViLI, the scene knowledge is firstly embedded into the image features, and then the interaction between the image and the query is performed; In LeViLM, the image features and the text features are first extracted, and then the matching between the (image) regions and the (text) entities are computed, assisted by the structured linguistic information. Through extensive experiments, we show that the proposed approaches can achieve the best performance but still leave room for improvement, especially in the hard split. It challenges the models from three perspectives: First, it is an open-ended grounding task; Second, the scene stories are long narratives consisting of multiple sentences; Third, it might require the multi-hop reasoning ability of the models. In summary, the contributions of this paper are three-fold: * We introduce a challenging task that requires VG models to reason over (image, scene knowledge, query) triples and build a new dataset named SK-VG on top of real images through manual annotations. * We propose two approaches to enhance the reasoning in SK-VG, i.e., one one-stage approach KeViLI and one two-stage approach LeViLM. * Extensive experiments demonstrate the effectiveness of the proposed approaches. Further analyses and discussions could be a good starting point for future study in the vision-and-language field. ## 2 Background ### Taxonomy of Visual Grounding Datasets In the past few years, a variety of datasets have been proposed for visual grounding. We propose a taxonomy of existing (generalized) VG datasets along with the proposed dataset based on types of queries, as shown in Figure 2. The datasets of the first type use fixed categories as queries.1 Grounding categories in images are a fundamental task in computer vision and has attracted much attention. One of the most representative examples is the MS-COCO dataset [20], which contains 80 categories. Besides, PASCAL VOC 2007 [12], Visual Genome [18], and Object365 [34] are also popular datasets of this type. Footnote 1: Generally, it is called object detection. We generalize it to visual grounding to summarize and classify existing grounding datasets better. The Flickr30K Entities dataset2[30] belongs to the second type, where the queries are short phrases. 
Similar to the Figure 2: Illustrations of four categories of grounding tasks, including categories, phrases, linguistic expressions, and linguistic expression+scene knowledge. The height of the input green and blue rectangles denotes its relative information. first type, an image might contain multiple objects referred to by a phrase following the one-to-many mappings. The most distinct characteristic of this type from the first type is that it is an open-vocabulary grounding problem instead of using fixed categories. Most recently, researchers constructed relevant datasets of this type, i.e., PhraseCut [39] and LVIS [14], with a larger scale. The third type aims at localizing a specific object in the image based on an expression in the form of natural language. In the narrow sense, the term _visual grounding_ refers to this type of dataset in previous studies. Various benchmark datasets (e.g., RefCOCO [47], RefCOCO+ [47], RefCOCOg [29], and CLEVR-Ref+ [23]) have been constructed to test the ability to refer expression comprehension of existing vision-language models. In general, expressions in these datasets are written according to the visual appearance and spatial location of an object, where the visual appearance includes visual categories, color, and other visual attributes, and the spatial location describes the absolute or relative location. Different from the aforementioned two types, an expression in this type of dataset points to a unique object in the image following the one-to-one mapping. Our proposed SK-VG is the first dataset of the fourth type, where for each image, we provide human-written scene knowledge to describe its content. By doing so, the VG models need to have a good understanding of the scene stories and then locate the queried object in the image according to both querying expressions and scene stories. Although there exists a dataset [36] introducing knowledge to the visual grounding model, it only focuses on the commonsense knowledge, which interprets the concept in the referring expressions, e.g., the interpretation of the target object 'banana'. There are also some datasets on grounding complex/compositional visual description, e.g., the human-centric HumanCog dataset [45] and the Cops-Ref dataset [5]. HumanCog requires the model to understand human-centric commonsense (e.g., the mental aspect), and Cops-Ref proposed a difficult task to require a model to identify an object described by a compositional referring expression from a curated set of images. We can still classify them into the first three categories since the knowledge is more about referring expressions, while the knowledge in our dataset is a comprehensive description of the scene. ### Visual Grounding Models Existing methods can be categorized into two classes: (i) two-stage methods [2, 21, 37, 46, 37] and (ii) one-stage methods [15, 44, 42, 28, 44, 50].3 The former generates region proposals first and then exploits the language expression to select the best-matching region; The latter directly predicts the bounding boxes through vision-and-language interaction to avoid the computation-intensive object proposal generation and region feature extraction in the two-stage paradigm. Among these methods, some work [40, 41, 37, 10, 6] perform explicit reasoning by modeling the attributes of objects and the relations between objects to improve interpretability. 
However, limited by the simplicity of existing datasets, they can not take full advantage of their algorithms and do not model complicated semantic relations in images and texts. Besides, pretraining-based methods [43, 49, 16, 19] have been applied to VG to improve the open-vocabulary grounding ability. Footnote 3: Grounding categories is a very hot topic, where there are many research works [31, 22, 24, 32]. Yet we mainly discuss existing studies of grounding phrases/expressions, which are more related to this work. ## 3 Dataset Construction In this section, we present the SK-VG dataset. Compared to existing VG tasks, the key difference is that each image is paired with scene knowledge to describe its content. We detail the image collection, the annotation process, the dataset statistics, and splits in the following subsections. ### Image Collection To facilitate and ease the writing of a text story, we identify three significant aspects a qualified image should fulfill: * **Humans** are the main body of a story. A satisfying image is better full of multiple characters with interactions to create complex and dramatic stories. * **Objects** are also essential and necessary to complement the details of a story. The number and category of objects also impact our SK-VG task, making it more challenging and interesting. * **Scenes** are the third factors we can not ignore since the scenes determine the background and starting point of stories. Complex and real scenes (e.g., theaters, classrooms, and parks) can inspire diverse stories. Based on the above consideration, we select the existing Visual Commonsense Reasoning dataset [48], which is designed for the visual question answering task and contains more than 110,000 movie scene images. Thanks to the movie attribute of these images, they are more likely to meet our requirements and are suitable for our task. Therefore, through careful manual filtering and selection, 4,000 images serve as the ingredients of our SK-VG dataset. ### Image Annotation To facilitate the annotation process, we develop software. For each image, the annotation mainly includes two phases: (i) Annotators are asked to create two different story descriptions based on a given image; (ii) Given each story, the annotators are asked to write five referring expressions related to the given image and story and annotate the corresponding object bounding boxes as the ground truth. The required rules for each step are detailed as follows: (i) Knowledge Annotation serves as a foundation stone of our annotation, determining the scope and quality of query sentences. A satisfying story should be related but beyond the image content. Specifically, the story should cover the person who occurred in one image with accurate visual descriptions, thus providing significant clues and evidence to match the image object and knowledge entity. Besides, the story is required to contain more context beyond the image, such as background, character relationship, mental state, and emotion, so as to promote the design of more challenging and flexible query expression. (ii) Query Expression Annotation plays an essential role in our task and ought to obey the following criteria: * **Knowledge Relevance:** The main insight of our SK-VG task is to advance the traditional VG task by introducing extra scene knowledge descriptions. Based on this consideration and prospect, the first principle is that query sentences must be highly relevant to knowledge instead of directly visually distinguishable. 
Taking Figure 1 as an example, "_The black glasses_" is not qualified since it does not involve knowledge information. * **Uniqueness:** To give a unique bounding box of the referred object, the query should be clear and unambiguous. For instance, queries like "_The person holding a wine glass_" or "_Jake's friend_" are not satisfied in Figure 1 since they involve several objects in the image. * **Diversity:** For one thing, the referring objects should be diverse; For another, the lexical expression of the query sentence is also required to be diversified. For example, the general terms (e.g., "_person_") could be replaced by other specific alternatives (e.g., "_colleague_"). ### Dataset Statistics To further dive into the proposed SK-VG dataset, we demonstrate its characteristics from three aspects: * **Length of scene knowledge**: As shown in Figure 3(i), the word-based length of most stories ranges from 50 to 70. This puts high demands on models to capture long-range dependency to understand text content. * **Categories of referred objects**: As an open-world task, referred objects of our dataset are not limited to a fixed number of categories. Figure 3(ii) exhibits 100 referred object classes with the highest frequency. Benefiting from the diverse stories and scenes, we introduce extensive referred targets with various expressions, increasing the difficulty of recognition and localization. * **Size of referred objects**: We report the size of referred objects in Figure 3(iii), which indicates that the objects in our dataset fall into a wide range of sizes. Further, we define small, medium, and large concepts according to the area of the objects, following the boundary of \(64\times 64\) and \(128\times 128\). We can observe that large objects dominate our dataset, while small and medium instances hold a small proportion. Figure 3: Statistics of the proposed SK-VG dataset: (i) the length distribution of the knowledge description; (ii) the referred objects of high-frequency; (iii) the size distribution of referred objects. ### Dataset Splits We randomly sample 60% of images and their annotations as the training set. For the remaining (image, scene knowledge, query) triples, we sample parts of them for annotating their difficulty levels and use them as the test set while the remaining triples are used as the validation set.4 We follow the following rule to annotate the difficulty level. The core principle is that more knowledge-related but less visual-distinguishable expressions deserve a higher difficulty level: (i) Easy: The referring expression contains obvious appearance, object relationship, or other visual clues; (ii) Medium: The expression only mentions weak visual information; (iii) Hard: The answer is required to be entirely derived from the scene knowledge without visual bias. We show examples of different difficulty levels in SS4.5. Footnote 4: The images in the training set have no overlap with those in the validation and test sets. ## 4 Algorithmic Analysis ### Algorithm 1: KeViLI To perform SK-VG, we introduce a one-stage algorithm: Knowledge-embedded Vision-Language Interaction (KeViLI). Given an image \(I\) and its corresponding scene knowledge \(K\), the goal is to locate the object referred to by a querying text \(T\) by predicting the coordinates of its corresponding bounding box directly. 
In detail, given \(I\), we use an image encoder to encode \(I\) into the image patch features \(H_{I}\); given \(K\) and \(T\), we use a language encoder to encode them into the knowledge features \(H_{K}\) and the text subword features \(H_{T}\), respectively. Afterward, we embed the scene knowledge into the image features before the image-query interaction. As an intuitive illustration, in Figure 1, after embedding knowledge, the visual features of "_person_" in the image are not only about the concept "_person_" but also about the specific referent "_Jake_", which can assist in grounding "_Jake's wine glasses_". The embedding procedure is implemented with a cross-attention Transformer, which stacks self-attention, cross-attention, and feed-forward sub-layers. The attention mechanism used in the self-attention and cross-attention sub-layers is defined as \[\text{ATTN}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{Softmax}\left(\mathbf{Q}\mathbf{K}^{\top}/ \sqrt{D_{k}}\right)\cdot\mathbf{V}. \tag{1}\] In the self-attention sub-layer, the patch features interact with each other through \(H^{I}=\text{ATTN}(H^{I},H^{I},H^{I})\); in the cross-attention sub-layer, the knowledge is embedded into the image features by \(H^{I}=\text{ATTN}(H^{I},H^{K},H^{K})\).5 Footnote 5: We overload the notations here for simplicity. Subsequently, \(H_{I}\) and \(H_{T}\) are input to a Transformer with a learnable regression token [REG] to perform the image-query interaction. The output of [REG] is fed into a two-layer multilayer perceptron (MLP) to predict the box coordinates directly, without region proposals. The model is trained to minimize a combination of the smooth L1 loss and the generalized IoU (GIoU) loss [33]: \[L=L_{\text{smooth-l1}}(b,\hat{b})+L_{\text{giou}}(b,\hat{b}), \tag{2}\] where \(b\) and \(\hat{b}\) refer to the ground-truth and predicted boxes, respectively, and \(L_{\text{smooth-l1}}(\cdot)\) and \(L_{\text{giou}}(\cdot)\) are the smooth L1 loss and the GIoU loss, respectively. ### Algorithm 2: LeViLM We further introduce a two-stage algorithm: Linguistic-enhanced Vision-Language Matching (LeViLM). In LeViLM, we follow GLIP [19] to initialize the backbone model, which has been pre-trained on large-scale datasets to detect objects of open-vocabulary classes. In detail, the grounding process is disentangled into two stages, i.e., region proposal and scoring, where the former aims to find all the objects in the image and the latter aims to score the proposed regions. In the region proposal stage, given the scene knowledge \(K\) and the query \(T\), we construct a manual prompt text \(P\): "Query: \(T\). Knowledge: \(K\).". Then we use a language encoder to encode \(P\) into the prompt features \(H_{P}\) and an image encoder to encode \(I\) into the image features \(H_{I}\). Afterward, we perform the image-text fusion using a stack of \(L\) layers. In each layer, there is one self-attention layer for text encoding, one Dynamic Head layer [7] for image encoding, and two cross-attention layers for cross-modal fusion. Figure 4: Illustration of the proposed approaches: (i) the one-stage algorithm, where the knowledge is embedded into the image features before the image-query interaction; (ii) the two-stage algorithm, where the image features and text features are firstly extracted, and then the structured linguistic information is leveraged to assist in computing the region-entity similarities.
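To make the attention operation in Eq. (1) concrete, the following is a minimal PyTorch-style sketch of scaled dot-product attention and of a single knowledge-embedding block in the spirit of KeViLI; the same operation underlies the cross-modal fusion layers of LeViLM formulated below. The single-head formulation, hidden size, and toy tensor shapes are illustrative assumptions rather than the exact architectures used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attn(Q, K, V):
    """Scaled dot-product attention of Eq. (1): Softmax(Q K^T / sqrt(D_k)) V."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

class KnowledgeEmbeddingLayer(nn.Module):
    """One illustrative block: self-attention over image patches, cross-attention
    from patches to knowledge tokens, then a feed-forward sub-layer."""

    def __init__(self, dim=256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, h_img, h_know):
        # Self-attention: patch features interact with each other.
        h_img = self.norm1(h_img + attn(h_img, h_img, h_img))
        # Cross-attention: queries from image patches, keys/values from knowledge tokens.
        q, k, v = self.q_proj(h_img), self.k_proj(h_know), self.v_proj(h_know)
        h_img = self.norm2(h_img + attn(q, k, v))
        # Feed-forward sub-layer.
        return self.norm3(h_img + self.ffn(h_img))

# Toy usage: 196 image patches and 256 knowledge subwords, feature dimension 256.
h_img = torch.randn(1, 196, 256)
h_know = torch.randn(1, 256, 256)
h_img = KnowledgeEmbeddingLayer()(h_img, h_know)   # knowledge-aware patch features
```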
The text encoding and image encoding process can be formulated as \(H_{P}=\mathrm{ATTN}(H_{P},H_{P},H_{P})\) and \(H_{I}=\mathrm{DynamicHead}(H_{I})\), respectively. The cross-modal information fusion process can be formalized as \(H_{P}=\mathrm{ATTN}(H_{P},H_{I},H_{I}),H_{I}=\mathrm{ATTN}(H_{I},H_{P},H_{P})\), where the cross-attention mechanism is applied to exchange the image and text information. Subsequently, a region proposal layer is applied to \(H_{I}\) to obtain the region features. For simplicity, we denote the after-fusion image (region) features and text (subword) features as \(Z_{I}\in\mathrm{R}^{N\times d}\) and \(Z_{P}\in\mathrm{R}^{M\times d}\), where \(N\) refers to the number of proposed regions and \(M\) represents the number of subwords. In the region scoring stage, we extract structured linguistic information from the query \(T\) and the scene knowledge \(K\). Specifically, given \(T\), we perform syntactic dependency parsing to obtain its dependency tree and apply a set of rules to extract the subject of \(T\), which we denote as the head entity \(E_{h}\). Besides, we also build the connection between \(T\) and \(K\) through coreference resolution to find all mentions \(E_{m}\) in \(K\) refer to the same underlying entity \(E_{h}\). Therefore, during the training procedure, we have the bounding box annotation for \(E_{h}\) and its co-referred \(E_{m}\) since \(E_{h}\) and \(E_{m}\) share the same object. Then we can take the representations of \(E_{h}\) and \(E_{m}\) from \(Z_{P}\), denoted as \(Z_{E}\in\mathrm{R}^{(E+1)\times d}\), where \(E\) represents the number of co-referred mentions. Afterward, we can compute the alignment scores between the image regions and the entities in the prompt: \[Score=Z_{I}Z_{e}^{\top}, \tag{3}\] where \(Score\in\mathrm{R}^{N\times(E+1)}\). Finally, the model is trained to minimize the following loss: \[L=L_{xe}(Score,Target), \tag{4}\] where \(L_{xe}\) is the cross entropy loss and \(Target\in\mathbb{R}^{N\times(E+1)}\), where each element indicates if a region and an entity are matched or not. ### Implementation Details For KeViLI, the input image is resized to \(640\times 640\), and the max (token-based) length for \(T\) and \(K\) are set to 32 and 256, respectively. During training, the model is optimized with the batch size set to 64 using AdamW optimizer [26], where the initial learning rate of the vision encoder and language encoder is set to \(10^{-5}\) and the learning rate of the remaining parameters are set to \(10^{-4}\). Similar to [8], the vision encoder is initialized from the DETR model [3], and the language encoder is initialized with the BERT model [9]. The model is trained for 90 epochs with a learning rate dropped by a factor of 10 after the 60th epoch. For LeViLM, we initialize the backbone model from [19]. We train the model with the batch size set to 32. Similarly, the learning rate is set to \(10^{-5}\) for the text encoder and \(10^{-4}\) for the remaining parameters. During training, the learning rate is decayed at 67% and 89% of the total training steps. To evaluate LeViLM on the SK-VG dataset, we testify different experimental settings: * **Data**: Query-only (Q), Query-knowledge (Q+K), and Query-knowledge-linguistic-structure (Q+K+S); * **Training**: (i) Zero-shot (ZS): We directly evaluated the pre-trained model without finetuning; (ii) Linear-probing (LP): We fixed the backbone model; (iii) Fine-tuning (FT): We tuned all the parameters of the model. 
* **Evaluation6**: (i) Selecting the prediction with the highest score (H), (ii) Randomly picking the prediction whose score is over 0.5 (R), (iii) Selecting the ground-truth one if its score is larger than 0.5 (U).7 Footnote 6: Since the LeViLM model might predict multiple bounding boxes, we adapt different evaluation strategies for analysis. Footnote 7: The U strategy is adopted to analyze the reasoning error of the model. For the evaluation metric, we adopt Intersection-over-Union (IoU), which measures the overlap degree between the prediction and the ground truth. Following previous studies [8, 29, 47], we use [email protected] as the prediction accuracy. ### Experimental Results and Analyses To analyze the performance of different baselines, we consider the following questions and conduct analyses to answer them with the results reported in Table 1 and 2. _Q1: Is SK-VG a hard task for traditional VG models?_ As shown in Table 1, existing models did not achieve promising results An interesting finding is that ReSC, which uses texts to recursively refine the text-conditional visual features, achieved the best result (\(\sim\)36%) among these existing models, which matches our intuition that it can better use long-form story information. _Q2: Which one is better, KeViLI or LeViLM?_ It can be observed in Table 2 that the performance of LeViLM (ID 3-26) is consistently better than KeViLI (ID 1-2), even without any finetuning (ID 3-4). We can explain this by the reason that the task's inherent difficulty is understanding open-ended stories, queries, and their relations with the images. For KeViL, the one-stage optimization to directly output bounding boxes of such open-ended target objects could be difficult. Instead, \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & [46] & [21] & [25] & [44] & [42] & KeViLI & LeViLM \\ \hline Acc & 25.28 & 25.24 & 26.08 & 16.3 & 36.68 & 30.01 & 72.57 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparisons of our approaches with existing studies. for LeViLM, after dividing and conquering the process (i.e., region proposing and scoring), it is easier to ensure each stage works well, e.g., to guarantee its basic detection ability before complex grounding using a pre-trained VG backbone. _Q3: Are linear-probing or finetuning necessary for LeViLM?_ We can investigate the effects of linear probing and finetuning by comparing the results (ID 3-8, ID 9-14, and ID 18-23). When adapting LeViLM on this dataset, the performance follows this pattern: finetuning \(>\) linear-probing \(>\) zero-shot. The reason behind this is that finetuning can guide the model to use the scene knowledge in a better way. _Q4: Is the scene knowledge critical for accurate prediction?_ To answer this question, we need to take the different evaluation strategies into account. Specifically, in the ZS and LP setting, it can be observed that the knowledge is harmful to the model performance by comparing ID 3-5 and ID 6-8 (or comparing ID 9-11 and ID 12-17). This is due to two reasons: (i) The texts of the pretraining datasets of LeViLM are relatively short, yet the length of scene knowledge in our dataset is much longer than that; (ii) The majority of the LeViLM pretraining datasets are about perception, i.e., detecting all the objects in the images instead of reasoning over the images and texts. Therefore, it is not enough to exploit the knowledge under the zero-shot and linear-probing settings. 
On the contrary, the knowledge has a considerably positive effect when full-finetuning LeViLM on the proposed dataset, which can be explained by the fact that LeViLM learns to reason over the images, knowledge, and querying texts after adaptation. Besides, bridging the scene knowledge and the queries in an appropriate way (ID 24-25) can further promote performance. The conclusion is that knowledge is critical for finetuning but can not be exploited appropriately in the zero-shot and linear-probing settings. _Q5: What is the advantage of exploiting the knowledge?_ For this question, there are two interesting observations from the results. First, when using the knowledge, the model can achieve a higher upper-bound result (comparing ID 20, 23, and 26), which means that the model can detect more objects in the images. We can explain this phenomenon by one of the possible instances: the querying text might contain different names of persons, and the model might not know the name refers to a person, which can be inferred from the \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Text**} & \multirow{2}{*}{**Criteria**} & \multirow{2}{*}{**ID**} & \multirow{2}{*}{**Overall Acc**} & \multicolumn{3}{c}{**Diffeulty-level**} & \multicolumn{3}{c}{**Area-level**} \\ & & & & & **Acc\({}_{de}\)** & **Acc\({}_{dm}\)** & **Acc\({}_{dh}\)** & **Acc\({}_{as}\)** & **Acc\({}_{am}\)** & **Acc\({}_{al}\)** \\ \hline \multirow{2}{*}{KeViLLI} & Q & - & 1 & 28.71 & 32.53 & 25.23 & 25.70 & 0.80 & 14.44 & 34.02 \\ & Q + K & - & 2 & 30.01 & 33.75 & 26.55 & 27.14 & 1.20 & 12.85 & 35.94 \\ \hline \multirow{6}{*}{LeViLM (ZS)} & \multirow{6}{*}{Q} & H & 3 & 29.75 & 49.97 & 18.23 & 6.71 & 24.20 & 33.33 & 29.64 \\ & & R & 4 & 29.77 & 48.28 & 18.88 & 9.01 & 23.20 & 33.12 & 29.79 \\ & & U & 5 & 38.13 & 54.23 & 29.56 & 19.16 & 30.20 & 39.38 & 38.67 \\ \cline{2-10} & \multirow{6}{*}{Q + K} & H & 6 & 7.55 & 13.08 & 4.38 & 1.26 & 2.20 & 5.94 & 8.36 \\ & & R & 7 & 7.78 & 12.88 & 4.71 & 2.12 & 2.20 & 5.73 & 8.69 \\ & & U & 8 & 8.79 & 13.34 & 6.02 & 3.79 & 2.20 & 6.05 & 9.93 \\ \hline \multirow{6}{*}{LeViLM (LP)} & \multirow{6}{*}{Q} & H & 9 & 44.97 & 72.03 & 31.86 & 11.70 & 50.60 & 57.54 & **42.13** \\ & & R & 10 & 44.82 & 66.91 & 32.68 & 19.16 & 48.20 & 56.48 & **42.36** \\ & & U & 11 & 63.09 & 77.51 & 54.90 & 46.64 & 52.60 & 64.86 & 63.79 \\ \cline{2-10} & \multirow{6}{*}{Q + K} & H & 12 & 35.71 & 60.40 & 25.07 & 3.96 & 41.20 & 48.51 & **32.84** \\ & & R & 13 & 35.89 & 57.00 & **24.41** & 11.24 & **39.40** & **47.13** & **33.49** \\ & & U & 14 & 47.71 & 64.40 & 41.43 & 25.30 & 43.20 & 55.52 & 46.72 \\ \cline{2-10} & \multirow{6}{*}{Q + K + S} & H & 15 & 37.25 & 62.09 & 26.98 & 4.88 & 42.20 & 49.47 & **34.54** \\ & & R & 16 & 36.91 & 58.03 & 25.83 & 11.82 & 40.20 & 46.92 & **34.76** \\ & & U & 17 & 50.47 & 66.61 & 44.77 & 28.40 & 44.40 & 56.69 & 49.92 \\ \hline \multirow{6}{*}{LeViLM (FT)} & \multirow{6}{*}{Q} & H & 18 & 57.18 & 80.35 & 46.80 & 27.83 & 65.00 & 66.77 & **54.67** \\ & & R & 19 & 57.29 & 80.15 & 46.63 & 28.74 & 65.00 & 65.39 & **55.06** \\ \cline{1-1} & & U & 20 & 63.79 & 83.45 & 55.17 & 38.67 & 68.60 & 71.23 & 61.97 \\ \cline{1-1} \cline{2-10} & \multirow{6}{*}{Q + K} & H & 21 & 70.70 & 84.51 & 63.16 & 54.62 & 68.20 & 72.51 & **70.62** \\ \cline{1-1} & & R & 22 & 70.49 & 84.28 & 62.67 & 54.73 & 68.80 & 72.51 & 70.29 \\ \cline{1-1} & & U & 23 & 74.95 & 86.49 & 68.20 & 61.96 & 71.00 & 76.11 & 75.12 \\ \cline{1-1} \cline{2-10} & \multirow{6}{*}{Q + K + S} 
& H & 24 & 72.57 & 84.08 & 65.52 & 59.95 & **70.00** & **71.02** & **73.10** \\ \cline{1-1} & & R & 25 & 71.93 & 83.72 & **64.97** & 58.75 & **70.00** & **71.44** & **72.21** \\ \cline{1-1} & & U & 26 & 77.31 & 86.59 & 71.59 & 67.18 & 72.60 & 76.96 & 77.83 \\ \hline \hline \end{tabular} \end{table} Table 2: The performance of two proposed approaches. In the text column, Q, K, and S represent query, knowledge, and linguistic structure, respectively. In the criteria column, H, R, and U represent the criteria to pick the detected bounding boxes by adopting the boxes with the highest scores, the random boxes, and the upper-bound scores that can be achieved, respectively. For the metrics, the overall accuracy, the difficulty-level accuracy, and the area-level accuracy are shown. knowledge. Second, with the knowledge, the model is able to perform more accurate reasoning, which can be observed by comparing the reasoning errors (the results of \(U-H\)) of Q and Q+K (or Q+K+S). This is because the knowledge can alleviate the reasoning uncertainty when grounding the objects. The answer is that scene knowledge can not only assist in detecting more objects but also reduce the uncertainty of locating/reasoning the target objects. _Q6: What do the approaches still struggle to do?_ Before answering this question, we can investigate the effects of the area of objects. As shown in Table 2, it is not challenging for LeViLM to detect small objects. By observing the difficulty-level accuracy, we can obtain the message that LeViLM is not capable of performing complicated (multi-hop) reasoning over the scene knowledge and producing accurate predictions. Besides, the prediction process is black-box and can not be explainable, which can be further studied in the future. The answer is that (i) The current baselines can only achieve strong results on easy or medium tasks and are unable to perform well on the hard task; (ii) The interpretability of the baselines is poor. ### Case Study To further investigate the effects of knowledge, we perform qualitative analysis on four cases in the SK-VG dataset. Figure 5 shows the grounding results of four baselines on four referring expressions. It is observed that in the first case, all the baselines can ground the "_cane_" in the image even without the knowledge since there is only one cane presented. In the second case, the finetuned LeViLM can detect the target object even without knowledge, while it can not detect the "_Brandon's servant_" without knowledge in the third case. In the last case, all the baselines can not ground the referred object correctly, and the last three baselines all treat the "_Spider-Man_" as the "_enemy_". This shows that the baseline models can not perform accurate reasoning in some complicated cases, demonstrating the challenges. ## 5 Concluding Remarks The visual grounding field has emerged as a prominent attractive research direction, where the models are required to reason over vision and language to ground the target objects. Yet, the language part of the existing VG benchmarks is only simple description texts, which can not evaluate the reasoning capability of the models comprehensively. To take a step in this direction, we propose a new benchmark dataset called SK-VG, which requires models to reason over the (image, scene knowledge, query) triples to perform accurate reasoning. We propose two approaches to perform this new task: Knowledge-embedded Vision-Language Interaction and Linguistic-enhanced Vision-Language Matching. 
Experimental results not only confirm the validity of the proposed approaches but also show that there is still substantial room for improvement, e.g., in reasoning and interpretability. ## Acknowledgement This work was supported in part by the Chinese Key-Area Research and Development Program of Guangdong Province (2020B0101350001), in part by the Guangdong Basic and Applied Basic Research Foundation (NO. 2020B1515020048), in part by the National Natural Science Foundation of China (NO. 61976250), in part by the Shenzhen Science and Technology Program (NO. JCYJ20220530141211024, NO. JCYJ20220818103001002), in part by the Fundamental Research Funds for the Central Universities under Grant 22lgqb25, and in part by the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen. This work was also sponsored by the Tencent CCF Open Fund (NO. RBFR2022009). Figure 5: Illustration of samples from the proposed SK-VG dataset, where a scene story and its four referring expressions are shown with the grounding results from four baseline methods.
2304.10316
Search-Map-Search: A Frame Selection Paradigm for Action Recognition
Despite the success of deep learning in video understanding tasks, processing every frame in a video is computationally expensive and often unnecessary in real-time applications. Frame selection aims to extract the most informative and representative frames to help a model better understand video content. Existing frame selection methods either individually sample frames based on per-frame importance prediction, without considering interaction among frames, or adopt reinforcement learning agents to find representative frames in succession, which are costly to train and may lead to potential stability issues. To overcome the limitations of existing methods, we propose a Search-Map-Search learning paradigm which combines the advantages of heuristic search and supervised learning to select the best combination of frames from a video as one entity. By combining search with learning, the proposed method can better capture frame interactions while incurring a low inference overhead. Specifically, we first propose a hierarchical search method conducted on each training video to search for the optimal combination of frames with the lowest error on the downstream task. A feature mapping function is then learned to map the frames of a video to the representation of its target optimal frame combination. During inference, another search is performed on an unseen video to select a combination of frames whose feature representation is close to the projected feature representation. Extensive experiments based on several action recognition benchmarks demonstrate that our frame selection method effectively improves performance of action recognition models, and significantly outperforms a number of competitive baselines.
Mingjun Zhao, Yakun Yu, Xiaoli Wang, Lei Yang, Di Niu
2023-04-20T13:49:53Z
http://arxiv.org/abs/2304.10316v1
# Search-Map-Search: A Frame Selection Paradigm for Action Recognition ###### Abstract Despite the success of deep learning in video understanding tasks, processing every frame in a video is computationally expensive and often unnecessary in real-time applications. Frame selection aims to extract the most informative and representative frames to help a model better understand video content. Existing frame selection methods either individually sample frames based on per-frame importance prediction, without considering interaction among frames, or adopt reinforcement learning agents to find representative frames in succession, which are costly to train and may lead to potential stability issues. To overcome the limitations of existing methods, we propose a Search-Map-Search learning paradigm which combines the advantages of heuristic search and supervised learning to select the best combination of frames from a video as one entity. By combining search with learning, the proposed method can better capture frame interactions while incurring a low inference overhead. Specifically, we first propose a hierarchical search method conducted on each training video to search for the optimal combination of frames with the lowest error on the downstream task. A feature mapping function is then learned to map the frames of a video to the representation of its target optimal frame combination. During inference, another search is performed on an unseen video to select a combination of frames whose feature representation is close to the projected feature representation. Extensive experiments based on several action recognition benchmarks demonstrate that our frame selection method effectively improves performance of action recognition models, and significantly outperforms a number of competitive baselines. ## 1 Introduction Videos have proliferated online in recent years with the popularity of social media, and have become a major form of content consumption on the Internet. The abundant video data has greatly encouraged the development of deep learning techniques for video content understanding. As one of the most important tasks, action recognition aims to identify relevant actions described in videos, and plays a vital role to other downstream tasks like video retrieval and recommendation. Due to the high computational cost of processing frames in a video, common practices of action recognition involve sampling a subset of frames or clips uniformly [31] or densely [26, 37] from a given video a serve as the input to a content understanding model. However, since frames in a video may contain redundant information and are not equally important, simple sampling methods are often incapable of capturing such knowledge and hence can lead to sub-optimal action recognition results. Prior studies attempt to actively select relevant video frames to overcome the limitation of straightforward sampling, achieving improvements to model performance. Heuristic methods are proposed to rank and select frames according to the importance score of each frame/clip calculated by per-frame prediction [15, 19]. Despite the effectiveness, these methods heavily rely on per-frame features, without considering the interaction or diversity among selected frames. Reinforcement learning (RL) has also been proposed to identify informative frames by formulating frame selection as a Markov decision process (MDP) [8, 9, 33, 35, 36]. 
However, existing RL-based methods may suffer from training stability issues and rely on a massive amount of training samples. Moreover, RL methods make an MDP assumption that frames are selected sequentially depending on observations of already selected frames, and thus cannot adjust prior selections based on new observations. In this work, we propose a new learning paradigm named Search-Map-Search (SMS), which directly searches for the best combination of frames from a video as one entity. SMS formulates the problem of frame selection from the perspective of heuristic search in a large space of video frame combinations, which is further coupled with a learnable mapping function to generalize to new videos and achieve efficient inference. Specifically, we propose a hierarchical search algorithm to efficiently find the most favorable frame combinations on training videos, which are then used as explicit supervision information to train a feature mapping function that maps the feature vectors of an input video to the feature vector of the desirable optimal frame combination. During inference on an unseen query video, the learned mapping function projects the query video onto a target feature vector for the desired frame combination, where another search process retrieves the actual frame combination that approximates the target feature vector. By combining search with learning, the proposed SMS method can better capture frame interactions while incurring a low inference cost. The effectiveness of SMS is extensively evaluated on both the long untrimmed action recognition benchmarks, i.e., ActivityNet [2] and FCVID [18], and the short trimmed UCF101 task [27]. Experimental results show that SMS can significantly improve action recognition models and precisely recognize and produce effective frame selections. Furthermore, SMS significantly outperforms a range of other existing frame selection methods for the same number of frames selected, while can still generate performance higher than existing methods using only 10% of all labeled video samples for training. ## 2 Related Work **Action Recognition**. 2D ConvNets have been widely utilized for action recognition, where per-frame features are first extracted and later aggregated with different methods such as temporal averaging [31], recurrent networks [7, 20, 37], and temporal channel shift [10, 21, 23]. Some studies leveraged both the short-term and long-term temporal relationships by two-stream architectures [12, 13]. To jointly capture the spatio-temporal information of videos, 3D ConvNets were proposed, including C3D [28], I3D [3] and X3D [11]. Transformer architecture [29] have also been applied to video understanding by modeling the spatio-temporal information with attention [1, 22]. In this paper, we follow the previous frame selection work and apply our method mainly on the temporal averaging 2D ConvNets. **Frame Selection.** The problem of selecting important frames within a video has been investigated in order to improve the performance and reduce the computational cost. Many researchers focused on selecting frames based on the per-frame heuristic score. SCSampler [19] proposed to select frames based on the predicted scores of a lightweight video model as the usefulness of frames. SMART [15] incorporated an attention module that takes randomly selected frame pairs as input to model the relationship between frames. 
However, these methods select frames individually regardless of interactions between selected frames, which may lead to redundant selections. Reinforcement learning (RL) approaches are widely adopted in frame selection to find the effective frames in a trail-and-error setting. FastForward [9] and AdaFrame [35] adopted a single RL agent to generate a decision on the next frame, and updated the network with policy gradient. MARL [33] formulated the frame sampling process as multiple parallel Markov Decision Processes, and adopted multiple RL agents each responsible for determining a frame.Although the RL-based approaches are effective, the training stability issue and the requirement of huge amount of training samples with high computational overhead remain a problem. Recent studies combined frame selection with other techniques to improve the model efficiency. LiteEval [34] adopted a two-level feature extraction procedure, where fine expensive features were extracted for important frames, and coarse frames were used for the remaining frames. ListenToLook [14] proposed to use audio information as an efficient video preview for frame selection. AR-Net [24] aimed at selecting the optimal resolution for each frame that is needed to correctly recognize the actions, and learns a differentiable policy using Gumbel Softmax trick [17]. Our work focuses on the classic task of selecting a subset of frames based on visual information. Different from existing methods, our work incorporates a new "Search-Map-Search" paradigm that leverages efficient search and supervised feature mapping to explicitly find the best frame combinations, and achieves excellent performance outperforming other frame selection methods. ## 3 Methodology ### Overall Architecture Figure 1 gives an overview of our proposed framework, which consists of three stages: a search stage, a feature mapping stage, and another search stage. The training process of our method involves the first two stages, while the inference process involves the last two. Specifically, the goal of the first search stage is to find the best frame combinations in training videos with the lowest model losses, which serve as the supervisory target information for the feature mapping stage. We design an efficient hierarchical algorithm coupled with Guided Local Search [30] to identify the effective frame combinations at a low search cost. In the second stage, a feature extractor is employed to extract input frame features from the training video frames, and the feature of the best combination from search results. Then, a feature mapping function is trained via supervised learning by taking the input frame features as input, and transforming it to the target feature of the best combination. In the third stage, we incorporate another search process to infer the effective frame combination whose feature is closest to the predicted feature from the well-trained feature mapping function. 
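Both the first and the third stage rely on a local search over frame combinations. As a simplified, runnable illustration of this idea, the sketch below performs plain hill-climbing over index combinations for a generic cost function; the actual method instead uses Guided Local Search with problem-specific penalties (detailed later and in the appendix) together with a hierarchical clip-then-frame decomposition, both of which are omitted here. The toy cost function at the end is purely illustrative.

```python
import random

def local_search(num_frames, n, cost_fn, max_iters=200, seed=0):
    """Hill-climbing over length-n frame-index combinations (repetition allowed).

    cost_fn maps a tuple of frame indices to a scalar cost, e.g. the recognition
    loss of the selected frames (Stage 1) or a feature distance (Stage 3).
    """
    rng = random.Random(seed)
    current = tuple(rng.randrange(num_frames) for _ in range(n))
    best_cost = cost_fn(current)
    for _ in range(max_iters):
        # Propose a neighbor: change one position to another frame index.
        pos = rng.randrange(n)
        cand = list(current)
        cand[pos] = rng.randrange(num_frames)
        cand = tuple(cand)
        c = cost_fn(cand)
        if c < best_cost:                      # accept only improving moves
            current, best_cost = cand, c
    return current, best_cost

# Toy cost: prefer combinations whose indices are spread across the video.
toy_cost = lambda combo: -len(set(combo))
combo, cost = local_search(num_frames=100, n=8, cost_fn=toy_cost)
```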
### Stage 1: Search for Best Frame Combinations Given an action recognition task with a pre-trained model \(M\) and a training dataset \(D^{tr}=\{X,y\}^{|D^{tr}|}\), where \(X=\{x_{i}\}_{i=1}^{m}\) represents a video sample made up of a collection of \(m\) frames, and \(y\) is the action label for the video, our goal of stage 1 is to find the best frame combination \(\tilde{X}^{*}\) with \(n\) frames for each training video \(X\) that minimizes the model loss: \[\begin{split}\tilde{X}^{*}&=\operatorname*{arg\, min}_{\tilde{X}}\mathcal{L}\big{(}M(\tilde{X}),y\big{)},\\ \text{where}&\tilde{X}=\{x_{k}|x_{k}\in X\}^{n}, \end{split} \tag{1}\] where \(\mathcal{L}\) is the loss function of the action recognition task. Note that repetitive selection of the same frame is allowed in our setting, as we believe that repeated important frames are better than meaningless frames. In order to efficiently find the best frame combinations, we have designed a hierarchical search algorithm which exploits the high similarities of adjacent frames by performing search hierarchically on coarse-grained clips first, and then on the fine-grained frames. Besides, we incorporate Guided Local Search [30] in our algorithm to exploit per-frame losses as prior information for a good starting search point, which further reduces search costs and empirically outperforms other strong search algorithms such as Genetic Algorithm [32]. Figure 1: An overview of the proposed “Search-Map-Search” method. It contains three stages, where in the first stage, an efficient hierarchical search algorithm is used to derive the best frame combinations with lowest losses, which are utilized as the supervised information to train a feature mapping function in the second stage. In the third stage, for a query video, we incorporate another search process to infer the frame combination whose features are closest to the combination feature predicted with the trained feature mapping function. The overall workflow of our hierarchical search is summarized in Algorithm 1. To begin with, the video is first split into coarse-grained clips each consisting of a collection of non-overlapped frames. Then, we calculate the model loss for each clip by representing it with the averaged feature vector of all frame inside it, and utilize the information to create an initial solution composed of the clip with the lowest model loss repeated \(n\) times. On top of the initial searching point, we adapt Guided Local Search to find the best clip combination by defining a problem-specific penalty to escape from local optimum points. The details of Guided Local Search will be introduced in Appendix. After searching on coarse-grained clips, we again incorporate Guided Local Search to find the best fine-grained frame combinations by replacing each derived clip with a frame inside it. Via hierarchical search, we have greatly reduced the search space and lowered the search cost significantly, while obtaining satisfying solutions. 
``` Input: Video \(X\), Clip Length \(K\), Combination Length \(n\) Output: Frame Combination \(\tilde{X}^{*}\) /* clip search phase */ 1 Split the video into clips each consisting of \(K\) frames 2 Prepare an initial solution \(\tilde{C}_{0}\) containing the clip with the lowest loss repeated \(n\) times 3 Perform Guided Local Search to improve \(\tilde{C}_{0}\) and get \(\tilde{C}^{*}=\{C_{k}\}_{k=1}^{n}\) /* frame search phase */ 4 Define search space for each position in combination \(S^{F}=\{S_{k}|S_{k}=\{x_{j}|x_{j}\in C_{k}\}\}_{k=1}^{n}\) on top of \(\tilde{C}^{*}\) 5 Randomly initialize solution \(\tilde{X}_{0}\) from \(S^{F}\) 6 Perform Guided Local Search to improve \(\tilde{X}_{0}\) and get \(\tilde{X}^{*}=\{x_{k}\}_{k=1}^{n}\) ``` **Algorithm 1**Hierarchical Search ### Stage 2: Feature Mapping Function The goal of the second stage is to identify the best frame combination produced in stage 1, given the input video frames by learning a feature mapping function \(\mathcal{F}\). Specifically, the feature mapping function \(\mathcal{F}\) takes in the features of input frames \(H_{0}\) generated by a pre-trained feature extractor \(\theta\), and outputs a predicted feature \(\hat{h}\in\mathcal{R}^{d}\) representing a frame combination where \(d\) denotes the feature dimension: \[\hat{h} =\mathcal{F}(H_{0}) \tag{2}\] \[\textit{where}\quad H_{0} =\{h_{i}|h_{i}=\theta(x_{i})\}_{i=1}^{m},\] where \(\theta\) is the feature extractor, and \(h_{i}\in\mathcal{R}^{d}\) is the extracted feature vector for frame \(x_{i}\). For the network structure of mapping function \(\mathcal{F}\), we choose to incorporate transformer layers [29] to construct the spatio-temporal representations of the input frame features, and an aggregation function to aggregate the representations of variable lengths into a predicted feature vector \(\hat{h}\): \[H_{l} =\textit{transformer}(H_{l-1}) \tag{3}\] \[\hat{h} =\textit{aggr}(H_{l}),\] where \(l\) is the number of transformer layers in the mapping function. The objective of the mapping function is to minimize the distance between the predicted feature \(\hat{h}\) and the aggregated feature vector of the searched frame combination \(h^{*}\): \[\min \quad\textit{dist}(\hat{h},h^{*}) \tag{4}\] \[\textit{where}\quad h^{*}= \textit{aggr}(\{h_{k}|h_{k}=\theta(x_{k}),x_{k}\in\tilde{X}^{*}\}).\] In our implementation, we incorporate cosine distance and mean-pooling as the distance function and aggregation function respectively, while other function choices can be further explored. ### Stage 3: Search to Infer Frame Combinations After the mapping function is learned, it can accurately predict the features of the best frame combinations for unseen videos without relying on the ground truth labels. The goal of this stage is to incorporate another search process to infer the frame combinations from the predicted features. Formally, the objective of the search is to find a frame combination \(\hat{X}\) whose aggregated feature \(h^{\prime}\) is closest to the given predicted feature \(\hat{h}\): \[\hat{X} =\operatorname*{arg\,min}_{\hat{X}}(\textit{dist}(h^{\prime},\hat {h})) \tag{5}\] \[\textit{where}\quad h^{\prime} =\textit{aggr}(\{h_{k}|h_{k}=\theta(x_{k}),x_{k}\in\tilde{X}\}).\] As the evaluation in the search only involves the calculation of the cosine distance between vectors, which requires little computation, we directly apply Guided Local Search on the fine-grained frame level without the hierarchical setting applied in stage 1. 
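As a concrete illustration of Stages 2 and 3, the sketch below implements a minimal version of the feature mapping function (a transformer encoder over per-frame features followed by mean-pooling, trained with the cosine-distance objective of Eqs. (2)-(4)) and a simple greedy approximation of the Stage-3 search in Eq. (5). The module sizes, the greedy selection, and the toy tensors are illustrative simplifications under the stated assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMapper(nn.Module):
    """Maps per-frame features H_0 to a predicted combination feature h_hat."""

    def __init__(self, dim=2048, layers=2, heads=8):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, frame_feats):               # (B, m, dim)
        h = self.encoder(frame_feats)             # spatio-temporal interaction
        return h.mean(dim=1)                      # mean-pooling aggregation -> (B, dim)

def mapping_loss(h_hat, target_feats):
    """Cosine distance between the predicted feature and the searched combination feature."""
    h_star = target_feats.mean(dim=1)             # aggregate searched frames (B, n, dim)
    return (1.0 - F.cosine_similarity(h_hat, h_star, dim=-1)).mean()

def greedy_select(frame_feats, h_hat, n):
    """Greedy stand-in for Eq. (5): repeatedly pick the frame (repetition allowed)
    whose running mean feature stays closest to the predicted feature h_hat."""
    chosen, running = [], None
    for _ in range(n):
        best_i, best_d = None, float("inf")
        for i, f in enumerate(frame_feats):
            cand = f if running is None else (running * len(chosen) + f) / (len(chosen) + 1)
            d = 1.0 - F.cosine_similarity(cand, h_hat, dim=-1).item()
            if d < best_d:
                best_i, best_d = i, d
        chosen.append(best_i)
        f = frame_feats[best_i]
        running = f if running is None else (running * (len(chosen) - 1) + f) / len(chosen)
    return chosen

# Toy usage: 25 candidate frames with 2048-d features, select n = 8 frames.
feats = torch.randn(1, 25, 2048)
mapper = FeatureMapper()
h_hat = mapper(feats)                              # (1, 2048)
indices = greedy_select([feats[0, i] for i in range(25)], h_hat[0], n=8)
```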
## 4 Experiments In this section, we conduct extensive experiments aiming at investigating the following research questions: * **RQ1:** Can the proposed SMS improve model performance over the base frame sampling method? * **RQ2:** How does SMS perform compared to other state-of-the-art frame selection methods? * **RQ3:** What's the computation efficiency of SMS for video inference? * **RQ4:** How do the different components affect the performance of the proposed SMS? * **RQ5:** Can SMS generalize well to spatio-temporal models, e.g., transformer based video models? ### Experimental Setup #### 4.1.1 Datasets. We evaluate our SMS method on 3 action recognition benchmarks including ActivityNet V1.3 [2], FCVID [18] and UCF101 [27]. The videos in ActivityNet and FCVID are untrimmed with average video lengths of \(117\) and \(167\) seconds respectively, while UCF101 dataset contains trimmed short videos with an average length of \(7.21\) seconds. Table 1 summarizes the detailed information of the experimental datasets. #### 4.1.2 Baselines. We compare the proposed SMS with the base selection method and the following state-of-the-art frame selection methods: * **Base** is a sparse sampling method proposed in TSN [31], where videos are divided into segments of equal length, and frames are randomly sampled within each segments. * **AdaFrame**[35] incorporates reinforcement learning to adaptively select informative frames with a memory-augmented LSTM. At testing time, AdaFrame selects different number of frames for each video observed. * **MARL**[33] adopts multiple RL agents each responsible for adjusting the position of a selected frame. * **SCSampler**[19] proposes to select frames based on the prediction scores produced by a lightweight video model. * **SMART**[15] combines the single-frame predictive score with pair-wise interaction score to make decision on the frame selections. * **LiteEval**[34] selects important frames to extract fine features and adopts coarse features for the remaining frames. * **ListenToLook**[14] proposes to use audio information as video preview for frame selection. For a fair comparison, we follow AR-Net [24] and include the variant with only the visual modality. * **AR-Net**[24] aims to select the optimal resolutions for frames that are needed to correctly recognize the actions. #### 4.1.3 Evaluation metrics. Following previous studies, we evaluate the performance of models using mean Average Precision (mAR), which is a commonly adopted metric in action recognition tasks, calculated as the mean value of the average precision over all action classes. #### 4.1.4 Implementation details. In the first stage of hierarchical searching, the clip length \(K\) is set to \(30\). For the feature mapping network, we adopt a two-layer transformer network with a hidden dimension of \(2,048\). The feature extraction network used in our implementation is a ResNet-50 network [16] pre-trained on the Kinetics dataset [3]. For the data pre-processing for action recognition tasks, we decode the video at \(1\) fps for long videos in ActivityNet and FCVID and \(55\) fps for short videos in UCF101 to retrieve the rgb frames, which are augmented during training by resizing the short side to \(256\), random cropped and resized to \(224^{2}\), after which a random flip with a probability of \(0.5\) is applied. For inference, we resize all frames to \(256^{2}\) and perform three-crop. 
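As an illustration of the training-time augmentation just described (short side resized to 256, random crop/resize to 224, horizontal flip with probability 0.5) and of the inference-time resize, the following is a minimal torchvision-style sketch; it is a schematic approximation rather than the exact MMAction2 pipeline used in the experiments, and the three-crop step is assumed to be applied separately on the resized frames.

```python
from torchvision import transforms

# Training-time augmentation for sampled frames (schematic approximation).
train_transform = transforms.Compose([
    transforms.Resize(256),                    # resize the short side to 256
    transforms.RandomResizedCrop(224),         # random crop, then resize to 224 x 224
    transforms.RandomHorizontalFlip(p=0.5),    # random flip with probability 0.5
    transforms.ToTensor(),
])

# Inference-time preprocessing: resize frames to 256 x 256; three-crop evaluation
# is then performed on top of the resized frames.
test_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
```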
For the training of action recognition models, we choose ResNet-50 as the backbone and run \(100\) epochs using an SGD optimizer with a momentum of \(0.9\), and a step learning rate schedule which decays the learning rate by a factor of \(10\) every \(40\) epochs. For ActivityNet and FCVID, we use an initial learning rate of \(0.005\), and a batch size of \(64\). For UCF101 with shorter videos, we increase the batch size to \(128\) and adjust the initial learning rate to \(0.00256\). All models are trained using code adapted from MMAction2 [4] on eight Nvidia-A100 GPUs. And we report the average performance and standard deviation of three runs. ### Effectiveness Analysis (RQ1) To validate the effectiveness of the proposed SMS, we make a comprehensive comparison with the base method on both long-video dataset ActivityNet and FCVID, and short-video dataset UCF101. As the video lengths of different datasets vary in a large range, we choose to select different \begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & Train & Val & Actions & Avg. Duration \\ \hline ActivityNet & \(10,024\) & \(4,926\) & \(200\) & \(117\)s \\ FCVID & \(45,611\) & \(45,612\) & \(239\) & \(167\)s \\ UCF101 & \(9,537\) & \(3,783\) & \(101\) & \(7.21\)s \\ \hline \hline \end{tabular} \end{table} Table 1: Description of evaluation datasets. number of frames for each video in these datasets, where 8, 16 and 25 frames are selected for long videos in ActivityNet and FCVID, and 3 and 8 frames are selected for short videos in UCF101. Furthermore, we conduct experiments to select frames with SMS only on test data during the inference of pre-trained base models, and incorporate SMS in both the training and inference process. The experimental results are shown in Table 2, from which we can observe that: * When applied to test data, compared to the base sampling method, SMS (infer only) significantly improves the average mAP on different number of selected frames by \(4.53\%\) and \(1.85\%\) on ActivityNet and FCVID, respectively. While SMS (infer only) significantly improves model performance, SMS (train & infer) improves the mAP by \(0.75\%\) and \(0.34\%\) mAP on ActivityNet and FCVID. This observation demonstrates that the frames selected with the SMS method are beneficial to both model training and inference. * While most other frame selection methods only focus on long untrimmed video tasks, SMS is also effective on short trimmed video tasks. Despite that in UCF101, the irrelevant parts of videos are trimmed off, which to a great degree limits the potential of frame selection, SMS still achieves steady improvement of \(1.42\%\) average mAP over the base method with the same number of selected frames. ### Performance Comparison (RQ2) In order to justify the benefit of our new learning paradigm of frame selection, we compare SMS with other classic frame selection methods on ActivityNet and FCVID. We choose ResNet as the backbone architecture of the action recognition model and report the performance with similar number of selected frames ranges from \(8\) to \(10\). As the implementation and training details are different among the frame selection methods, the performances of the base models are inconsistent. Therefore, directly comparing the reported performances may be unfair. Moreover, due to the lack of source codes and models, we are unable to compare all methods under the same settings, especially for the RL-based methods whose results are difficult to reproduce. 
To make a fairer comparison, in addition to the absolute performance, we also compare the relative improvements of the frame selection methods over the base sampling method. Also, we have implemented the most recent frame selection method, SMART [15], using the same features and implementation settings as our proposed SMS, and include its results as SMART*. The comparison results are demonstrated in Table 3, from which we can see that with the least number of selected frames and the smallest backbone model, the proposed SMS achieves the best performance of \(83.72\%\) and \(86.54\%\) mAP on ActivityNet and FCVID respectively. Moreover, even with the higher-performance base models \begin{table} \begin{tabular}{c|c c c|c c c|c c} \hline & \multicolumn{3}{c|}{ActivityNet} & \multicolumn{3}{c|}{FCVID} & \multicolumn{3}{c}{UCF101} \\ \# Frames & \(8\) & \(16\) & \(25\) & \(8\) & \(16\) & \(25\) & \(3\) & \(8\) \\ \hline Base1 & \(77.34\pm 0.06\) & \(79.41\pm 0.05\) & \(80.04\pm 0.24\) & \(83.84\pm 0.05\) & \(85.34\pm 0.03\) & \(85.65\pm 0.04\) & \(90.65\pm 0.13\) & \(90.70\pm 0.10\) \\ SMS (infer only) & \(82.76\pm 0.15\) & \(83.78\pm 0.08\) & \(83.85\pm 0.17\) & \(86.35\pm 0.03\) & \(86.94\pm 0.06\) & \(87.08\pm 0.04\) & \(91.45\pm 0.12\) & \(91.58\pm 0.09\) \\ SMS (train \& infer) & \(\mathbf{83.72\pm 0.05}\) & \(\mathbf{84.35\pm 0.08}\) & \(\mathbf{84.56\pm 0.08}\) & \(\mathbf{86.54\pm 0.06}\) & \(\mathbf{87.25\pm 0.11}\) & \(\mathbf{87.59\pm 0.01}\) & \(\mathbf{91.94\pm 0.15}\) & \(\mathbf{92.25\pm 0.10}\) \\ \hline \end{tabular} \end{table} Table 2: Performance comparison of the proposed SMS and the base method. In SMS (infer only), frames are selected with our method only during inference. In SMS (train & infer), frames are selected with SMS during both training and inference. \begin{table} \begin{tabular}{c c|c c c|c c c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{3}{c|}{ActivityNet} & \multicolumn{3}{c}{FCVID} \\ & & \# Frames & mAP & impr. & \# Frames & mAP & impr. \\ \hline SCSampler & ResNet-50 & \(10\) & \(72.9\) & \(0.4\) & \(10\) & \(81.0\) & \(0.0\) \\ AdaFrame & ResNet-101 & \(8.65\) & \(71.5\) & \(3.7\) & \(8.21\) & \(80.2\) & \(1.8\) \\ MARL & ResNet-101 & \(8\) & \(72.9\) & \(0.4\) & - & - & - \\ SMART & ResNet-101 & \(10\) & \(73.1\) & - & \(10\) & \(82.1\) & - \\ SMART* & ResNet-50 & \(8\) & \(80.67\) & \(3.33\) & \(8\) & \(83.35\) & \(-0.49\) \\ \hline SMS\_10\% & ResNet-50 & \(8\) & \(82.12\) & \(4.78\) & \(8\) & \(85.69\) & \(1.85\) \\ SMS & ResNet-50 & \(8\) & \(\mathbf{83.72}\) & \(\mathbf{6.38}\) & \(8\) & \(\mathbf{86.54}\) & \(\mathbf{2.70}\) \\ \hline \end{tabular} \end{table} Table 3: Performance comparison of the proposed SMS and other state-of-the-art frame selection methods. We show both their performance and the their reported improvements over the base sampling method, due to the inconsistent base performance among different methods. Results of baselines are retrieved from literature, except for SMART*, which is implemented using the same features and implementation settings as SMS. We have also included the results of SMS learned only on 10% training data as SMS_10%. which are harder to improve, SMS still obtains the largest improvements of \(6.48\%\) and \(2.70\%\) among all frame selection methods. In a fair comparison with the same implementation settings, our method significantly outperforms SMART* by \(3.05\%\) and \(3.19\%\) respectively on ActivityNet and FCVID. 
The reason of our success is due to our novel training paradigm of "Search-Map-Search", where an efficient hierarchical search method is incorporated to find the best frame combinations, which better models the frame interactions. Moreover, in SMS, a feature mapping function is learned to map an input video directly to the optimal frame combination, which is theoretically superior to the one-by-one frame selection adopted by existing methods. ### Efficiency Analysis (RQ3) We now analyze the efficiency of SMS and compare it with other frame selection methods in terms of action recognition performance versus inference cost (evaluated by the model size per video. For a fair comparison, we only include the results that use ResNet as the backbone model. In order to evaluate the tradeoff between performance and inference efficiency, we first uniformly sub-sample \(m\) candidate frames that constitute the search space for each video from which \(n=8\) best frames are to be selected. As \(m\) increases, features are extracted from more candidate frames, and the resulting frame selection becomes more effective, while the inference cost becomes larger. As demonstrated in Figure 2, the action recognition performance of SMS grows rapidly from \(m=8\) to \(m=25\), while further increasing \(m\) to \(50\) or \(100\) incurs only slight increase in performance but huge inference overhead. In practice, one can easily achieve a good tradeoff between performance and cost accordingly by tuning the number of candidate frames \(m\). In comparison with other approaches, SMS achieves higher action recognition performance and beats other methods under different computation resource constraints, showing the effectiveness of the proposed search-mapping-search paradigm in frame selection. ### Ablation Study (RQ4) This subsection aims to analyze the effects of different components designed in our proposed SMS. **Search algorithm.** We have conducted experiments to compare the performance and efficiency of our designed Hierarchical Guided Local Search algorithm with other powerful search algorithms such as Fast Genetic Algorithm [6] and the frame-level Guided Local Search [30]. In Figure 3, we show the best loss averaged on \(100\) randomly selected videos with different search algorithms, and their corresponding search cost measured by the number of evaluations. By adopting different clip length \(K\), our proposed hierarchical search algorithm can trade off between the search performance and the search cost. From Figure 3, we can see that using the same number of evaluations, our hierarchical search achieves better results compared to Fast Genetic Algorithm, by more effectively exploiting the prior knowledge of the per-frame loss information. The original Guided Local Search without hierarchical design is extremely costly which requires nearly \(3,000\) evaluations per video. Contrastively, our algorithm is more efficient with the hierarchical design, and achieves comparable performance with far less computation cost. **Feature mapping network.** The process of feature mapping aims to transform the input frame features to the feature of target combination. We have conducted experiments to explore the impact of different network architectures and Figure 3: The evaluation of different search algorithms on 100 videos, given by the average loss over the number of evaluations. Figure 2: Comparison of SMS with other approaches in terms of performance vs. inference cost (model size per video) evaluated on ActivityNet. 
We control the inference cost by varying \(m\), the number of candidate frames in a video from which \(n=8\) best frames are to be selected. training data sources for feature mapping. For the network architecture, we adopt transformers to sequentially modeling the frame features with their spatio-temporal relations taken into consideration. Another simple applicable design is to adopt a mean-pooling layer that aggregates all the features of frames into a single feature vector, followed by a simpler two-layer MLP network. In Table 4, row 1 and 2 compares the performance and the inference efficiency of the two designs. As we can see, using transformer model as the mapping function outperforms MLP design by \(0.5\%\) mAP due to the better representation ability, while the inference cost of both designs are small and negligible (less than 2 GFLOPs) compared to the cost of frame feature extractions (tens or hundreds GFLOPs). **Feature extractor.** We have analyzed the effect of different feature extractor settings by trying smaller model structure, e.g., MobileNet-V2 [25], and different pre-trained data sources including ImageNet [5], Kinetics [3] and ActivityNet [2]. Comparing the results of row 1, 3 and 4 in Table 4, we can see that the pre-training data of feature extractor can make a difference. The feature extractor trained on the largest Kinetics dataset achieves the best performance as it better captures the semantics of actions by training on more related samples, compared to the ones trained on smaller ActivityNet dataset and out-domain ImageNet dataset. Besides, as shown in row 5 of Table 4, using smaller models such as MobileNet-V2 for extractor can lead to performance decline. In general, the representation capability of feature extractors is valuable for frame selection to recognize the important frames and find the best frame combinations. ### Generalizability Analysis (RQ5) It is a natural question to ask if the frames selected by SMS can also be beneficial to spatio-temporal video models. To find out the answer to this question, we have conducted an experiment to apply the SMS selected frames on TimeSFormer [1], which incorporates the transformer architecture and is one of the most advanced video models. The results are shown in Table 5. The dense frame sampling strategy incorporated in the original TimeSFormer implementation randomly samples a clip containing \(8\) successive frames from videos, and achieves \(84.33\%\) mAP on ActivityNet. However, in untrimmed video dataset, dense sampling can only captures a small part of the video and may miss important information. In contrast, the base sampling strategy selects frames uniformly from videos and achieves \(90.11\%\) mAP, while SMART* achieves \(90.53\%\) mAP. By using the input frames selected by SMS, we obtain a significant performance gain of \(1.86\%\) over the base sampling strategy and \(1.44\%\) over SMART*, and achieve \(91.97\%\) mAP. This improvement demonstrates the strong generalizability of SMS across different model architectures, and that SMS is not only effective on 2D video modeling, but can also capture the spatio-temporal relationship among frames and is beneficial to 3D video model learning. 
## 5 Conclusion In this paper, we propose a new learning paradigm for frame selection, called "Search-Map-Search", which consists of a search stage to efficiently find the best frame combination with a hierarchical search algorithm, a feature mapping stage that learns to transform the input frame features directly into the feature of the searched combination, and another search stage that selects frames based on the mapped feature. Compared with existing frame selection methods, SMS is a more accurate learning paradigm that takes advantage of efficient search and supervised feature mapping to directly select the best combination of frames as one entity, which better captures the frame interactions. Experimental results show that SMS achieves significant performance gains on multiple action recognition benchmarks, and outperforms other strong baseline methods. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Feature Extractor (Training Source) & Mapping Model Arch & Performance & Inference GFLOPs \\ \hline ResNet-50 (Kinetics) & Transformer & \(83.72\) & \(1.50\) \\ ResNet-50 (Kinetics) & MLP & \(83.22\) & \(0.01\) \\ ResNet-50 (ImageNet) & Transformer & \(79.97\) & \(1.50\) \\ ResNet-50 (ActivityNet) & Transformer & \(81.06\) & \(1.50\) \\ MobileNet-V2 (Kinetics) & Transformer & \(79.53\) & \(0.93\) \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation analysis results in mAP (%) on ActivityNet using \(8\) frames with different feature extractors and feature mapping model architectures. The feature mapping inference GFLOPs per video is also provided. \begin{table} \begin{tabular}{c c|c|c} \hline \hline Backbone & \# Frames & Method & Performance \\ \hline \multirow{4}{*}{TimeSFormer} & \multirow{4}{*}{\(8\)} & Dense & \(84.33\) \\ & & Base & \(90.11\) \\ & & SMART* & \(90.53\) \\ & & SMS & \(\mathbf{91.97}\) \\ \hline \hline \end{tabular} \end{table} Table 5: The ActivityNet evaluation results in mAP (%) using different frame sampling strategies on TimeSFormer.
2306.11117
On the Rényi index of random graphs
Networks (graphs) permeate scientific fields such as biology, social science, economics, etc. Empirical studies have shown that real-world networks are often heterogeneous, that is, the degrees of nodes do not concentrate on a number. Recently, the Rényi index was tentatively used to measure network heterogeneity. However, the validity of the Rényi index in network settings is not theoretically justified. In this paper, we study this problem. We derive the limit of the Rényi index of a heterogeneous Erdős–Rényi random graph and a power-law random graph, as well as the convergence rates. Our results show that the Erdős–Rényi random graph has asymptotic Rényi index zero and the power-law random graph (highly heterogeneous) has asymptotic Rényi index one. In addition, the limit of the Rényi index increases as the graph gets more heterogeneous. These results theoretically justify that the Rényi index is a reasonable statistical measure of network heterogeneity. We also evaluate the finite-sample performance of the Rényi index by simulation.
Mingao Yuan
2023-06-19T18:37:58Z
http://arxiv.org/abs/2306.11117v1
# On the Renyi index of random graphs Mingao Yuan\({}^{1}\) \({}^{1}\)Department of Statistics, North Dakota State University, e-mail: [email protected] **Abstract:** Networks (graphs) permeate scientific fields such as biology, social science, economics, etc. Empirical studies have shown that real-world networks are often heterogeneous, that is, the degrees of nodes do not concentrate on a number. Recently, the Renyi index was tentatively used to measure network heterogeneity. However, the validity of the Renyi index in network settings is not theoretically justified. In this paper, we study this problem. We derive the limit of the Renyi index of a heterogeneous Erdos-Renyi random graph and a power-law random graph, as well as the convergence rates. Our results show that the Erdos-Renyi random graph has asymptotic Renyi index zero and the power-law random graph (highly heterogeneous) has asymptotic Renyi index one. In addition, the limit of the Renyi index increases as the graph gets more heterogeneous. These results theoretically justify that the Renyi index is a reasonable statistical measure of network heterogeneity. We also evaluate the finite-sample performance of the Renyi index by simulation. **MSC2020 subject classifications:** 60K35; 05C80. **Keywords and phrases:** Renyi index, random graph, heterogeneity, network data. ## 1 Introduction A network (graph) consists of a set of individuals (nodes) and a set of interactions (edges) between individuals. It has been widely used to model and analyze many complex systems. For example, in social science and economics, networks play a central role in the transmission of information, the trade of many goods and services, and determining how diseases spread [15, 21, 30]; in biology, a network is a way of representing the physical contacts between proteins [8]. In the past decade, network data analysis has been a primary research topic in statistics and machine learning [1, 2, 3, 18, 33, 34]. In many fields of science and engineering, one of the elemental problems is to measure the statistical heterogeneity of datasets. For instance, in statistical physics, entropy was devised to measure the randomness of systems ([22]). In economics, various inequality indices were designed to gauge the evenness of the distribution of wealth in human populations ([14]). Motivated by entropy and inequality indices, [16] recently introduced the Renyi index to measure the statistical heterogeneity of probability distributions defined on the positive half-line. The Renyi index takes values in the range \([0,1]\), and a larger value represents a higher level of heterogeneity. Its properties were systematically studied in [16, 17], and the Renyi index of several well-known distributions (such as the Pareto distribution, the Gamma distribution, the Beta distribution, etc.) is calculated in [16, 17]. Empirical studies have shown that many real-world networks are heterogeneous, that is, the degrees of individuals do not concentrate on a number [10, 28, 31]. It is important to be able to compare networks according to the heterogeneity they exhibit, and thus to have a stable summary statistic that provides insight into the structure of a network. Recently, the Renyi index was tentatively used to measure the heterogeneity of financial networks, and interesting findings were obtained [24, 25, 26, 27]. However, the validity of the Renyi index in network settings is not theoretically justified, and some of the fundamental questions are not studied in [24]. 
For instance, whether the Renyi index of a homogeneous network is actually close to zero, whether the Renyi index of heterogeneous network is indeed large, and how the Renyi index depends on network model parameters. In this paper, we shall answer the above mentioned questions and provide a theoretical justification for the Renyi index as a network heterogeneity measure. To this end, we derive the limit of the Renyi index of a heterogeneous Erdos-Renyi random graph and a power-law random graph, as well as the convergence rates. Based on our results, the Erdos-Renyi random graph (homogeneous) has asymptotic Renyi index zero, while the well-known power-law random graph (highly heterogeneous) has asymptotic Renyi index one. Moreover, the limit of the Renyi index explicitly depends on model parameters, from which it is clear that the Renyi index increases as the model gets more heterogeneous. These results theoretically justify the Renyi index is a reasonable statistical measure of network heterogeneity. In addition, we run simulations to evaluate finite-sample performance of the Renyi index. The structure of the article is as follows. In Section 2 we collect the main results. Specifically, in Section 2.1, we present the limit of the Renyi index of a heterogeneous Erdos-Renyi random graph; in Section 2.2, we present the limit of the Renyi index of a power-law random graph. Simulation studies are given in Section 3. All proofs are deferred to Section 4. **Notation**: Let \(c_{1},c_{2}\) be two positive constants. For two positive sequences \(a_{n}\), \(b_{n}\), denote \(a_{n}\asymp b_{n}\) if \(c_{1}\leq\frac{a_{n}}{b_{n}}\leq c_{2}\); denote \(a_{n}=O(b_{n})\) if \(\frac{a_{n}}{b_{n}}\leq c_{2}\); \(a_{n}=o(b_{n})\) if \(\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=0\). Let \(X_{n}\) be a sequence of random variables. We use \(X_{n}\Rightarrow F\) to denote \(X_{n}\) converges in distribution to a probability distribution \(F\). \(X_{n}=O_{P}(a_{n})\) means \(\frac{X_{n}}{a_{n}}\) is bounded in probability. \(X_{n}=o_{P}(a_{n})\) means \(\frac{X_{n}}{a_{n}}\) converges to zero in probability. \(\mathcal{N}(0,1)\) stands for the standard normal distribution. Let \(I[E]\) be the indicator function of an event \(E\). We adopt the convention \(0\log 0=0\). Let \(n\) be a positive integer. ## 2 The Renyi index of network A graph or network \(\mathcal{G}\) consists of a pair \((\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=[n]:=\{1,2,\ldots,n\}\) denotes the set of vertices and \(\mathcal{E}\) denotes the set of edges. For \(i<j\), denote \(A_{ij}=1\) if \(\{i,j\}\in\mathcal{E}\) is an edge and \(A_{ij}=0\) otherwise. Suppose \(A_{ii}=0\), that is, self loops are not allowed. Then the symmetric matrix \(A=(A_{ij})\in\{0,1\}^{\otimes n^{2}}\) is called the adjacency matrix of graph \(\mathcal{G}\). A graph is said to be random if the elements \(A_{ij}(1\leq i<j\leq n)\) are random. Given a positive constant \(\alpha\), the Renyi index of a graph ([16, 17, 24, 25]) is defined as \[\mathcal{R}_{\alpha}=\begin{cases}1-\left[\frac{1}{n}\sum_{i=1}^{n}\left( \frac{d_{i}}{d}\right)^{\alpha}\right]^{\frac{1}{1-\alpha}},&\text{if $\alpha\neq 1$;}\\ 1-\exp\left(-\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\log\frac{d_{i}}{d} \right),&\text{if $\alpha=1$,}\end{cases} \tag{1}\] where \(d_{i}\) is the degree of node \(i\), that is, \(d_{i}=\sum_{j\neq i}A_{ij}\) and \(d\) is the average of degree, that is, \(d=\frac{\sum_{i=1}^{n}d_{i}}{n}\). The Renyi index includes several popular indexes as a special case. 
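For concreteness, definition (1) is straightforward to evaluate from the degree sequence of an observed graph; the following is a minimal sketch in plain NumPy (the small Erdos-Renyi example at the end is only a sanity check, not one of the experiments reported in Section 3).

```python
import numpy as np

def renyi_index(A, alpha):
    """Renyi index (1) of a simple graph with 0/1 symmetric adjacency matrix A (zero diagonal)."""
    d = A.sum(axis=1).astype(float)        # degrees d_i
    r = d / d.mean()                       # d_i divided by the average degree
    if alpha == 1:
        # convention 0 * log 0 = 0 for isolated vertices
        terms = np.where(r > 0, r * np.log(np.where(r > 0, r, 1.0)), 0.0)
        return 1.0 - np.exp(-terms.mean())
    return 1.0 - np.mean(r ** alpha) ** (1.0 / (1.0 - alpha))

# Homogeneous Erdos-Renyi graph: all printed values are small, i.e. the index is close to 0.
rng = np.random.default_rng(0)
n, p = 2000, 0.05
U = rng.random((n, n))
A = np.triu((U < p).astype(int), k=1)
A = A + A.T
print([round(renyi_index(A, a), 3) for a in (0.5, 1, 2, 3)])
```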
When \(\alpha=1\), the Renyi index \(\mathcal{R}_{1}\) is a function of Theil's index. When \(\alpha=2\), the Renyi index \(\mathcal{R}_{2}\) is a function of Simpson's index. For \(0<\alpha\leq 1\), the Renyi index \(\mathcal{R}_{\alpha}\) is Atkinson's index. The parameter \(\alpha\) allows researchers to tune the Renyi index to be more sensitive to different populations. In practice, commonly used values are \(\alpha=1,2,3\) ([24, 25]). **Proposition 2.1**.: _For any fixed \(\alpha>0\), the Renyi index \(\mathcal{R}_{\alpha}\) is between 0 and 1._ The Renyi index takes values in \([0,1]\). It is tentatively used to measure the degree heterogeneity of graphs ([24]). We shall derive an asymptotic expression of the Renyi index for two random graphs. Note that \(\mathcal{R}_{\alpha}\) is a non-linear function of the degrees \(d_{i}\) (\(1\leq i\leq n\)), and the degrees are neither independent nor necessarily identically distributed. These facts make studying the asymptotic properties of the Renyi index a non-trivial task. ### The Renyi index of a heterogeneous Erdos-Renyi random graph In this section, we study the asymptotic Renyi index of a heterogeneous Erdos-Renyi random graph. Let \(f(x,y)\) be a symmetric function from \([0,1]^{2}\) to \([0,1]\). Define the heterogeneous Erdos-Renyi random graph \(\mathcal{G}(n,p_{n},f)\) as \[\mathbb{P}(A_{ij}=1)=p_{n}f\left(\frac{i}{n},\frac{j}{n}\right),\] where \(p_{n}\in[0,1]\) may depend on \(n\) and \(A_{ij}\) (\(1\leq i<j\leq n\)) are independent. If \(f\equiv c\) for some constant \(c\), then \(\mathcal{G}(n,p_{n},f)\) is simply the Erdos-Renyi random graph with edge appearance probability \(cp_{n}\). For non-constant \(f\), the random graph \(\mathcal{G}(n,p_{n},f)\) is a heterogeneous version of the Erdos-Renyi graph. The spectral properties of this random graph have been extensively studied in [11, 12, 13]. We point out that our proof strategy works for other graph models, such as the \(\beta\)-model in [29] and the degree-corrected model in [23], with mild modifications. #### 2.1.1 Asymptotic Renyi index when \(\alpha\neq 1\) In this subsection, we study the asymptotic Renyi index of \(\mathcal{G}(n,p_{n},f)\) with \(\alpha\neq 1\). For convenience, denote \[f_{ij}=f\left(\frac{i}{n},\frac{j}{n}\right),\hskip 28.452756ptf_{i}=\frac{1}{n}\sum_{j\neq i}^{n}f\left(\frac{i}{n},\frac{j}{n}\right),\hskip 14.226378pt\lambda_{k,l}=\frac{\sum_{i\neq j}f_{i}^{k}f_{ij}^{l}}{n^{2}}.\] Note that \(f_{ij}\), \(f_{i}\) and \(\lambda_{k,l}\) depend on \(n\). We will focus on \(f(x,y)\geq\epsilon\) for a constant \(\epsilon\in(0,1)\), as assumed in [12]. Later we will provide examples of such functions. **Theorem 2.2**.: _Let \(\alpha\neq 1\) be a fixed positive constant, \(np_{n}\to\infty\) and \(f(x,y)\geq\epsilon\) for some constant \(\epsilon\in(0,1)\). Then the Renyi index \(\mathcal{R}_{\alpha}\) of \(\mathcal{G}(n,p_{n},f)\) has the following expression_ \[\mathcal{R}_{\alpha}=1-\left[\frac{\lambda_{\alpha,0}}{\left(\lambda_{0,1}+O_{P}\left(\frac{1}{n\sqrt{p_{n}}}\right)\right)^{\alpha}}+O_{P}\left(\frac{1}{np_{n}}\right)\right]^{\frac{1}{1-\alpha}}, \tag{2}\] _and the error rates \(\frac{1}{np_{n}}\) and \(\frac{1}{n\sqrt{p_{n}}}\) cannot be improved. Asymptotically, \(\mathcal{R}_{\alpha}\) has the following concise expression:_ \[\mathcal{R}_{\alpha}=1-\left(\frac{\lambda_{\alpha,0}}{\lambda_{0,1}^{\alpha}}\right)^{\frac{1}{1-\alpha}}+o_{P}(1). 
\tag{3}\] Theorem 2.2 provides an asymptotic expression of \(\mathcal{R}_{\alpha}\) as an explicit function of \(\alpha\) and the model parameter \(f\), along with the error rates. It is interesting that \(\mathcal{R}_{\alpha}\) mainly depends on \(f\) and \(\alpha\) through the ratio \(\frac{\lambda_{\alpha,0}}{\lambda_{0,1}^{\alpha}}\). The quantities \(\lambda_{\alpha,0}\) and \(\lambda_{0,1}^{\alpha}\) may or may not converge to some limits as \(n\) goes to infinity. Later we will present two examples where \(\lambda_{\alpha,0}\) and \(\lambda_{0,1}^{\alpha}\) converge. We point out that even though empirical degree distributions are widely studied in literature, it is not immediately clear how to obtain the asymptotic expression of the Renyi index as in Theorem 2.2 from the empirical degree distributions. Specifically, suppose \(Y_{n}\) is a random variable with distribution \(F_{emp}(x)=\frac{1}{n}\sum_{i=1}^{n}I[d_{i}\leq x]\). The term \(\frac{1}{n}\sum_{i=1}^{n}d_{i}^{\alpha}\) in the Renyi index is equal to \(\mathbb{E}(Y_{n}^{\alpha}|d_{1},\ldots,d_{n})\). Suppose \(F_{emp}(x)\) converges almost surely or in probability to some distribution function \(F(x)\) and let \(Y\) follow the distribution \(F(x)\). The convergence of \(F_{emp}(x)\) to \(F(x)\) does not necessarily imply the convergence of \(\mathbb{E}(Y_{n}^{\alpha}|d_{1},\ldots,d_{n})\) to \(\mathbb{E}(Y^{\alpha})\) for arbitrary \(\alpha>0\). Generally speaking, uniform integrability conditions are required to guarantee the convergence of \(\mathbb{E}(Y_{n}^{\alpha}|d_{1},\ldots,d_{n})\) to \(\mathbb{E}(Y^{\alpha})\). Note that \(\mathbb{E}(Y_{n}^{\alpha}|d_{1},\ldots,d_{n})\) is random. It is not immediately clear what kind of uniform integrability conditions are needed. Moreover, even if we can conclude that \(\mathbb{E}(Y_{n}^{\alpha}|d_{1},\ldots,d_{n})\) converges to \(\mathbb{E}(Y^{\alpha})\) by assuming some uniform integrability conditions, it does not provide the error rates (that cannot be improved) as in Theorem 2.2. Next we provide two examples of random graphs satisfying the conditions of Theorem 2.2 and calculate the ratio explicitly. The first example is the Erdos-Renyi random graph, that is, \(f(x,y)\equiv 1\). Since each node of the Erdos-Renyi random graph has the same average degree, the Erdos-Renyi graph is homogeneous. It is clear that \(\frac{\lambda_{\alpha,0}}{\lambda_{0,1}^{\alpha}}=1+o(1)\), hence \(\mathcal{R}_{\alpha}=o_{P}(1)\). This shows the Renyi index of homogeneous network is actually close to zero. Now we provide a family of non-constant \(f(x,y)\) that is bounded away from zero. This model can attain any heterogeneity level, that is, the limit of \(\frac{\lambda_{\alpha,0}}{\lambda_{0,1}^{\alpha}}\) can take any value in \((0,1)\). Let \(f(x,y)=e^{-\kappa x}e^{-\kappa y}\) with a non-negative constant \(\kappa\). Then \(e^{-2\kappa}\leq f(x,y)\leq 1\) for \(0\leq x,y\leq 1\). Intuitively, smaller \(\kappa\) would produce less heterogeneous models. In the extreme case \(\kappa=0\), the random graph is simply the Erdos-Renyi random graph. Given a function \(f\), denote the expected degree of node \(i\) as \(\mu_{i}:=p_{n}\sum_{j\neq i}f_{ij}\). Then for \(f(x,y)=e^{-\kappa x}e^{-\kappa y}\), \(\mu_{i}\) is equal to \[np_{n}e^{-\kappa\frac{i}{n}}(1-e^{-\kappa}+o(1)).\] Note that \(\frac{\mu_{1}}{\mu_{n}}=e^{\kappa\left(1-\frac{1}{n}\right)}\). Large \(\kappa\) will enlarge the difference between the degrees of node \(1\) and node \(n\). 
Hence, the random graph with larger \(\kappa\) should be more heterogeneous. Simple calculations yield \[\lambda_{\alpha,0}=\left(\frac{1}{\kappa}-\frac{1}{\kappa e^{\kappa}}\right) ^{\alpha}\left(\frac{1}{\kappa\alpha}-\frac{1}{\kappa\alpha e^{\kappa\alpha} }\right)+o(1),\hskip 28.452756pt\lambda_{0,1}=\left(\frac{1}{\kappa}-\frac{1}{ \kappa e^{\kappa}}\right)^{2}+o(1).\] Plugging them into (3) yields \[\mathcal{R}_{\alpha}=1-\left(\frac{(e^{\kappa\alpha}-1)\kappa^{\alpha-1}}{ \alpha(e^{\kappa}-1)^{\alpha}}\right)^{\frac{1}{1-\alpha}}+o_{P}(1),\hskip 14.226378pt \alpha>0,\ \alpha\neq 1. \tag{4}\] Note that \(\lim_{\kappa\to\infty}\left(\frac{(e^{\kappa\alpha}-1)\kappa^{\alpha-1}}{ \alpha(e^{\kappa}-1)^{\alpha}}\right)^{\frac{1}{1-\alpha}}=0\) and \(\lim_{\kappa\to 0^{+}}\left(\frac{(e^{\kappa\alpha}-1)\kappa^{\alpha-1}}{ \alpha(e^{\kappa}-1)^{\alpha}}\right)^{\frac{1}{1-\alpha}}=1\) for any \(\alpha>0\) and \(\alpha\neq 1\). Asymptotically, \(\mathcal{R}_{\alpha}\) with large \(\kappa\) would be close to \(1\) and \(\mathcal{R}_{\alpha}\) with small \(\kappa\) would be close to \(0\). This justifies that the Renyi index of heterogeneous graph is actually non-zero. In addition, the limit of \(\mathcal{R}_{\alpha}\) can assume any value in \((0,1)\) by changing \(\kappa\). In this sense, this random graph can achieve any heterogeneity level with suitably selected \(\kappa\). #### 2.1.2 Asymptotic Renyi index when \(\alpha=1\) In this subsection, we study the asymptotic Renyi index of \(\mathcal{G}(n,p_{n},f)\) with \(\alpha=1\). For convenience, denote \[f_{ij}=f\left(\frac{i}{n},\frac{j}{n}\right),\ \ \mu_{i}:=p_{n}\sum_{j\neq i}f_{ij},\ \ l_{i}=\log\left(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\right),\ \ s_{k}=\sum_{i<j}(2+l_{i}+l_{j})^{k}f_{ij}(1-p_{n}f_{ij}),\] **Theorem 2.3**.: _Let \(\mathcal{G}(n,p_{n},f)\) be the random graph with \(np_{n}\to\infty\) and \(f(x,y)\geq\epsilon\) for some constant \(\epsilon\in(0,1)\). If \(s_{2}\asymp n^{2}\), then the Renyi index has the asymptotic expression as_ \[\mathcal{R}_{1}=1-e^{-r_{n}+O_{P}\left(\frac{1}{n\sqrt{p_{n}}}\right)},\ \ \ \ \ \ \ r_{n}=\frac{1}{n}\sum_{i=1}^{n}\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\log \left(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\right), \tag{5}\] _where the error rate \(\frac{1}{n\sqrt{p_{n}}}\) cannot be improved._ Based on Theorem 2.3, \(\mathcal{R}_{1}\) mainly depends on \(r_{n}\). For the Erdos-Renyi random graph, that is, \(f(x,y)\equiv 1\), it is obvious that \(\lambda_{0,1}=1\), \(\mu_{i}=(n-1)p_{n}\) and hence \(\mathcal{R}_{1}=o_{P}(1)\). For \(f(x,y)=e^{-\kappa x}e^{-\kappa y}\) with a positive constant \(\kappa\), \(\mu_{i}=np_{n}e^{-\frac{\kappa i}{n}}\lambda_{1,0}(1+o(1))\asymp np_{n}\), then \(s_{2}\asymp n^{2}\). The assumption of Theorem 2.3 are satisfied. Straightforward calculation yields \[r_{n}=g(\kappa)+o(1), \tag{6}\] where \[g(\kappa)=-1+\frac{\kappa}{e^{\kappa}-1}-\log\left(\frac{e^{\kappa}-1}{ \kappa e^{\kappa}}\right).\] Note that \(\lim_{\kappa\to 0^{+}}g(\kappa)=0\), \(\lim_{\kappa\to\infty}g(\kappa)=\infty\). Hence larger \(\kappa\) produces more heterogeneous random graph. This is consistent with the case \(\alpha\neq 1\). The assumption that \(f(x,y)\geq\epsilon\) for a constant \(\epsilon\in(0,1)\) in Theorem 2.2 and Theorem 2.3 can be relaxed and replaced by less restrictive assumptions. However, the alternative assumptions are difficult to state and interpret and would lead to more complex proofs. For simplicity, we do not pursue this relaxation. 
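As a quick numerical illustration of Theorem 2.2 and the closed-form limit (4), the following is a minimal simulation sketch in NumPy; the particular values of \(n\), \(p_{n}\), \(\kappa\) and \(\alpha\) are arbitrary illustrative choices, not the settings used in Section 3.

```python
import numpy as np

def renyi_index(deg, alpha):
    # definition (1) for alpha != 1, computed from the degree sequence
    r = deg / deg.mean()
    return 1.0 - np.mean(r ** alpha) ** (1.0 / (1.0 - alpha))

def limit_eq4(kappa, alpha):
    # closed-form limit (4) for f(x, y) = exp(-kappa x) exp(-kappa y)
    ratio = (np.exp(kappa * alpha) - 1) * kappa ** (alpha - 1) / (alpha * (np.exp(kappa) - 1) ** alpha)
    return 1.0 - ratio ** (1.0 / (1.0 - alpha))

rng = np.random.default_rng(1)
n, p_n, kappa, alpha = 3000, 0.2, 2.0, 2.0
x = np.arange(1, n + 1) / n
P = p_n * np.outer(np.exp(-kappa * x), np.exp(-kappa * x))   # p_n * f(i/n, j/n)
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, k=1)
A = A + A.T                                                  # symmetric adjacency, no self-loops
print(renyi_index(A.sum(axis=1), alpha), limit_eq4(kappa, alpha))   # the two values should be close
```

Increasing \(\kappa\) in this sketch pushes both values toward one, in line with the discussion above.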
In addition, Theorem 2.2 and Theorem 2.3 hold for sparse networks, since they allow \(p_{n}=o(1)\) as long as \(np_{n}\to\infty\). ### The Renyi index of a power-law random graph Empirical studies have shown that many real-world networks are highly heterogeneous, that is, the degrees of nodes follow a power-law distribution ([10, 28, 31]). This motivates us to study whether the Renyi index of power-law random graph is actually close to one. Given a positive constant \(\tau\), let \(W\) be a random variable following a distribution with power-law tail as \[\mathbb{P}(W>x)=x^{-\tau},\ \ x\geq 1. \tag{7}\] This distribution has heavy tail and the \(k\)-th moment of \(W\) exists if and only if \(k<\tau\). The distribution (7) is widely used to define power-law random graphs ([7, 19, 20]). Given independent and identically distributed random variables \(\omega_{1},\ldots,\omega_{n}\) from distribution (7), define a power-law random graph \(\mathcal{G}(n,\tau)\) as \[\mathbb{P}(A_{ij}=1|W)=p\frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n},\] where \(W=(\omega_{1},\ldots,\omega_{n})\), \(\tilde{\omega}_{i}=\min\{\omega_{i},\sqrt{n}\}\), \(p\in(0,1)\) is a constant and \(A_{ij}\) (\(1\leq i<j\leq n\)) are independent conditional on \(W\). The random graph \(\mathcal{G}(n,\tau)\) was first defined in [4, 5] and the order of large cliques was studied there. The cutoff \(\sqrt{n}\) in \(\tilde{\omega}_{i}\) guarantees the edge appearance probability is less than \(1\). This cutoff is common in algorithm analysis and random graph theory ([6, 9, 32]). We focus on the interesting regime \(\tau\in(1,2)\) as in literature ([7, 19, 20]). Note that the edges \(A_{ij}(1\leq i<j\leq n)\) are not independent and higher moments of \(\tilde{\omega}_{i}\) are not bounded. It is more challenging to derive the limit of the Renyi index \(\mathcal{R}_{\alpha}\) of \(\mathcal{G}(n,\tau)\) for arbitrary \(\alpha>0\). In this paper, we only study \(\mathcal{R}_{2}\). **Theorem 2.4**.: _Let \(\mathcal{G}(n,\tau)\) be the power-law random graph with \(\tau\in(1,2)\). Then_ \[\mathcal{R}_{2}=1-O_{P}\left(\frac{1}{n^{1-\frac{\tau}{2}}}\right), \tag{8}\] _where the rate \(\frac{1}{n^{1-\frac{\tau}{2}}}\) cannot be improved._ According to Theorem 2.4, the Renyi index \(\mathcal{R}_{2}\) of \(\mathcal{G}(n,\tau)\) converges to one in probability at rate \(n^{\frac{\tau}{2}-1}\). This indicates \(\mathcal{G}(n,\tau)\) is extremely heterogeneous, consistent with empirical observations ([10, 28, 31]). Note that nodes of \(\mathcal{G}(n,\tau)\) have the same expected degree \(p\left(\mathbb{E}[\omega_{1}]\right)^{2}\). In this sense, it seems \(\mathcal{G}(n,\tau)\) is homogeneous as the Erdos-Renyi random graph. However, the correlation between \(A_{ij}\) and \(A_{ik}\) (\(1\leq i\leq n,j\neq k\)) and the power-law tail property of \(W\) jointly make the degrees extremely different so that \(\mathcal{R}_{2}=1+o_{P}(1)\). Theorem 2.4 provides an alternative justification that power-law random graph can be used as a generative model of extremely heterogeneous networks. To conclude this section, we comment that Theorem 2.2, Theorem 2.3 and Theorem 2.4 jointly provide a theoretical justification that the Renyi index is a reasonable measure of heterogeneity of networks. For homogeneous network, the Renyi index is asymptotically zero. For extremely heterogeneous network, the Renyi index is asymptotically one. For moderately heterogeneous network, the Renyi index resides between zero and one. 
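Before turning to the simulations, here is an analogous minimal sketch for the power-law graph \(\mathcal{G}(n,\tau)\) of Theorem 2.4 (NumPy; the values of \(p\), \(\tau\) and \(n\) are illustrative): the weights are drawn by inverse-transform sampling from (7), truncated at \(\sqrt{n}\), and \(\mathcal{R}_{2}\) is computed from the resulting degrees.

```python
import numpy as np

def renyi2(deg):
    # definition (1) with alpha = 2
    r = deg / deg.mean()
    return 1.0 - 1.0 / np.mean(r ** 2)

rng = np.random.default_rng(2)
p, tau = 0.25, 1.5
for n in (500, 2000, 4000):
    w = rng.random(n) ** (-1.0 / tau)        # inverse-transform sample: P(W > x) = x^(-tau), x >= 1
    w = np.minimum(w, np.sqrt(n))            # truncated weights used in the model
    P = p * np.outer(w, w) / n               # P(A_ij = 1 | W); at most p < 1 thanks to the truncation
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, k=1)
    A = A + A.T
    print(n, round(renyi2(A.sum(axis=1)), 3))   # slowly increases toward 1 as n grows (rate n^(tau/2 - 1))
```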
## 3 Simulation In this section, we conduct a simulation study to evaluate the finite-sample performance of the Renyi index. In this simulation, 20 graphs were generated from each random graph model described in Section 2, and the Renyi index of each graph was calculated with \(\alpha\in\{0.5,1,2,2.5,3,10\}\). Then the mean and standard deviation (sd) of the Renyi indexes were computed, as well as the limit specified in Theorem 2.2 or Theorem 2.3. Firstly, we consider the heterogeneous Erdos-Renyi random graph \(\mathcal{G}(n,p_{n},f)\) with \(f(x,y)=e^{-\kappa x}e^{-\kappa y}\) for a positive constant \(\kappa\). The limit of the Renyi index has a closed form, given in (4) for \(\alpha\neq 1\) and (6) for \(\alpha=1\). With a slight abuse of notation, we denote the limit as \(\mathcal{R}_{\alpha}\). The model parameters used to generate the graphs, \(\mathcal{R}_{\alpha}\), and the mean and standard deviation of the Renyi indexes are listed in Table 1, Table 2 and Table 3. As \(n\) increases, the mean gets closer to the limit \(\mathcal{R}_{\alpha}\), and \(p_{n}\) strongly affects the convergence speed. These findings coincide with the results in Theorem 2.2 and Theorem 2.3. For the homogeneous model (\(\kappa=0.1\)), the mean and the limit \(\mathcal{R}_{\alpha}\) almost vanish, while for the heterogeneous model (\(\kappa=25\)) both are fairly large (greater than 0.8). This confirms that the Renyi index can effectively measure the heterogeneity of networks. In addition, the Renyi indexes increase as \(\alpha\) increases. Now we consider the power-law random graph in Section 2.2. The means and standard deviations (in parentheses) are summarized in Table 4. We point out that although the rate \(\frac{1}{n^{1-\frac{\tau}{2}}}\) in Theorem 2.4 only depends on \(\tau\), the term \(O_{P}\left(\frac{1}{n^{1-\frac{\tau}{2}}}\right)\) does involve the constant \(p\) in a complex way (see the proof of Theorem 2.4). As a result, the values of \(p,\tau\) may significantly affect how close the mean is to the limit \(\mathcal{R}_{2}=1\) in the finite-sample case. Table 4 shows that all the means of the Renyi indexes are larger than 0.6, indicating the power-law random graph is indeed heterogeneous. When \(n=10,000\), most of the means are close to or larger than 0.90. ## 4 Proofs In this section, we prove the main results. The main idea is to expand the Renyi index as a sum of polynomials of \(d_{i}\) plus a remainder term. Then we carefully bound the remainder term and identify the limit and exact order of the polynomial terms. For convenience, let \[\gamma_{k,l}=\frac{\sum_{i\neq j}f_{i}^{k}f_{j}^{k}f_{ij}^{l}}{n^{2}}.\] **Proof of Theorem 2.2:** The main challenge is that the degrees \(d_{i}\) (\(1\leq i\leq n\)) are not independent and may not be identically distributed. Classical tools such as the law of large numbers and the central limit theorem cannot be directly applied to derive the limit of \(\mathcal{R}_{\alpha}\). Our proof strategy is: (a) use the Taylor expansion to expand \(\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}\) at \(\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha}\) as a sum of polynomials in \(d_{i}\) and a remainder term; (b) find the exact order of the polynomial terms; (c) show the remainder term is bounded by the polynomial terms. The key step is (c). We will use a truncation technique to control the remainder term. To fix ideas, we consider the case \(\alpha\in(0,3]\backslash\{1\}\) first. Let \(\mu_{i}=\mathbb{E}(d_{i})\). 
By Taylor expansion, we have \[\left(\frac{d_{i}}{n}\right)^{\alpha}-\left(\frac{\mu_{i}}{n}\right)^{\alpha} = \alpha\left(\frac{\mu_{i}}{n}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{i}}{n}\right)+\frac{\alpha(\alpha-1)}{2!}\left(\frac{\mu_{i}}{n}\right)^{\alpha-2}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{2} \tag{9}\] \[+\frac{\alpha(\alpha-1)(\alpha-2)}{3!}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{3},\] where \(X_{n,i}\) is a random variable between \(\frac{d_{i}}{n}\) and \(\frac{\mu_{i}}{n}\). Summing both sides of (9) over \(i\in[n]\) yields \[\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}-\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha} = \alpha\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{i}}{n}\right)+\frac{\alpha(\alpha-1)}{2!}\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-2}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{2} \tag{10}\] \[+\frac{\alpha(\alpha-1)(\alpha-2)}{3!}\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{3}.\] Next, we shall find the order of each term in the right-hand side of (10). We begin with the first term. For given \(i\in[n]\), simple algebra yields \[\mu_{i}=\sum_{j\neq i}\mathbb{E}(A_{ij}) = \sum_{j\neq i}p_{n}f\left(\frac{i}{n},\frac{j}{n}\right)=np_{n}f_{i},\] \[\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha} = p_{n}^{\alpha}\sum_{i=1}^{n}f_{i}^{\alpha}.\] Then \[\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{i}}{n}\right) = p_{n}^{\alpha-1}\sum_{i=1}^{n}f_{i}^{\alpha-1}\frac{\sum_{j\neq i}(A_{ij}-f_{ij}p_{n})}{n} \tag{11}\] \[= p_{n}^{\alpha-1}\frac{\sum_{i<j}(f_{i}^{\alpha-1}+f_{j}^{\alpha-1})(A_{ij}-f_{ij}p_{n})}{n}.\] \begin{table} \begin{tabular}{|c||c|c|c|c||c||c|c|c|} \hline \(n\) & \(p_{n}\) & \(\kappa\) & \(\mathcal{R}_{0.5}\) & \(mean\) & \(sd\) & \(\mathcal{R}_{1}\) & \(mean\) & \(sd\) \\ \hline \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 1: The Rényi index of heterogeneous Erdős-Rényi random graph with \(\alpha=0.5,1\). \begin{table} \begin{tabular}{|c||c|c|c|c||c||c|c|c|} \hline \(n\) & \(p_{n}\) & \(\kappa\) & \(\mathcal{R}_{2}\) & \(mean\) & \(sd\) & \(\mathcal{R}_{2.5}\) & \(mean\) & \(sd\) \\ \hline \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 2: The Rényi index of heterogeneous Erdős-Rényi random graph with \(\alpha=2,2.5\). \begin{table} \begin{tabular}{|c||c|c|c|c||c||c|c|c|} \hline \(n\) & \(p_{n}\) & \(\kappa\) & \(\mathcal{R}_{3}\) & \(mean\) & \(sd\) & \(\mathcal{R}_{10}\) & \(mean\) & \(sd\) \\ \hline \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 3: The Rényi index of heterogeneous Erdős-Rényi random graph with \(\alpha=3,10\). 
Since \(A_{ij},(1\leq i<j\leq n)\) are independent and \(\mathbb{E}[A_{ij}-f_{ij}p_{n}]=0,\) then by (11) one has \[\mathbb{E}\left[\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{ \alpha-1}\left(\frac{d_{i}-\mu_{i}}{n}\right)\right]^{2} \tag{12}\] \[= p_{n}^{2\alpha-2}\mathbb{E}\left[\frac{\sum_{i<j}(f_{i}^{\alpha -1}+f_{j}^{\alpha-1})(A_{ij}-f_{ij}p_{n})}{n}\right]^{2}\] \[= p_{n}^{2\alpha-2}\frac{\sum_{i<j}\mathbb{E}(f_{i}^{\alpha-1}+f_ {j}^{\alpha-1})^{2}(A_{ij}-f_{ij}p_{n})^{2}}{n^{2}}\] \[= p_{n}^{2\alpha-2}\left(\frac{\sum_{i\neq j}(f_{i}^{\alpha-1}+f_ {j}^{\alpha-1})^{2}f_{ij}p_{n}}{2n^{2}}-\frac{\sum_{i\neq j}(f_{i}^{\alpha-1}+ f_{j}^{\alpha-1})^{2}f_{ij}^{2}p_{n}^{2}}{2n^{2}}\right)\] \[= p_{n}^{2\alpha-1}\frac{\sum_{i\neq j}f_{i}^{2\alpha-2}f_{ij}+ \sum_{i\neq j}f_{i}^{\alpha-1}f_{j}^{\alpha-1}f_{ij}}{n^{2}}\] \[-p_{n}^{2\alpha-1}\frac{p_{n}\sum_{i\neq j}f_{i}^{2\alpha-2}f_{ij }^{2}+p_{n}\sum_{i\neq j}f_{i}^{\alpha-1}f_{j}^{\alpha-1}f_{ij}^{2}}{n^{2}}\] \[= p_{n}^{2\alpha-1}\left[(\lambda_{2\alpha-2,1}+\gamma_{\alpha-1, 1})-p_{n}\left(\lambda_{2\alpha-2,2}+\gamma_{\alpha-1,2}\right)\right]. \tag{13}\] Since \(f(x;y)\geq\epsilon>0,\) then \(\lambda_{2\alpha-2,1}\asymp 1,\)\(\gamma_{\alpha-1,1}\asymp 1,\)\(\lambda_{2\alpha-2,2}\asymp 1,\)\(\gamma_{\alpha-1,2}\asymp 1.\) Hence the first term in the right-hand side of (10) is bounded by order \(p_{n}^{\alpha-1}\sqrt{p_{n}}.\) By Lemma 4.1, this is the exact order. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(n\) & \(p\) & \(\tau=1.05\) & \(\tau=1.50\) & \(\tau=1.95\) \\ \hline \hline & 0.01 & 0.886(0.019) & 0.939(0.013) & 0.961(0.011) \\ 500 & 0.05 & 0.744(0.017) & 0.811(0.021) & 0.855(0.017) \\ & 0.25 & 0.645(0.013) & 0.656 (0.025) & 0.653(0.033) \\ & 0.95 & 0.622(0.016) & 0.611(0.023) & 0.541(0.044) \\ \hline \hline & 0.01 & 0.877(0.007) & 0.939(0.009) & 0.961(0.008) \\ 1000 & 0.05 & 0.766(0.012) & 0.826(0.018) & 0.855(0.018) \\ & 0.25 & 0.702(0.010) & 0.691(0.024) & 0.671(0.026) \\ & 0.95 & 0.686(0.006) & 0.662(0.026) & 0.549(0.064) \\ \hline \hline & 0.01 & 0.886(0.009) & 0.941(0.006) & 0.964(0.006) \\ 2000 & 0.05 & 0.794 (0.011) & 0.831(0.013) & 0.858(0.013) \\ & 0.25 & 0.746(0.008) & 0.731(0.016) & 0.692(0.028) \\ & 0.95 & 0.741(0.011) & 0.703(0.023) & 0.606(0.032) \\ \hline \hline & 0.01 & 0.927(0.002) & 0.946(0.002) & 0.964 (0.002) \\ 10000 & 0.05 & 0.901(0.003) & 0.911(0.010) & 0.903 (0.010) \\ & 0.25 & 0.898(0.003) & 0.892(0.014) & 0.851 (0.020) \\ & 0.95 & 0.895(0.001) & 0.894(0.011) & 0.825 (0.043) \\ \hline \end{tabular} \end{table} Table 4: The Rényi index of power-law random graph with \(\alpha=2.\) Secondly, we find the order of the second term in the right-hand side of (10). Note that \[\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-2}\left(\frac{d _{i}-\mu_{i}}{n}\right)^{2} \tag{14}\] \[= \frac{1}{n^{2}}\sum_{i=1}^{n}p_{n}^{\alpha-2}f_{i}^{\alpha-2}\sum_ {j,k\neq i}(A_{ij}-p_{n}f_{ij})(A_{ik}-p_{n}f_{ik})\] \[= p_{n}^{\alpha-2}\frac{1}{n^{2}}\sum_{i\neq j}(A_{ij}-p_{n}f_{ij} )^{2}f_{i}^{\alpha-2}+p_{n}^{\alpha-2}\frac{1}{n^{2}}\sum_{i\neq j\neq k}(A_{ ij}-p_{n}f_{ij})(A_{ik}-p_{n}f_{ik})f_{i}^{\alpha-2}\] \[= S_{1}+S_{2}.\] We claim \(S_{2}=o_{P}(S_{1})\). To this end, we compute the second moment of \(S_{2}\) and the first moment of \(S_{1}\). 
Straightforward calculations yield \[\mathbb{E}\left[S_{1}\right] = p_{n}^{\alpha-2}\frac{1}{n^{2}}\sum_{i\neq j}\mathbb{E}(A_{ij}-p _{n}f_{ij})^{2}f_{i}^{\alpha-2} \tag{15}\] \[= p_{n}^{\alpha-2}\left(\frac{1}{n^{2}}\sum_{i\neq j}p_{n}f_{ij}f_ {i}^{\alpha-2}-\frac{1}{n^{2}}\sum_{i\neq j}p_{n}^{2}f_{ij}^{2}f_{i}^{\alpha-2 }\right)\] \[= p_{n}^{\alpha-1}\left(\lambda_{\alpha-2,1}-p_{n}\lambda_{\alpha -2,2}\right).\] Note that \(\lambda_{\alpha-2,1}\asymp 1\), \(\lambda_{\alpha-2,2}\asymp 1\), due to the assumption \(f(x;y)\geq\epsilon>0\). Then \(S_{1}\) is bounded by order \(p_{n}^{\alpha-1}\). By Lemma 4.1, this is the exact order. Since \(0\leq f_{i}\leq 1\) and \(0\leq f_{ij}\leq 1\), then \[\mathbb{E}\left[S_{2}^{2}\right] \leq \frac{p_{n}^{2(\alpha-2)}}{n^{4}}\sum_{\begin{subarray}{c}i\neq j \neq k\\ r\neq s\neq t\end{subarray}}\mathbb{E}(A_{ij}-p_{n}f_{ij})(A_{ik}-p_{n}f_{ik})(A _{rs}-p_{n}f_{rs})(A_{rt}-p_{n}f_{rt}) \tag{16}\] \[= \frac{p_{n}^{2(\alpha-2)}}{n^{4}}O\left(\sum_{i\neq j\neq k} \mathbb{E}(A_{ij}-p_{n}f_{ij})^{2}(A_{ik}-p_{n}f_{ik})^{2}\right)\] \[= \frac{p_{n}^{2(\alpha-2)}}{n^{4}}O\left(\sum_{i\neq j\neq k}p_{n} ^{2}f_{ij}f_{ik}\right)\] \[= O\left(\frac{p_{n}^{2\alpha-2}}{n}\right).\] Then \(S_{2}=O_{P}\left(\frac{p_{n}^{\alpha-1}}{\sqrt{n}}\right)\). Hence the exact order of the second term in the right-hand side of (10) is \(p_{n}^{\alpha-1}\) and \(S_{1}\) is the leading term. Next, we show the third term of (10) is bounded by \(p_{n}^{\alpha-1}O\left(\frac{1}{\sqrt{np_{n}}}\right)\). If \(\alpha=2\), then the third term in (10) vanishes. The desired result holds trivially. We only need to focus on the cases \(\alpha\neq 1,2\). Note that \(X_{n,i}\geq 0\). Then \[\mathbb{E}\left[\left|\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i}}{n }\right)^{3}\right|\right]\leq\sum_{i=1}^{n}\mathbb{E}\left[X_{n,i}^{\alpha-3} \left|\frac{d_{i}-\mu_{i}}{n}\right|^{3}\right]. \tag{17}\] We shall show the right-hand side of (17) is bounded by \(p_{n}^{\alpha-1}O\left(\frac{1}{\sqrt{np_{n}}}\right)\). Consider first the case \(\alpha=3\). In this case, the expansion in Equation (9) holds with \(X_{n;i}=1\), so that the analysis of Equation (17) is simpler. Since \(X_{n,i}=1\) for \(\alpha=3\). 
By the Cauchy-Schwarz inequality, we have \[\sum_{i=1}^{n}\mathbb{E}\left[\left|\frac{d_{i}-\mu_{i}}{n}\right| ^{3}\right] \leq \sum_{i=1}^{n}\sqrt{\mathbb{E}\left[\left(\frac{d_{i}-\mu_{i}}{n }\right)^{6}\right]} \tag{18}\] \[= \sum_{i=1}^{n}\sqrt{\frac{\sum_{j_{1},j_{2},\ldots,j_{6}\neq i} \mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})(A_{ij_{2}}-p_{n}f_{ij_{2}})\ldots(A_{ij_ {6}}-p_{n}f_{ij_{6}})}{n^{6}}}\] \[= \sum_{i=1}^{n}\sqrt{\frac{15\sum_{j_{1}\neq j_{2}\neq j_{3}\neq i }\mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{2}(A_{ij_{2}}-p_{n}f_{ij_{2}})^{2}(A_{ ij_{3}}-p_{n}f_{ij_{3}})^{2}}{n^{6}}}\] \[+\sum_{i=1}^{n}\sqrt{\frac{15\sum_{j_{1}\neq j_{2}\neq i} \mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{4}(A_{ij_{2}}-p_{n}f_{ij_{2}})^{2}}{n^ {6}}}\] \[+\sum_{i=1}^{n}\sqrt{\frac{20\sum_{j_{1}\neq j_{2}\neq i} \mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{3}(A_{ij_{2}}-p_{n}f_{ij_{2}})^{3}}{n^ {6}}}\] \[+\sum_{i=1}^{n}\sqrt{\frac{\sum_{j_{1}\neq i}\mathbb{E}(A_{ij_{1}} -p_{n}f_{ij_{1}})^{6}}{n^{6}}}\] \[= O\left(n\frac{\sqrt{n^{3}p_{n}^{3}}+\sqrt{n^{2}p_{n}^{2}}+\sqrt{ np_{n}}}{\sqrt{n^{6}}}\right)\] \[= p_{n}^{2}O\left(\frac{1}{\sqrt{np_{n}}}+\frac{1}{np_{n}}+\frac{ 1}{np_{n}\sqrt{np_{n}}}\right)=p_{n}^{2}O\left(\frac{1}{\sqrt{np_{n}}}\right).\] Hence, for \(\alpha=3\), it follows that \[\mathbb{E}\left[\left|\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i }}{n}\right)^{3}\right|\right]=p_{n}^{\alpha-1}O\left(\frac{1}{\sqrt{np_{n}}} \right). \tag{19}\] Next we assume \(\alpha\in(0,3)\) and \(\alpha\neq 1,2\). Let \(\delta\in(0,1)\) be an arbitrary small constant. Note that \[\mathbb{E}\left[\left|\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left(\frac {d_{i}-\mu_{i}}{n}\right)^{3}\right|\right]=\mathbb{E}\left[\left|\sum_{i=1}^{ n}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{3}\left(I[X_{n,i} \geq\delta\frac{\mu_{i}}{n}]+I[X_{n,i}<\delta\frac{\mu_{i}}{n}]\right)\right|\right] \tag{20}\] \[\leq \mathbb{E}\left[\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left|\frac{d_{i} -\mu_{i}}{n}\right|^{3}I[X_{n,i}\geq\delta\frac{\mu_{i}}{n}]\right]+\mathbb{E }\left[\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left|\frac{d_{i}-\mu_{i}}{n}\right|^{3 }I\big{[}X_{n,i}<\delta\frac{\mu_{i}}{n}\big{]}\right]\] Note that, when \(\alpha<3\), then \(\alpha-3<0\). If \(X_{n,i}\geq\delta\frac{\mu_{i}}{n}\), then \(X_{n,i}^{\alpha-3}\leq\left(\delta\frac{\mu_{i}}{n}\right)^{\alpha-3}\leq\delta ^{\alpha-3}p_{n}^{\alpha-3}f_{i}^{\alpha-3}\). So, it is possible to use the same approach as for the case \(\alpha=3\). By (17) and a similar calculation as in (18), we get \[\mathbb{E}\left[\left|\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i} }{n}\right)^{3}I[X_{n,i}\geq\delta\frac{\mu_{i}}{n}]\right|\right]\leq\delta^ {\alpha-3}p_{n}^{\alpha-3}\mathbb{E}\left[\sum_{i=1}^{n}\left|\frac{d_{i}-\mu_ {i}}{n}\right|^{3}f_{i}^{\alpha-3}\right]=p_{n}^{\alpha-1}O\left(\frac{1}{ \sqrt{np_{n}}}\right). \tag{21}\] The difficult case is \(X_{n,i}<\delta\frac{\mu_{i}}{n}\). Suppose \(X_{n,i}<\delta\frac{\mu_{i}}{n}\). Recall that \(\frac{d_{i}}{n}\leq X_{n,i}\leq\frac{\mu_{i}}{n}\) or \(\frac{\mu_{i}}{n}\leq X_{n,i}\leq\frac{d_{i}}{n}\). Then \(X_{n,i}<\delta\frac{\mu_{i}}{n}\) implies \(\frac{d_{i}}{n}\leq X_{n,i}\leq\delta\frac{\mu_{i}}{n}\). In this case, \(\frac{d_{i}}{\mu_{i}}\leq\delta\). 
Dividing both sides of (9) by \(\left(\frac{\mu_{i}}{n}\right)^{\alpha}\) yields \[\left(\frac{d_{i}}{\mu_{i}}\right)^{\alpha}-1=\alpha\left(\frac{d_{i}}{\mu_{i}}-1\right)+\frac{\alpha(\alpha-1)}{2}\left(\frac{d_{i}}{\mu_{i}}-1\right)^{2}+\frac{\alpha(\alpha-1)(\alpha-2)}{6}\frac{X_{n,i}^{\alpha-3}}{\left(\frac{\mu_{i}}{n}\right)^{\alpha-3}}\left(\frac{d_{i}}{\mu_{i}}-1\right)^{3},\] from which it follows that \[\frac{\alpha(\alpha-1)(\alpha-2)}{6}\frac{X_{n,i}^{\alpha-3}}{\left(\frac{\mu_{i}}{n}\right)^{\alpha-3}}\left(\frac{d_{i}}{\mu_{i}}-1\right)^{3} \tag{22}\] \[= -\frac{(\alpha-1)(\alpha-2)}{2}+\left(\frac{d_{i}}{\mu_{i}}\right)^{\alpha}+\alpha(\alpha-2)\frac{d_{i}}{\mu_{i}}-\frac{\alpha(\alpha-1)}{2}\left(\frac{d_{i}}{\mu_{i}}\right)^{2}.\] Note that \(\frac{d_{i}}{\mu_{i}}\geq 0\). For a fixed \(\alpha\), there exists a sufficiently small constant \(\delta>0\) such that if \(\frac{d_{i}}{\mu_{i}}\leq\delta\) then the right-hand side of (22) is bounded away from zero and infinity. This implies that \(X_{n,i}^{\alpha-3}\leq C\left(\frac{\mu_{i}}{n}\right)^{\alpha-3}\) for some constant \(C>0\) that is independent of \(i\in[n]\). Then, similar to (21), we have \[\mathbb{E}\left[\left|\sum_{i=1}^{n}X_{n,i}^{\alpha-3}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{3}I[X_{n,i}<\delta\frac{\mu_{i}}{n}]\right|\right]=p_{n}^{\alpha-1}O\left(\frac{1}{\sqrt{np_{n}}}\right). \tag{23}\] According to (20), (21) and (23), (19) holds for \(\alpha\in(0,3)\) and \(\alpha\neq 1,2\). By (17), (19) and (21), it follows that the third term of (10) is bounded by \(p_{n}^{\alpha-1}O_{P}\left(\frac{1}{\sqrt{np_{n}}}\right)\). Then the first two terms are the leading terms. By (10), we get \[\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}-p_{n}^{\alpha}\sum_{i=1}^{n}f_{i}^{\alpha} = \alpha\sum_{i=1}^{n}\left(p_{n}f_{i}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{i}}{n}\right)+\frac{\alpha(\alpha-1)}{2!}p_{n}^{\alpha-2}\frac{1}{n^{2}}\sum_{i\neq j}(A_{ij}-p_{n}f_{ij})^{2}f_{i}^{\alpha-2} \tag{24}\] \[+O_{P}\left(\frac{p_{n}^{\alpha-1}}{\sqrt{np_{n}}}+\frac{p_{n}^{\alpha-1}}{\sqrt{n}}\right)\] \[= O_{P}\left(p_{n}^{\alpha-1}\sqrt{p_{n}}\right)+O_{P}\left(p_{n}^{\alpha-1}\right)+O_{P}\left(\frac{p_{n}^{\alpha-1}}{\sqrt{np_{n}}}+\frac{p_{n}^{\alpha-1}}{\sqrt{n}}\right).\] By Lemma 4.1, the rates \(O_{P}\left(p_{n}^{\alpha-1}\sqrt{p_{n}}\right)\) and \(O_{P}\left(p_{n}^{\alpha-1}\right)\) in (24) are optimal. Besides, \(\frac{d}{n}=p_{n}\lambda_{0,1}+O_{P}\left(\frac{\sqrt{p_{n}}}{n}\right)\) and the rate \(\frac{\sqrt{p_{n}}}{n}\) cannot be improved according to Lemma 4.1. Then (2) follows for \(\alpha\in(0,3]\backslash\{1\}\). Now assume \(\alpha\in(k-1,k]\) for any fixed integer \(k\geq 4\). By Taylor expansion, we have \[\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}-\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha} = \alpha\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{i}}{n}\right)+\frac{\alpha(\alpha-1)}{2!}\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-2}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{2} \tag{25}\] \[\quad+\cdots+\frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}\sum_{i=1}^{n}X_{n,i}^{\alpha-k}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{k},\] where \(X_{n,i}\) is between \(\frac{d_{i}}{n}\) and \(\frac{\mu_{i}}{n}\). To complete the proof, it suffices to show that the first two terms of (25) are the leading terms. More specifically, we show that only the first two terms "matter" among the first \(k-1\) terms. 
Then we show the remainder term is negligible using a truncation argument, analogous to the one used for the case \(\alpha<3\). For integer \(t\) with \(3\leq t\leq k\), we have \[\mathbb{E}\left[\left|\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right) ^{\alpha-t}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{t}\right|\right] \leq \sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-t}\mathbb{E }\left[\left|\frac{d_{i}-\mu_{i}}{n}\right|^{t}\right] \tag{26}\] \[\leq \sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{\alpha-t}\sqrt{ \mathbb{E}\left[\left(\frac{d_{i}-\mu_{i}}{n}\right)^{2t}\right]}\] \[= O\left(\frac{p_{n}^{\alpha-t}}{n^{t-1}}\sqrt{\mathbb{E}\left[ \left(d_{i}-\mu_{i}\right)^{2t}\right]}\right).\] Note that \[\mathbb{E}\left[\left(d_{i}-\mu_{i}\right)^{2t}\right]=\mathbb{E}\left[\sum_{ j_{1},j_{2},\ldots,j_{2t}}(A_{ij_{1}}-p_{n}f_{ij_{1}})\ldots(A_{ij_{2t}}-p_{n}f_{ ij_{2t}})\right].\] Since \(A_{ij}\) and \(A_{il}\) are independent if \(j\neq l\), then \(\mathbb{E}[(A_{ij}-p_{n}f_{ij})(A_{il}-p_{n}f_{il})]=0\) if \(j\neq l\). If there exists an index \(j_{s}\) such that \(j_{s}\) is not equal to \(j_{r}\) for any \(r=1,2,\ldots,s-1,s+1,\ldots,2t\), then \[\mathbb{E}\left[\sum_{j_{1},j_{2},\ldots,j_{2t}}(A_{ij_{1}}-p_{n}f_{ij_{1}}) \ldots(A_{ij_{2t}}-p_{n}f_{ij_{2t}})\right]=0.\] Hence, each index \(j_{s}\) must equal another index \(j_{r}\) with \(r\neq s\). Then \[\mathbb{E}\left[\left(d_{i}-\mu_{i}\right)^{2t}\right]=\sum_{s=1}^{t}\sum_{j_{ 1},j_{2},\ldots,j_{s}:distinct}\mathbb{E}\left[(A_{ij_{1}}-p_{n}f_{ij_{1}})^{ \lambda_{1}}\ldots(A_{ij_{s}}-p_{n}f_{ij_{s}})^{\lambda_{s}}\right],\] where \(\lambda_{r}\geq 2\) are integers and \(\lambda_{1}+\lambda_{2}+\cdots+\lambda_{s}=2t\). It is easy to verify that for \(\lambda_{r}\geq 2\) (\(r=1,2,\ldots,s\)), \[\mathbb{E}\left[(A_{ij_{r}}-p_{n}f_{ij_{r}})^{\lambda_{r}}\right]=(1-p_{n}f_{ ij_{r}})^{\lambda_{r}}p_{n}f_{ij_{r}}+(-p_{n}f_{ij_{r}})^{\lambda_{r}}(1-p_{n}f_{ ij_{r}})=O(p_{n}).\] Then \(\mathbb{E}\left[\left(d_{i}-\mu_{i}\right)^{2t}\right]=O\left(\sum_{s=1}^{t}n^{s}p_{ n}^{s}\right)=O(n^{t}p_{n}^{t})\). By (26), one has \[\mathbb{E}\left[\left|\sum_{i=1}^{n}\left(\frac{\mu_{i}}{n}\right)^{ \alpha-t}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{t}\right|\right] = O\left(\frac{p_{n}^{\alpha-1}}{(np_{n})^{\frac{t}{2}-1}}\right), \hskip 28.452756pt3\leq t\leq k. \tag{27}\] If \(X_{n,i}\leq\delta\frac{\mu_{i}}{n}\) for a small constant \(\delta>0\), by a similar argument as in (22), one can get \(X_{n,i}^{\alpha-k}\leq M\left(\frac{\mu_{i}}{n}\right)^{\alpha-k}\) for a large constant \(M>0\). By (27), (24) holds. Then the proof is complete. **Lemma 4.1**.: _Under the assumption of Theorem 2.2, the following results are true. Then_ \[\frac{\sum_{i=1}^{n}\left(p_{n}f_{i}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{ i}}{n}\right)}{p_{n}^{\alpha-1}\sqrt{p_{n}(\lambda_{2\alpha-2,1}+\gamma_{ \alpha-1,1})-p_{n}^{2}(\lambda_{2\alpha-2,2}+\gamma_{\alpha-1,2})}}\ \Rightarrow\ \mathcal{N}(0,1), \tag{28}\] \[\frac{1}{n^{2}}\sum_{i\neq j}(A_{ij}-p_{n}f_{ij})^{2}f_{i}^{\alpha-2} = p_{n}(\lambda_{\alpha-2,1}-p_{n}\lambda_{\alpha-2,2})+o_{P}(1), \tag{29}\] _and_ \[\sum_{i<j}\frac{A_{ij}-p_{n}f_{ij}}{n\sqrt{p_{n}\lambda_{0,1}+p_{n}^{2} \lambda_{0,2}}}\Rightarrow\mathcal{N}(0,1). \tag{30}\] **Proof of Lemma 4.1:** (I). 
By (11) and (13), we get \[\frac{\sum_{i=1}^{n}\left(p_{n}f_{i}\right)^{\alpha-1}\left(\frac{d_{i}-\mu_{ i}}{n}\right)}{p_{n}^{\alpha-1}\sqrt{p_{n}(\lambda_{2\alpha-2,1}+\gamma_{ \alpha-1,1})-p_{n}^{2}(\lambda_{2\alpha-2,2}+\gamma_{\alpha-1,2})}}=\sum_{i<j }\frac{(f_{i}^{\alpha-1}+f_{j}^{\alpha-1})(A_{ij}-f_{ij}p_{n})}{n\sqrt{p_{n}( \lambda_{2\alpha-2,1}+\gamma_{\alpha-1,1})-p_{n}^{2}(\lambda_{2\alpha-2,2}+ \gamma_{\alpha-1,2})}}.\] Note that \(A_{ij},(1\leq i<j\leq n)\) are independent and \(0<\epsilon\leq f(x,y)\leq 1\). Then \(\lambda_{2\alpha-2,2}\asymp 1\), \(\lambda_{2\alpha-2,1}\asymp 1\), \(\gamma_{\alpha-1,1}\asymp 1\), \(\gamma_{\alpha-1,2}\asymp 1\) and \[\sum_{i<j}\mathbb{E}\left[\frac{(f_{i}^{\alpha-1}+f_{j}^{\alpha-1} )(A_{ij}-f_{ij}p_{n})}{n\sqrt{p_{n}(\lambda_{2\alpha-2,1}+\gamma_{\alpha-1,1}) -p_{n}^{2}(\lambda_{2\alpha-2,2}+\gamma_{\alpha-1,2})}}\right]^{4}\] \[= O\left(\frac{\sum_{i<j}(f_{i}^{\alpha-1}+f_{j}^{\alpha-1})^{4}f_ {ij}}{n^{4}p_{n}}\right)=o(1).\] By the Lyapunov central limit theorem, (28) holds. (II). Note that \[\mathbb{E}\left[\frac{1}{n^{2}}\sum_{i<j}(f_{i}^{\alpha-2}+f_{j}^{ \alpha-2})\big{[}(A_{ij}-p_{n}f_{ij})^{2}-p_{n}f_{ij}(1-p_{n}f_{ij})\big{]} \right]^{2}\] \[= \frac{1}{n^{4}}\sum_{i<j}(f_{i}^{\alpha-2}+f_{j}^{\alpha-2})^{2} \mathbb{E}\big{[}(A_{ij}-p_{n}f_{ij})^{2}-p_{n}f_{ij}(1-p_{n}f_{ij})\big{]}^{2}\] \[= O\left(\frac{1}{n^{4}}\sum_{i<j}(f_{i}^{\alpha-2}+f_{j}^{\alpha- 2})^{2}p_{n}f_{ij}\right)\] \[= O\left(\frac{p_{n}}{n^{4}}\sum_{i\neq j}f_{ij}f_{i}^{2(\alpha-2) }+\frac{p_{n}}{n^{4}}\sum_{i\neq j}f_{ij}f_{i}^{\alpha-2}f_{j}^{\alpha-2}\right) =o(1).\] Hence (29) holds. (III). Note that \[\mathbb{E}\left[\frac{\sum_{i<j}(A_{ij}-p_{n}f_{ij})}{n^{2}}\right]^{2}=\frac {\sum_{i<j}\mathbb{E}(A_{ij}-p_{n}f_{ij})^{2}}{n^{4}}=\frac{\sum_{i<j}(p_{n}f _{ij}-p_{n}^{2}f_{ij}^{2})}{n^{4}}=\frac{p_{n}\lambda_{0,1}+p_{n}^{2}\lambda_ {0,2}}{n^{2}}.\] Since \(f(x,y)\geq\epsilon>0\), then \(\lambda_{0,1}\asymp 1\), \(\lambda_{0,2}\asymp 1\) and \[\sum_{i<j}\frac{\mathbb{E}(A_{ij}-p_{n}f_{ij})^{4}}{\big{(}n\sqrt{p_{n} \lambda_{0,1}+p_{n}^{2}\lambda_{0,2}}\big{)}^{4}}=O\left(\frac{\sum_{i<j}f_{ij }}{n^{4}p_{n}}\right)=o(1).\] By the Lyapunov central limit theorem, (30) holds. **Proof of Theorem 2.3:** The proof strategy is the same as the proof of Theorem 2.2. Note that \(\frac{d}{n}=p_{n}\lambda_{0,1}+O_{P}\left(\frac{\sqrt{p_{n}}}{n}\right)\) by Lemma 4.1 and \(d=\frac{1}{n}\sum_{i=1}^{n}d_{i}\). Then we have \[\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\log\frac{d_{i}}{d} = \frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\log\left(\frac{np_{n} \lambda_{0,1}}{d}\frac{d_{i}}{np_{n}\lambda_{0,1}}\right) \tag{31}\] \[= \frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\log\left(\frac{np_{n} \lambda_{0,1}}{d}\right)+\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\log\left( \frac{d_{i}}{np_{n}\lambda_{0,1}}\right)\] \[= \log\left(\frac{np_{n}\lambda_{0,1}}{d}\right)+\frac{np_{n} \lambda_{0,1}}{d}\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{np_{n}\lambda_{0,1}} \log\frac{d_{i}}{np_{n}\lambda_{0,1}}\] \[= O_{P}\left(\frac{1}{n\sqrt{p_{n}}}\right)+\frac{np_{n}\lambda_{0, 1}}{d}\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{np_{n}\lambda_{0,1}}\log\frac{d_{i }}{np_{n}\lambda_{0,1}}.\] It suffices to get the limit of \(\sum_{i=1}^{n}\frac{d_{i}}{np_{n}\lambda_{0,1}}\log\frac{d_{i}}{np_{n}\lambda_{0,1}}\). Recall that \(\mu_{i}=\mathbb{E}(d_{i})=\sum_{j\neq i}p_{n}f_{ij}\). 
By the Taylor expansion, we have \[\frac{d_{i}}{np_{n}\lambda_{0,1}}\log\left(\frac{d_{i}}{np_{n} \lambda_{0,1}}\right) = \frac{d_{i}}{np_{n}\lambda_{0,1}}\log\left(\frac{\mu_{i}}{np_{n} \lambda_{0,1}}\right)+\frac{d_{i}}{\mu_{i}}\left(\frac{d_{i}-\mu_{i}}{np_{n} \lambda_{0,1}}\right) \tag{32}\] \[-\frac{np_{n}\lambda_{0,1}d_{i}}{2\mu_{i}^{2}}\left(\frac{d_{i}- \mu_{i}}{np_{n}\lambda_{0,1}}\right)^{2}+\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np _{n}\lambda_{0,1}}\left(\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right)^{3},\] where \(\frac{d_{i}}{np_{n}\lambda_{0,1}}\leq X_{n,i}\leq\frac{\mu_{i}}{np_{n} \lambda_{0,1}}\) or \(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\leq X_{n,i}\leq\frac{d_{i}}{np_{n}\lambda _{0,1}}\). Summing both sides of (32) over \(i\in[n]\) yields \[\sum_{i=1}^{n}\frac{d_{i}}{np_{n}\lambda_{0,1}}\log\left(\frac{d_ {i}}{np_{n}\lambda_{0,1}}\right) = \sum_{i=1}^{n}\frac{d_{i}}{np_{n}\lambda_{0,1}}\log\left(\frac{\mu _{i}}{np_{n}\lambda_{0,1}}\right)+\sum_{i=1}^{n}\frac{d_{i}}{\mu_{i}}\left( \frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right)\] \[-\sum_{i=1}^{n}\frac{np_{n}\lambda_{0,1}d_{i}}{2\mu_{i}^{2}}\left( \frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right)^{2}+\sum_{i=1}^{n}\frac{1}{3X _{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left(\frac{d_{i}-\mu_{i}}{np_{n} \lambda_{0,1}}\right)^{3}.\] Next we isolate the leading terms in the right-hand side of (33). More specifically, we show the first term is the leading term, and the second term, the third terms and the remainder term are of smaller order. For the remainder term, a truncation technique as in the proof of Theorem 2.2 will be used. Firstly, we consider the second of (33). Note that \[\sum_{i=1}^{n}\frac{d_{i}}{\mu_{i}}\left(\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right)=\sum_{i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i}np_{n}\lambda_{0,1 }}+\sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}. \tag{34}\] We find the order of each term in the right-hand side of (34). Recall that \(A_{ij},(1\leq i<j\leq n)\) are independent. Then straightforward calculations yield \[\mathbb{E}\left[\sum_{i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i} np_{n}\lambda_{0,1}}\right] = \sum_{i=1}^{n}\frac{\sum_{j\neq i}\mathbb{E}(A_{ij}-p_{n}f_{ij})^ {2}}{\mu_{i}np_{n}\lambda_{0,1}} \tag{35}\] \[= \sum_{i=1}^{n}\frac{\sum_{j\neq i}p_{n}f_{ij}(1-p_{n}f_{ij})}{\mu _{i}np_{n}\lambda_{0,1}}\] \[= \sum_{i=1}^{n}\frac{\sum_{j\neq i}p_{n}f_{ij}-\sum_{j\neq i}p_{n} ^{2}f_{ij}^{2}}{\mu_{i}np_{n}\lambda_{0,1}}\] \[= \frac{1}{p_{n}\lambda_{0,1}}\left(1-p_{n}\frac{1}{n}\sum_{i=1}^{n} \frac{\sum_{j\neq i}f_{ij}^{2}}{\sum_{j\neq i}f_{ij}}\right),\] and \[\mathbb{E}\left[\sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right]^ {2}=\sum_{i\neq j}^{n}\frac{\mathbb{E}(A_{ij}-p_{n}f_{ij})^{2}}{n^{2}p_{n}^{2} \lambda_{0,1}^{2}}=\frac{\sum_{i\neq j}^{n}p_{n}f_{ij}-\sum_{i\neq j}^{n}p_{n} ^{2}f_{ij}^{2}}{n^{2}p_{n}^{2}\lambda_{0,1}^{2}}=\frac{1}{p_{n}\lambda_{0,1}}- \frac{\lambda_{0,2}}{\lambda_{0,1}^{2}}. \tag{36}\] Note that \(\frac{1}{n}\sum_{i=1}^{n}\frac{\sum_{j\neq i}f_{ij}^{2}}{\sum_{j\neq i}f_{ij}}\leq 1\). Then (34) has order \(O_{P}\left(\frac{1}{p_{n}}\right)\). Next, we get the order of the third term in the right-hand side of (33). Simple algebra yields \[\sum_{i=1}^{n}\frac{np_{n}\lambda_{0,1}d_{i}}{\mu_{i}^{2}}\left(\frac{d_{i}-\mu _{i}}{np_{n}\lambda_{0,1}}\right)^{2}=\frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^ {n}\frac{(d_{i}-\mu_{i})^{3}}{\mu_{i}^{2}}+\frac{1}{np_{n}\lambda_{0,1}}\sum_{ i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i}}. 
\tag{37}\] Now we get an upper bound of the two terms in (37). By the Cauchy-Schwarz inequality, one gets \[\frac{1}{np_{n}\lambda_{0,1}}\mathbb{E}\left[\left|\sum_{i=1}^{n} \frac{(d_{i}-\mu_{i})^{3}}{\mu_{i}^{2}}\right|\right] \tag{38}\] \[\leq \frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\mathbb{E}\left[\frac {|d_{i}-\mu_{i}|^{3}}{\mu_{i}^{2}}\right]\] \[\leq \frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\frac{1}{\mu_{i}^{2}} \sqrt{\mathbb{E}(d_{i}-\mu_{i})^{6}}\] \[\leq \frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\frac{1}{\mu_{i}^{2}} \sqrt{\sum_{j_{1}\neq j_{2}\neq i}\mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{2}( A_{ij_{2}}-p_{n}f_{ij_{2}})^{2}(A_{ij_{3}}-p_{n}f_{ij_{3}})^{2}}\] \[+\frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\frac{1}{\mu_{i}^{2}} \sqrt{\sum_{j_{1}\neq j_{2}\neq i}\mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{3}( A_{ij_{2}}-p_{n}f_{ij_{2}})^{3}}\] \[+\frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\frac{1}{\mu_{i}^{2}} \sqrt{\sum_{j_{1}\neq j_{2}\neq i}\mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{4}( A_{ij_{2}}-p_{n}f_{ij_{2}})^{2}}\] \[+\frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\frac{1}{\mu_{i}^{2}} \sqrt{\sum_{j_{1}\neq i}\mathbb{E}(A_{ij_{1}}-p_{n}f_{ij_{1}})^{6}}\] \[= \frac{1}{np_{n}\lambda_{0,1}}O\left(\sum_{i=1}^{n}\frac{1}{\sqrt{ \mu_{i}}}+\sum_{i=1}^{n}\frac{1}{\mu_{i}}+\sum_{i=1}^{n}\frac{1}{\sqrt{\mu_{i }^{3}}}\right)\] \[= \frac{1}{p_{n}}O\left(\frac{1}{\sqrt{np_{n}}}\right).\] Then the first term of (37) is bounded by \(\frac{1}{p_{n}}O_{P}\left(\frac{1}{\sqrt{np_{n}}}\right)\). By (35), the second term is bounded by \(O_{P}\left(\frac{1}{p_{n}}\right)\). Next, we consider the last term in the right-hand side of (33). Let \(\delta\in(0,1)\) be an **arbitrary** small constant. We shall find an upper bound of the last term of (33) in two cases: \(X_{n,i}\geq\delta\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\) and \(X_{n,i}<\delta\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\). If \(X_{n,i}\geq\delta\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\), then \[\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left|\frac{d_{i}-\mu_{i }}{np_{n}\lambda_{0,1}}\right|^{3}\leq\frac{1}{3\delta^{3}}\frac{d_{i}}{np_{n} \lambda_{0,1}}\left|\frac{d_{i}-\mu_{i}}{\mu_{i}}\right|^{3}. 
\tag{39}\] Suppose \(X_{n,i}<\delta\frac{\mu_{i}}{np_{n}\lambda_{0,1}}.\) If \(X_{n,i}<\frac{d_{i}}{np_{n}\lambda_{0,1}},\) then \(X_{n,i}\) cannot be between \(\frac{d_{i}}{np_{n}\lambda_{0,1}}\) and \(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}.\) Therefore, \(\frac{d_{i}}{np_{n}\lambda_{0,1}}\leq X_{n,i}<\delta\frac{\mu_{i}}{np_{n} \lambda_{0,1}}.\) Then \(\frac{d_{i}}{\mu_{i}}\leq\delta.\) Since \(-\log x\rightarrow\infty\) as \(x\to 0^{+}\) and \(\frac{d_{i}}{\mu_{i}}\geq 0,\) for small enough \(\delta,\) by (32) we have \[\frac{\left(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\right)^{3}}{3X_{n,i}^{3}}= \frac{-\log\left(\frac{d_{i}}{\mu_{i}}\right)+\left(\frac{d_{i}}{\mu_{i}}-1 \right)-\frac{1}{2}\left(\frac{d_{i}}{\mu_{i}}-1\right)^{2}}{(1-\frac{d_{i}} {\mu_{i}})^{3}}\leq-2\log\left(\frac{d_{i}}{\mu_{i}}\right).\] Consequently, it follows that \[\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left|\frac{d_{i}-\mu_ {i}}{np_{n}\lambda_{0,1}}\right|^{3}\leq-2\log\left(\frac{d_{i}}{\mu_{i}} \right)\frac{d_{i}}{\mu_{i}^{2}np_{n}\lambda_{0,1}}.\] Note that \(\lim_{x\to 0^{+}}x\log x=o(1).\) For small enough \(\delta,\) it follows that \(-2\log\left(\frac{d_{i}}{\mu_{i}}\right)\frac{d_{i}}{\mu_{i}}\leq 1\) and hence \[\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left|\frac{d_{i}-\mu_ {i}}{np_{n}\lambda_{0,1}}\right|^{3}\leq\frac{|d_{i}-\mu_{i}|^{3}}{\mu_{i}^{2} np_{n}\lambda_{0,1}}. \tag{40}\] By (39) and (40), for a fixed small constant \(\delta\in(0,1),\) one has \[\sum_{i=1}^{n}\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left|\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right|^{3} = \sum_{i=1}^{n}\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left|\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right|^{3}I[X_{n,i}< \delta\frac{\mu_{i}}{np_{n}\lambda_{0,1}}] \tag{41}\] \[+\sum_{i=1}^{n}\frac{1}{3X_{n,i}^{3}}\frac{d_{i}}{np_{n}\lambda_{0,1}}\left|\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\right|^{3}I[X_{n,i}\geq \delta\frac{\mu_{i}}{np_{n}\lambda_{0,1}}]\] \[\leq \frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\frac{|d_{i}-\mu_{i}|^ {3}}{\mu_{i}^{2}}+\frac{1}{3\delta^{3}}\sum_{i=1}^{n}\frac{d_{i}}{np_{n} \lambda_{0,1}}\left|\frac{d_{i}-\mu_{i}}{\mu_{i}}\right|^{3}.\] By (38), it follows that \[\frac{1}{np_{n}\lambda_{0,1}}\mathbb{E}\left[\left|\sum_{i=1}^{n} \frac{d_{i}}{\mu_{i}}\frac{(d_{i}-\mu_{i})^{3}}{\mu_{i}^{2}}\right|\right] \leq \frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}\sqrt{\mathbb{E}\left( \frac{d_{i}}{\mu_{i}}\right)^{2}}\sqrt{\frac{\mathbb{E}(d_{i}-\mu_{i})^{6}}{ \mu_{i}^{4}}} \tag{42}\] \[\leq \frac{1}{np_{n}\lambda_{0,1}}\sum_{i=1}^{n}(1+\frac{1}{\sqrt{\mu_ {i}}})\sqrt{\frac{\mathbb{E}(d_{i}-\mu_{i})^{6}}{\mu_{i}^{4}}}\] \[= \frac{1}{p_{n}}O\left(\frac{1}{\sqrt{np_{n}}}\right).\] Hence by(33), (34), (35), (36), (38), (41), (42) and (37), we get that \[\sum_{i=1}^{n}\frac{d_{i}}{np_{n}\lambda_{0,1}}\log\left(\frac{d_ {i}}{np_{n}\lambda_{0,1}}\right)-\sum_{i=1}^{n}\frac{\mu_{i}}{np_{n}\lambda_{0, 1}}\log\left(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\right)\] \[= \sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{np_{n}\lambda_{0,1}}\left(1+ \log\left(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\right)\right)+\frac{1}{2np_{n} \lambda_{0,1}}\sum_{i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i}}+\frac{1}{p_{n} }O_{P}\left(\frac{1}{\sqrt{np_{n}}}\right).\] Further, it follows from Lemma 4.2 that \[\frac{1}{\sqrt{s_{2}p_{n}}}\sum_{i=1}^{n}d_{i}\log\left(\frac{d_{i}} {np_{n}\lambda_{0,1}}\right)-\frac{1}{\sqrt{s_{2}p_{n}}}\sum_{i=1}^{n}\mu_{i} \log\left(\frac{\mu_{i}}{np_{n}\lambda_{0,1}}\right) \tag{44}\] 
\[= \sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{\sqrt{s_{2}p_{n}}}\left(1+l_{i }\right)+\frac{1}{2}\sum_{i=1}^{n}\frac{\sum_{j\neq i}(A_{ij}-p_{n}f_{ij})^{2} }{\mu_{i}\sqrt{s_{2}p_{n}}}+O_{P}\left(\frac{\sqrt{np_{n}}}{p_{n}\sqrt{s_{2}p_ {n}}}+\frac{\sqrt{n}}{\sqrt{s_{2}p_{n}}}\right)\] \[= O_{P}\left(1\right)+O_{P}\left(\frac{p_{n}\tau_{1}}{\sqrt{s_{2}p _{n}}}\right),\] and the rates \(O_{P}\left(1\right)\) and \(O_{P}\left(\frac{p_{n}\tau_{1}}{\sqrt{p_{n}s_{2}}}\right)\) cannot be improved. Then (5) follows from (31) and (44). **Lemma 4.2**.: _Under the assumptions of Theorem 2.3, the following results are true._ \[\sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{\sqrt{s_{2}p_{n}}}\left(1+l_{i }\right) \Rightarrow \mathcal{N}(0,1), \tag{45}\] \[\sum_{i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i}\sqrt{s_{2}p_{n}}} = \frac{p_{n}\tau_{1}-p_{n}^{2}\tau_{1,2}}{\sqrt{s_{2}p_{n}}}+o_{P}( 1). \tag{46}\] **Proof of Lemma 4.2**. We firstly prove (45). Note that \[\sum_{i=1}^{n}(d_{i}-\mu_{i})(1+l_{i}) = \sum_{i<j}(A_{ij}-p_{n}f_{ij})(2+l_{i}+l_{j}).\] Then \[\mathbb{E}\left[\sum_{i<j}(A_{ij}-p_{n}f_{ij})(2+l_{i}+l_{j}) \right]^{2} = \sum_{i<j}\mathbb{E}(A_{ij}-p_{n}f_{ij})^{2}(2+l_{i}+l_{j})^{2}\] \[= \sum_{i<j}(2+l_{i}+l_{j})^{2}p_{n}f_{ij}(1-p_{n}f_{ij})=s_{2}p_{n}.\] Besides, \[\sum_{i<j}\frac{\mathbb{E}(A_{ij}-p_{n}f_{ij})^{4}(2+l_{i}+l_{j})^{4}}{s_{2}^ {2}p_{n}^{2}}=O\left(\frac{\sum_{i<j}(2+l_{i}+l_{j})^{4}p_{n}f_{ij}}{s_{2}^{2} p_{n}^{2}}\right)=O\left(\frac{s_{4}}{s_{2}^{2}p_{n}}\right)=o(1).\] By the Lyapunov central limit theorem, we have \[\frac{\sum_{i<j}(A_{ij}-p_{n}f_{ij})(2+l_{i}+l_{j})}{\sqrt{s_{2}p_{n}}} \Rightarrow\mathcal{N}(0,1).\] Next we prove (46). Note that \[\sum_{i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i}} = \sum_{i=1}^{n}\frac{\sum_{j\neq k\neq i}(A_{ij}-p_{n}f_{ij})(A_{ik} -p_{n}f_{ik})}{\mu_{i}} \tag{47}\] \[+\sum_{i=1}^{n}\frac{\sum_{j\neq i}(A_{ij}-p_{n}f_{ij})^{2}}{\mu_ {i}}.\] Since \[\mathbb{E}\left[\sum_{i=1}^{n}\frac{\sum_{j\neq k\neq i}(A_{ij}-p _{n}f_{ij})(A_{ik}-p_{n}f_{ik})}{\mu_{i}}\right]^{2}\] \[= \sum_{i=1}^{n}\frac{\sum_{j\neq k\neq i}\mathbb{E}(A_{ij}-p_{n}f _{ij})^{2}(A_{ik}-p_{n}f_{ik})^{2}}{\mu_{i}^{2}}=O\left(n\right),\] then \[\sum_{i=1}^{n}\frac{(d_{i}-\mu_{i})^{2}}{\mu_{i}} = \sum_{i=1}^{n}\frac{\sum_{j\neq i}(A_{ij}-p_{n}f_{ij})^{2}}{\mu_ {i}}+O_{P}\left(\sqrt{n}\right).\] Note that \[\sum_{i=1}^{n}\frac{\sum_{j\neq i}(A_{ij}-p_{n}f_{ij})^{2}}{\mu_{i}}=\sum_{i< j}\left(\frac{1}{\mu_{i}}+\frac{1}{\mu_{j}}\right)(A_{ij}-p_{n}f_{ij})^{2},\] \[\sum_{i<j}\left(\frac{1}{\mu_{i}}+\frac{1}{\mu_{j}}\right)\mathbb{E}(A_{ij}-p _{n}f_{ij})^{2}=\sum_{i<j}\left(\frac{1}{\mu_{i}}+\frac{1}{\mu_{j}}\right)p_{ n}f_{ij}(1-p_{n}f_{ij}),\] and \[Var\left(\sum_{i<j}\left(\frac{1}{\mu_{i}}+\frac{1}{\mu_{j}}\right)(A_{ij}-p _{n}f_{ij})^{2}\right)=p_{n}\tau_{2}(1+o(1)).\] Since \(\tau_{2}=o(s_{2})\), then \[\frac{\sum_{i<j}\left(\frac{1}{\mu_{i}}+\frac{1}{\mu_{j}}\right)(A_{ij}-p_{n} f_{ij})^{2}}{\sqrt{s_{2}p_{n}}}=\frac{\sum_{i<j}\left(\frac{1}{\mu_{i}}+ \frac{1}{\mu_{j}}\right)p_{n}f_{ij}(1-p_{n}f_{ij})}{\sqrt{s_{2}p_{n}}}+o_{P}(1).\] **Lemma 4.3**.: _For positive \(k\) with \(k\neq\tau\), we have_ \[\mathbb{E}(\tilde{\omega}_{1}^{k})=n^{\frac{k-\tau}{2}}\frac{k}{k-\tau}-\frac{ \tau}{k-\tau}.\] Proof of Lemma 4.3:.: Recall that \(\tilde{\omega}_{i}=\min\{\omega_{i},\sqrt{n}\}\). 
By definition, the \(k\)-th moment of \(\tilde{\omega}_{1}\) is equal to \[\mathbb{E}(\tilde{\omega}_{1}^{k})= \int_{1}^{+\infty}(\omega_{i}\wedge\sqrt{n})^{k}\tau\omega_{1}^{- \tau-1}d\omega_{1}\] \[= \int_{1}^{\sqrt{n}}\tau\omega^{k-\tau-1}d\omega+\int_{\sqrt{n}}^ {+\infty}n^{\frac{k}{2}}\tau\omega^{-\tau-1}d\omega\] \[= \frac{\tau}{k-\tau}\omega^{k-\tau}|_{1}^{\sqrt{n}}+\tau n^{\frac {k}{2}}\frac{1}{(-\tau)}\omega^{-\tau}|_{\sqrt{n}}^{+\infty}\] \[= \frac{\tau}{k-\tau}(n^{\frac{k-\tau}{2}}-1)+n^{\frac{k-\tau}{2}}\] \[= n^{\frac{k-\tau}{2}}(\frac{\tau}{k-\tau}+1)-\frac{\tau}{k-\tau}\] \[= n^{\frac{k-\tau}{2}}\frac{k}{k-\tau}-\frac{\tau}{k-\tau},\ \ k\neq\tau.\] **Proof of Theorem 2.4**.: The proof strategy is similar to the proof of Theorem 2.2. Let \(\mu=\frac{\tau^{2}}{(\tau-1)^{2}}\). By Lemma 4.3, \(\mu_{i}=\mathbb{E}(d_{i})=p\mu\). Simple algebra yields \[\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{2}-\frac{p^{2}\mu^{2} }{n} = 2\frac{p\mu}{n}\sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{n}+\sum_{i=1}^{n }\left(\frac{d_{i}-\mu_{i}}{n}\right)^{2}. \tag{48}\] We now find the order of each term in the right-hand side of (48). The first term of (48) can be decomposed as \[\sum_{i=1}^{n}\frac{d_{i}-\mu_{i}}{n}=\frac{\sum_{i\neq j}\left(A_{ij}-p^{ \frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n}}\right)}{n}+\frac{\sum_{i\neq j }\left(p^{\frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n}}-\frac{p\mu}{n} \right)}{n}. \tag{49}\] Note that \(A_{ij}\) (\(1\leq i<j\leq n\)) are conditionally independent given \(W\). Then \[\mathbb{E}\left[\frac{\sum_{i<j}\left(A_{ij}-p^{\frac{\tilde{\omega}_{i} \tilde{\omega}_{j}}{n}}\right)}{n}\right]^{2}=\frac{\sum_{i<j}\mathbb{E}\left( A_{ij}-p^{\frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n}}\right)^{2}}{n^{2}} \leq\mathbb{E}\left[p\frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n}\right]=O \left(\frac{1}{n}\right). \tag{50}\] The second moment of the second term of (49) can be bounded as \[\mathbb{E}\left[p\frac{\sum_{i<j}(\tilde{\omega}_{i}\tilde{\omega }_{j}-\mu)}{n^{2}}\right]^{2} = \frac{p^{2}}{n^{4}}O\left(\sum_{i\neq j\neq k}\mathbb{E}(\tilde{ \omega}_{i}\tilde{\omega}_{j}-\mu)(\tilde{\omega}_{i}\tilde{\omega}_{k}-\mu)\right) \tag{51}\] \[+\frac{p^{2}}{n^{4}}O\left(\sum_{i<j}\mathbb{E}(\tilde{\omega}_{i }\tilde{\omega}_{j}-\mu)^{2}\right)\] \[= \frac{p^{2}\mu^{2}}{n}O\left(n^{\frac{2-\tau}{2}}\right)+\frac{p ^{2}}{n^{2}}O\left(n^{2-\tau}\right)\] \[= O\left(n^{-\frac{\tau}{2}}p^{2}\mu^{2}\right),\] where we used Lemma 4.3 in the second equality. Hence the first term of (48) is \(O_{P}\left(n^{-1-\frac{\tau}{4}}p\mu\right)\). Now we consider the second term of (48). By (50) and (51), we have \[\sum_{i=1}^{n}\left(\frac{d_{i}-\mu_{i}}{n}\right)^{2} = \frac{1}{n^{2}}\sum_{i=1}^{n}\left(\sum_{i\neq j}\left(A_{ij}-p \frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n}\right)\right)^{2}+\frac{p^{2}}{ n^{4}}\sum_{i=1}^{n}\left(\sum_{i\neq j}(\tilde{\omega}_{i}\tilde{\omega}_{j}-\mu) \right)^{2} \tag{52}\] \[+2\frac{p}{n^{3}}\sum_{i=1}^{n}\left(\left[\sum_{i\neq j}\left(A_ {ij}-p\frac{\tilde{\omega}_{i}\tilde{\omega}_{j}}{n}\right)\right]\left[\sum_{ i\neq j}(\tilde{\omega}_{i}\tilde{\omega}_{j}-\mu)\right]\right)\] \[= O_{P}\left(\frac{1}{n}\right)+\frac{p^{2}}{n^{4}}\sum_{i\neq j \neq k}(\tilde{\omega}_{i}\tilde{\omega}_{j}-\mu)(\tilde{\omega}_{i}\tilde{ \omega}_{k}-\mu)+O_{P}\left(\frac{1}{n^{\frac{1}{2}+\frac{\tau}{4}}}\right),\] where the last term \(O_{P}\left(\frac{1}{n^{\frac{1}{2}+\frac{\tau}{4}}}\right)\) follows from the Cauchy-Schwarz inequality, (50) and (51). 
Note that \(\mathbb{E}\left[\sum_{i\neq j\neq k}\tilde{\omega}_{i}^{2}\tilde{\omega}_{j} \tilde{\omega}_{k}\right]\asymp n^{3+\frac{2-\tau}{2}}\). Then \[\sum_{i\neq j\neq k}(\tilde{\omega}_{i}\tilde{\omega}_{j}-\mu)( \tilde{\omega}_{i}\tilde{\omega}_{k}-\mu) = \sum_{i\neq j\neq k}(\tilde{\omega}_{i}^{2}\tilde{\omega}_{j} \tilde{\omega}_{k}-\tilde{\omega}_{i}\tilde{\omega}_{j}\mu-\tilde{\omega}_{i} \tilde{\omega}_{k}\mu+\mu^{2})\] \[= (1+o_{P}(1))\sum_{i\neq j\neq k}\tilde{\omega}_{i}^{2}\tilde{ \omega}_{j}\tilde{\omega}_{k}\] \[= (1+o_{P}(1))2\sum_{i<j<k}(\tilde{\omega}_{i}^{2}\tilde{\omega}_{j }\tilde{\omega}_{k}+\tilde{\omega}_{i}\tilde{\omega}_{j}^{2}\tilde{\omega}_{k}+ \tilde{\omega}_{i}\tilde{\omega}_{j}\tilde{\omega}_{k}^{2}).\] Hence, by Lemma 4.4, the second term of (52) is the leading term and its exact order is \(O_{P}\left(n^{-\frac{\tau}{2}}\right)\). Moreover, by (49), (50) and (51), we obtain \(\frac{d}{n}=\frac{p\mu}{n}+O_{P}\left(\frac{p\mu}{n^{1+\frac{\tau}{4}}}\right)\). Then the desired result follows. **Lemma 4.4**.: _Let \(\theta_{n}=3\mu\left(n^{\frac{2-\tau}{2}}\frac{2}{2-\tau}-\frac{\tau}{2-\tau}\right)\) and_ \[U_{n}=\frac{1}{\binom{n}{3}}\sum_{i<j<k}\left(\tilde{\omega}_{i}^{2}\tilde{ \omega}_{j}\tilde{\omega}_{k}+\tilde{\omega}_{i}\tilde{\omega}_{j}^{2}\tilde{ \omega}_{k}+\tilde{\omega}_{i}\tilde{\omega}_{j}\tilde{\omega}_{k}^{2}-\theta _{n}\right).\] _Then_ \[\frac{\sqrt{4-\tau}U_{n}}{6\mu n^{\frac{1}{2}-\frac{\tau}{4}}}\Rightarrow \mathcal{N}(0,1). \tag{53}\] **Proof of Lemma 4.4**. Note that \(U_{n}\) is a U-statistic of order 3. We shall use the asymptotic theory of U-statistics to get the desired result (53). Let \(\phi(\tilde{\omega}_{1},\tilde{\omega}_{2},\tilde{\omega}_{3})=\tilde{\omega }_{1}^{2}\tilde{\omega}_{2}\tilde{\omega}_{3}+\tilde{\omega}_{1}\tilde{\omega }_{2}^{2}\tilde{\omega}_{3}+\tilde{\omega}_{1}\tilde{\omega}_{2}\tilde{\omega}_ {3}^{2}\) and \(\phi_{1}(\tilde{\omega}_{1})=\mathbb{E}[\phi(\tilde{\omega}_{1},\tilde{\omega }_{2},\tilde{\omega}_{3})|\tilde{\omega}_{1}]\). Direct calculation yields \(\phi_{1}(\tilde{\omega}_{1})=\tilde{\omega}_{1}^{2}\mu+2\tilde{\omega}_{1}\eta _{n}\) with \(\eta_{n}=\left(n^{\frac{2-\tau}{2}}\frac{2}{2-\tau}-\frac{\tau}{2-\tau}\right) \frac{\tau}{\tau-1}\). Note that \[\mathbb{E}[U_{n}|\tilde{\omega}_{1}] =\frac{1}{\binom{n}{3}}\sum_{1\leq i<j<k\leq n}\mathbb{E}[\phi( \tilde{\omega}_{i},\tilde{\omega}_{j},\tilde{\omega}_{k})-\theta_{n})|\tilde{ \omega}_{1}]=\frac{3}{n}(\phi_{1}(\tilde{\omega}_{1})-\theta_{n}).\] Let \(\tilde{U}_{n}=\frac{3}{n}\sum_{i=1}^{n}(\phi_{1}(\tilde{\omega}_{i})-\theta_{n})\) and \(\sigma_{n}^{2}=Var(\tilde{U}_{n})\). Then \[\sigma_{n}^{2} = \frac{9}{n}\mathbb{E}(\phi_{1}(\tilde{\omega}_{1})-\theta_{n})^{2} \tag{54}\] \[= \frac{9}{n}\Big{[}\mathbb{E}(\tilde{\omega}_{1}^{4})\mu^{2}+4 \eta_{n}^{2}\mathbb{E}(\tilde{\omega}_{1}^{2})+4\mu\eta_{n}\mathbb{E}(\tilde{ \omega}_{1}^{3})-\theta_{n}^{2}\Big{]}\] \[= (1+o(1))\frac{9}{n}\left[\frac{4\mu^{2}}{4-\tau}n^{\frac{4-\tau} {2}}+\frac{8\eta_{n}^{2}}{2-\tau}n^{\frac{2-\tau}{2}}+\frac{12\mu\eta_{n}}{3- \tau}n^{\frac{3-\tau}{2}}-\theta_{n}^{2}\right]\] \[= (1+o(1))\frac{36\mu^{2}}{4-\tau}n^{\frac{4-\tau}{2}-1}.\] Let \(Y_{i}=\frac{3}{n}(\phi_{1}(\tilde{\omega}_{i})-\theta_{n})\). Then \(Y_{i}\) (\(1\leq i\leq n\)) are independent, \(\mathbb{E}(Y_{i})=0\) and \(\tilde{U}_{n}=\sum_{i=1}^{n}Y_{i}\). 
Since \[\frac{\sum_{i=1}^{n}\mathbb{E}(Y_{i}^{4})}{\sigma_{n}^{4}} = \frac{81}{n^{4}\sigma^{4}}\sum_{i=1}^{n}\mathbb{E}[(\phi_{1}( \tilde{\omega}_{i})-\theta_{n})^{4}]=O\left(\frac{\mathbb{E}(\tilde{\omega}_{ 1}^{8}+\tilde{\omega}_{1}^{4}\eta_{n}^{4})}{n^{5-\tau}}\right)\] \[= O\left(\frac{n^{\frac{8-\tau}{2}}+n^{\frac{4-\tau}{2}+2(2-\tau )})}{n^{5-\tau}}\right)=o(1),\] by the Lyapunov Central Limit Theorem, we get that \(\frac{\tilde{U}_{n}}{\sigma_{n}}\Rightarrow\mathcal{N}(0,1)\). To finish the proof, it suffices to show \(\frac{U_{n}}{\sigma_{n}}=\frac{\tilde{U}_{n}}{\sigma_{n}}+o_{P}(1)\). Note that \[\mathbb{E}[\tilde{U}_{n}U_{n}] =\mathbb{E}\left[\frac{3}{n}\sum_{i=1}^{n}(\phi_{1}(\tilde{\omega }_{i})-\theta_{n})U_{n}\right]\] \[=\frac{3}{n}\sum_{i=1}^{n}\mathbb{E}[(\phi_{1}(\tilde{\omega}_{i })-\theta_{n})\mathbb{E}(U_{n}|\tilde{\omega}_{i})]\] \[=\frac{3^{2}}{n^{2}}\sum_{i=1}^{n}\mathbb{E}[\phi_{1}(\tilde{ \omega}_{i})-\theta_{n}]^{2}\] \[=\frac{3^{2}}{n}\mathbb{E}[\phi_{1}(\tilde{\omega}_{1})-\theta_{n }]^{2}\] \[=\frac{3^{2}}{n}Var(\phi_{1}(\tilde{\omega}_{1}))=Var(\tilde{U}_{ n}).\] Then \[\mathbb{E}\left[\frac{U_{n}-\tilde{U}_{n}}{\sigma_{n}}\right]^{2} = \frac{1}{\sigma_{n}^{2}}\big{[}\mathbb{E}(U_{n})^{2}+\mathbb{E}( \tilde{U}_{n}^{2})-2\mathbb{E}(\tilde{U}_{n}U_{n})\big{]} \tag{55}\] \[= \frac{1}{\sigma_{n}^{2}}[\mathbb{E}(U_{n}^{2})-\mathbb{E}(\tilde{ U}_{n}^{2})].\] Next, we find \(\mathbb{E}(U_{n}^{2}).\) \[\mathbb{E}(U_{n}^{2}) = \frac{1}{{n\choose 3}^{2}}\sum_{\begin{subarray}{c}i<j<k,\\ i_{1}<j_{1}<k_{1}\end{subarray}}\mathbb{E}(\phi(\tilde{\omega}_{i},\tilde{\omega }_{j},\tilde{\omega}_{k})-\theta_{n})(\phi(\tilde{\omega}_{i_{1}},\tilde{\omega }_{j_{1}},\tilde{\omega}_{k_{1}})-\theta_{n}) \tag{56}\] \[= \frac{1}{{n\choose 3}^{2}}\sum_{1\leq i<j<k\leq n}\mathbb{E}(\phi( \tilde{\omega}_{i},\tilde{\omega}_{j},\tilde{\omega}_{k})-\theta_{n})^{2}\] \[+\frac{1}{{n\choose 3}^{2}}\sum_{\begin{subarray}{c}i<j<k,\\ i_{1}<j_{1}<k_{1}\\ |\{i,j,k\}\cap\{i_{1},j_{1},k_{1}\}|=2\end{subarray}}\mathbb{E}(\phi(\tilde{ \omega}_{i},\tilde{\omega}_{j},\tilde{\omega}_{k})-\theta_{n})(\phi(\tilde{ \omega}_{i_{1}},\tilde{\omega}_{j_{1}},\tilde{\omega}_{k_{1}})-\theta_{n})\] \[+\frac{1}{{n\choose 3}^{2}}\sum_{\begin{subarray}{c}i<j<k,\\ i_{1}<j_{1}<k_{1}\\ |\{i,j,k\}\cap\{i_{1},j_{1},k_{1}\}|=1\end{subarray}}\mathbb{E}(\phi(\tilde{ \omega}_{i},\tilde{\omega}_{j},\tilde{\omega}_{k})-\theta_{n})(\phi(\tilde{ \omega}_{i_{1}},\tilde{\omega}_{j_{1}},\tilde{\omega}_{k_{1}})-\theta_{n})\] \[= O\left(\frac{1}{{n\over n^{{3\over 2}}}^{{7}-1}}\right)+O \left(\frac{1}{{n}^{{7}-1}}\right)+\sigma_{n}^{2}(1+o(1)).\] Combining (55) and (56) yields \(\frac{U_{n}}{\sigma_{n}}=\frac{\tilde{U}_{n}}{\sigma_{n}}+o_{p}(1).\) Then the proof is complete. 
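As a side check, the closed form of Lemma 4.3 is easy to verify numerically. The following Python sketch (with purely illustrative values of \(n\) and \(\tau\), not tied to any result above) compares a Monte Carlo estimate of \(\mathbb{E}(\tilde{\omega}_{1}^{k})\) with the stated formula.

```python
import numpy as np

def truncated_pareto_moment_mc(n, tau, k, n_samples=2_000_000, seed=0):
    """Monte Carlo estimate of E[(min(omega, sqrt(n)))^k] for a Pareto(tau)
    weight with density tau * omega^(-tau-1) on [1, +inf)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_samples)
    omega = u ** (-1.0 / tau)              # inverse-CDF sampling of Pareto(tau)
    omega_tilde = np.minimum(omega, np.sqrt(n))
    return np.mean(omega_tilde ** k)

def truncated_pareto_moment_exact(n, tau, k):
    """Closed form of Lemma 4.3 (valid for k != tau)."""
    return n ** ((k - tau) / 2.0) * k / (k - tau) - tau / (k - tau)

if __name__ == "__main__":
    n, tau = 10_000, 1.5                   # illustrative values only
    for k in (1, 2, 3):
        mc = truncated_pareto_moment_mc(n, tau, k)
        exact = truncated_pareto_moment_exact(n, tau, k)
        print(f"k={k}: Monte Carlo ~ {mc:.3f}, Lemma 4.3 ~ {exact:.3f}")
```

For instance, with \(n=10{,}000\) and \(\tau=1.5\), the \(k=2\) case gives \(n^{1/4}\cdot 4-3=37\), which the Monte Carlo estimate reproduces.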
**Proof of Proposition 2.1:** When \(\alpha>1,\) the function \(f(x)=x^{\alpha}\) is convex for \(x>0.\) By Jensen's inequality, we have \[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{d_{i}}{d}\right)^{\alpha}=\frac{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}}{\left(\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{n}\right)^{\alpha}}\geq\frac{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}}{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{d_{i}}{n}\right)^{\alpha}}=1.\] Then \(\mathcal{R}_{\alpha}\in[0,1].\) When \(\alpha=1,\) the function \(f(x)=x\log x\) is convex for \(x>0.\) By Jensen's inequality, we have \[\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\log\left(\frac{d_{i}}{d}\right)\geq\left(\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\right)\log\left(\frac{1}{n}\sum_{i=1}^{n}\frac{d_{i}}{d}\right)=0.\] Then \(\mathcal{R}_{\alpha}\in[0,1].\) **Acknowledgement** The author is grateful to the Editor and the anonymous reviewers for valuable comments that significantly improved the manuscript.
2305.08116
The Structure and Dynamics of Knowledge Graphs, with Superficiality
Large knowledge graphs combine human knowledge garnered from projects ranging from academia and institutions to enterprises and crowdsourcing. Within such graphs, each relationship between two nodes represents a basic fact involving these two entities. The diversity of the semantics of relationships constitutes the richness of knowledge graphs, leading to the emergence of singular topologies, sometimes chaotic in appearance. However, this complex characteristic can be modeled in a simple way by introducing the concept of superficiality, which controls the overlap between relationships whose facts are generated independently. With this model, superficiality also regulates the balance of the global distribution of knowledge by determining the proportion of misdescribed entities. This is the first model for the structure and dynamics of knowledge graphs. It leads to a better understanding of formal knowledge acquisition and organization.
Loïck Lhote, Béatrice Markhoff, Arnaud Soulet
2023-05-14T10:16:07Z
http://arxiv.org/abs/2305.08116v4
# The Structure and Dynamics of Knowledge Graphs, with Superficiality ###### Abstract Large knowledge graphs combine human knowledge garnered from projects ranging from academia and institutions to enterprises and crowdsourcing. Within such graphs, each relationship between two nodes represents a basic fact involving these two entities. The diversity of the semantics of relationships constitutes the richness of knowledge graphs, leading to the emergence of singular topologies, sometimes chaotic in appearance. However, this complex characteristic can be modeled in a simple way by introducing the concept of superficiality, which controls the overlap between relationships whose facts are generated independently. Superficiality also regulates the balance of the global distribution of knowledge by determining the proportion of mis-described entities. This is the first model for the structure and dynamics of knowledge graphs. It leads to a better understanding of formal knowledge acquisition and organization. ## Knowledge Graphs A knowledge graph is a knowledge base represented as a directed graph whose vertices are the entities and whose labeled edges are their relationships. Each edge represents a fact similar to an elementary sentence that relates a subject to an object. For example, the fact (Neurotrophin-3, biological process, memory) represents the involvement of the protein Neurotrophin-3 (subject entity) in the biological process (relationship) of memory (object entity). Since the development of the Semantic Web [1], knowledge graphs are often associated with open data projects of the Web of data. This movement has led to the development of scientific and institutional knowledge bases of unprecedented size, notably in cultural heritage [2] and life sciences [3, 4]. At the same time, other projects aim to build encyclopedic knowledge bases such as Yago [5], DBpedia [6] or Wikidata [7], whose collaborative editing has made it possible to group more than 14 billion facts. For the Wikidata relationship biological process alone, there are more than 1.1M stated facts, comparable to the triplet (Neurotrophin-3, biological process, memory). This large, semantically rich data source allows for creating new scientific hypotheses by cross-referencing knowledge, manually or through machine learning. In this context, understanding knowledge graphs' topology is a fundamental issue in estimating how complete the accumulated knowledge is and in predicting its evolution. Only this understanding can guarantee that new knowledge induced from these knowledge graphs, manually or through machine learning, is valid with respect to the real world. The topology of knowledge graphs and their evolution remain largely unknown because of the complex interactions between relationships. In network science, it is well known that the preferential attachment mechanism [8] plays a central role in constructing graphs. However, the proposed models apply to most of the networks where all the links obey the same preferential attachment and, in some cases, to graphs with two kinds of relationships, whereas a knowledge graph contains dozens or even hundreds of relationships. In this paper, we show how the multiplexing of different kinds of relationships (as considered in [9, 10]) brings out complex and unexpected topological phenomena.
By exploring knowledge graphs representing diverse knowledge, such as documentary heritage, bioactive molecules [11] or encyclopedic knowledge [7], we observe that their topology does not boil down to a simple power law like simplex networks, such as the Web or citation graphs [12]. Naturally, the superposition of the singular dynamics of each relationship leads to a multimodal probability distribution. More surprisingly, the multiplexing of relationships also creates strong irregularities in the probability distribution, especially for the outgoing connectivity of entities. This phenomenon is crucial because it concerns the entities with a low connectivity, corresponding to the majority of the knowledge graph's entities. To understand the origin of this phenomenon, we introduce the notion of superficiality, that is the probability of adding a new entity in the graph when a relationship must be enriched with an additional entity (otherwise the relationship is enriched with an entity already existing in the graph). The superficiality controls the concentration of kinds of relationships per entity, involving the presence or absence of these perturbations and determining the proportion of misdescribed entities. ## Preferential Attachment and Multiplex Networks Preferential attachment, defined by Barabasi and Albert [8], is one of the main mechanisms to explain the emergence of network structure. It consists in adding links giving a priority to nodes that already have strong connectivity. More precisely, the probability of adding a new link connecting vertex \(i\) will be proportional to \(k_{i}^{\alpha}\) with \(k_{i}\) being the degree of \(i\) (its connectivity) and \(\alpha>0\) being an important parameter to consider the diversity of observed distributions [13]. A linear exponent \(\alpha=1\) infers a graph whose degrees follow a power law. The proportion of nodes of degree \(k\) is proportional to \(k^{-\gamma}\) for \(k\) large. For a sublinear exponent \(0<\alpha<1\) the degree distribution tends to a stretched exponential function \(e^{-k\gamma}\). Several proposals for generalizing the Barabasi-Albert model are also relevant in the context of modeling knowledge graphs. As in directed networks, it is reasonable to distinguish between the preferential attachment of incoming connectivity and outgoing connectivity [14]. For example, the Wikidata relationship biological process describing the involvement of entities (e.g., proteins or genes) in biological processes (e.g., memory or digestion) is asymmetric. The entities involved are unevenly distributed between the different processes with only 1 fact for more than 15 thousand biological processes against several tens of thousands of facts for the main biological processes such as metabolism or regulation of DNA-induced transcription. This corresponds to a quasi-linear preferential attachment (i.e., \(\alpha\approx 1\)) while each entity is involved in a close number of processes reflecting a weak preferential attachment (i.e., \(\alpha\approx 0\)). It is thus necessary to take into account sublinear attachments for some relationships [15]. Of course, these models and more elaborate models [16, 17] dedicated to simplex networks infer laws where multimodality or localized drops in connectivity as observed in knowledge graphs (pointed by the red arrows in the graphs of Figure 1) can not be explained. 
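As a minimal illustration of the attachment kernel \(k_{i}^{\alpha}\) discussed above, the following Python sketch grows a toy simplex network (it is not the knowledge-graph model introduced below, and all parameter values are arbitrary); a linear exponent produces a heavy, power-law-like degree tail, whereas a strongly sublinear exponent yields a much lighter tail.

```python
import numpy as np

def grow_degrees(n_steps, alpha, seed=0):
    """Sketch of degree growth under preferential attachment with exponent alpha:
    at each step a new node attaches to an existing node chosen with probability
    proportional to k**alpha."""
    rng = np.random.default_rng(seed)
    degrees = [1, 1]                      # start from a single edge
    for _ in range(n_steps):
        k = np.asarray(degrees, dtype=float)
        probs = k ** alpha
        probs /= probs.sum()
        target = rng.choice(len(degrees), p=probs)
        degrees[target] += 1              # attach the new node to `target`
        degrees.append(1)                 # the new node enters with degree 1
    return np.asarray(degrees)

max_linear = grow_degrees(20_000, alpha=1.0).max()      # heavy tail
max_sublinear = grow_degrees(20_000, alpha=0.3).max()   # light tail
print(max_linear, max_sublinear)
```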
Conversely, a knowledge graph is similar to a multiplex (also called, multi-layer or multi-dimensional) network where each relationship constitutes a distinct layer [9, 10]. The study of generative models for multiplex networks is a tough and little studied scientific challenge, which explains why our proposal is the first generative model adapted to knowledge graphs. There are only generative models for duplex networks [18, 19] that consider correlations between the two layers. Their analytical results would be difficult to generalize to knowledge graphs which have many relationships. For instance, Wikidata had about 1,500 relationships in 2022. Moreover, this would mean defining at least a quadratic number of parameters with the number of relationships. Instead of considering strong interactions between a small number of layers, our proposal aims at privileging weak interactions between a large number of layers. ## Generative Model with Superficiality In our model, the only link between the different relationships is the sharing of entities which varies according to the level of superficiality. We generate the facts of each relationship separately and independently by distributing them over shared entities. In this way, we take into account the semantics of each relationship by distinguishing its sets of subjects and objects, and by assigning it a specific preferential attachment. More precisely, we add at each time a new fact in the knowledge graph by starting to randomly choose a relationship with a probability proportional to \(\rho_{r}\) (Step 1 of Figure 2). The subject and object entities of this new fact are then drawn by incorporating the preferential attachment specific to the chosen relationship (Step 2 repeated for the choice of subject and object). With probability \(\beta_{r}\), we focus on the semantics of the relationship by randomly drawing an entity \(e\) with a probability proportional to \(k_{e}^{\alpha_{r}}\) where \(k_{e}\) is the degree of \(e\) for the relationship \(r\) (Step 2a). Otherwise, we enrich the relationship \(r\) with a new entity having no facts yet for this relationship by considering two cases. In the first case, with a probability \(\sigma\), we add a new entity that does not yet belong to the knowledge graph (Step 2b). In the second case, with a probability \(1-\sigma\), we uniformly draw an entity among those already existing in the knowledge graph because added for enriching another relationship (Step 2c). In other words, the superficiality \(\sigma\) is the probability of creating a new entity in the graph when it is necessary to add an entity that is not yet described by the facts of this relationship. In practice, we can parameterize our generative model to verify its ability to reproduce existing knowledge graphs. Each probability \(\rho_{r}\) corresponds to the proportion of facts of the relationship \(r\) among all the facts of the knowledge graph. For a relationship \(r\), if the generation process is repeated \(n_{r}\) times, the average number of distinct entities generated for this relationship is then \((1-\beta_{r})\times n_{r}\). Thus, for a given knowledge graph, \(1-\beta_{r}\) is the number of distinct entities for the relationship \(r\) divided by its number of facts. The superficiality parameter \(\sigma\) is then derived by considering both the number of facts \(\ell\) and the number of entities \(m\) present in the knowledge graph. 
Indeed, the average number of distinct entities generated by our model by repeating it \(\ell\) times will be \(\sum_{r\in\mathcal{R}}\rho_{r}\times(1-\beta_{r})\times\sigma\times\ell\). To obtain \(m\) entities on average, the probability \(\sigma\) must be equal to \(m/(\sum_{r\in\mathcal{R}}\rho_{r}\times(1-\beta_{r})\times\ell)\). With the number of entities in the numerator and the number of facts in the denominator, the superficiality \(\sigma\) might look like a kind of an average number of facts per entity normalized between 0 and 1. But more insights are captured by weighting the number of facts with the proportion and probability of attachment. Our random generative model reproduces well the general shape of the distribution of incoming and outgoing degrees measured in the three major knowledge graphs (Figure 1). In particular, the curvatures induced by the multimodality of our model are visible and close to those observed in real data (e.g., for Wikidata outgoing connectivity). For outgoing connectivity, the drop in the proportion \(P(k)\) for low degrees is perfectly transcribed, especially on Wikidata where the probability drops for \(k=1\). However, our random model misses some of the micro-variations, especially for the outgoing connectivities of ChEMBL. For this graph, as the number of relationships is limited (only 50), localized perturbations due to inter-relationship correlations ignored by our model are more visible. However, except for the outgoing connectivity in ChEMBL, the Kullback-Leibler divergence is small for all distributions indicating high proximity between the real and generated distributions. We also observe for these three knowledge graphs that the superficiality is lower for the outgoing connectivity where precisely, the variations of the proportion \(P(k)\) are the most chaotic. We have conducted an ablation study to compare the generative model that we define to the generative models of Barabasi-Albert, and Bollobas. First, we observed that the parameterization of the preferential attachment is always beneficial. Our second observation is that the use of a multiplex model is relevant. Of course, the simplex approaches do not take into account the oscillations, contrary to our generative model. Finally, on the three knowledge graphs used for experiments (BnF, ChEMBL and Wikidata), it is clear that our model is the most satisfactory. This study and its results are reported in the supplementary materials. At the level of a relationship, the parameters \(\alpha_{r}\) and \(\beta_{r}\) are intrinsic characteristics of the relationship \(r\). The exponent \(\alpha_{r}\) constrains the distribution of facts between a uniform drawing of entities (\(\alpha_{r}=0\)) and a high preferential attachment (\(\alpha_{r}=1\)). The probability \(\beta_{r}\) regulates the average number of facts per entity corresponding to this relationship. The fact acquisition rate for the relationship \(r\) describing the entity \(e\) is as follows: \(k_{e}(t+1)-k_{e}(t)=\beta_{r}\times k_{e}(t)^{\alpha_{r}}/Z(t)\) with the normalization \(Z(t)=\sum_{e\in\mathcal{E}}k_{e}(t)^{\alpha_{r}}\). For a linear exponent \(\alpha_{r}=1\), we find a variant of Barabasi-Albert model [8] because \(Z(t)=t\). Following the same method of resolution, it is possible to compute the probability distribution which follows a power law of exponent \(1+1/\beta_{r}\). 
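For completeness, this exponent can be recovered with the standard continuum sketch of the preferential-attachment argument: \[\frac{\partial k_{e}(t)}{\partial t}=\beta_{r}\,\frac{k_{e}(t)}{t}\;\Longrightarrow\;k_{e}(t)=\left(\frac{t}{t_{e}}\right)^{\beta_{r}},\qquad P\big(k_{e}(t)\geq k\big)=P\big(t_{e}\leq t\,k^{-1/\beta_{r}}\big)\approx k^{-1/\beta_{r}},\] where \(t_{e}\) is the arrival time of entity \(e\) in relationship \(r\), approximately uniform over \([0,t]\). Differentiating in \(k\) gives \(P_{1,\beta_{r}}(k)\propto k^{-(1+1/\beta_{r})}\), the power law of exponent \(1+1/\beta_{r}\) stated above.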
For a sublinear exponent \(\alpha_{r}<1\), our model is close to the one studied in [15] with the sublinear connection kernel leading only to explicit results for some limit cases. Nevertheless, whatever the exponent \(\alpha_{r}\) and the probability \(\beta_{r}\), it is clear that the average proportion that an entity has \(k\) facts, \(P_{\alpha_{r},\beta_{r}}(k)\), is always decreasing with the degree \(k\), that does not explain the strong irregularities observed in many knowledge graphs. Indeed, the variations observed in the total connectivity \(P(k)\), defined as the average proportion that an entity has \(k\) facts in total, comes from the combined effect of the average proportions \(P_{\alpha_{r},\beta_{r}}(k)\) of the different relationships \(r\). For example, the incoming connectivity of the entity memory aggregates both the facts of the relationship biological process, but also those of other relationships such as the relationship field of work, indicating people and institutions working on memory. The multimodality of \(P(k)\) is simply explained by the superposition of these different average proportions. In contrast, the observed drops are related to the distribution of the number of distinct relationships for all entities. In the case where a majority of entities would be described by 2 kinds of relationships (say biological process and field of work), the number of entities with only one fact would become lower than the number of entities with 2 facts inducing \(P(1)<P(2)\). To theoretically analyze this multiplexing phenomenon, we now consider that all the relationships have equivalent dynamics with the same proportion \(\rho\), the same attachment probability \(\beta\) and the same exponent \(\alpha\). It is possible to calculate the average proportion \(P(r_{e}(t)=r)\) (or simply, \(P(r)\)) that an entity has \(r\) distinct relationships by observing that the evolution of \(r_{e}(t)\) depends on two terms (Step 2c of Figure 2). First, the probability of adding a new relationship \(r\) to an entity \(e\) at time \(t+1\) is \((n-r_{e}(t))\times\rho\times(1-\beta)\times(1-\sigma)\). Secondly, only an entity created by another relationship than \(r\) can be chosen to receive a new fact for \(r\). At time \(t\), there are \(t\times(n-1)\times\rho\times(1-\beta)\times\sigma\) entities which were added in the graph by a relationship to which it is necessary to withdraw those added by the relationship \(r\) namely \(t\times\rho\times(1-\beta)\times(1-\sigma)\). The probability of choosing a particular entity is therefore \(1/\left[\rho\times(1-\beta)\times(\sigma n-1)\times t\right]\). For this probability to make sense, the generation of a knowledge graph for a superficiality \(\sigma\) imposes a minimal number of relationships: \(n>1/\sigma-1\). Considering these two probabilities, the acquisition rate of a new relationship is the following after simplification (for \(n>1/\sigma-1\)): \[P(r_{e}(t)=r)=\frac{1\times K_{1}\times\cdots\times K_{r-1}}{(1+K_{1})\times(1 +K_{2})\times\cdots\times(1+K_{r})} \tag{1}\] where \(K_{i}=\frac{1-\sigma}{n\sigma-1}\left(n-i\right)\). The theoretical formula of Equation 1 provides a valid justification for the observed perturbations for low degrees when the superficiality is low (in the case of outgoing connectivity). To illustrate this phenomenon, the simulation in Figure 3(a) superimposes the \(P(k)\) and \(P(r)\) distributions. 
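Simulations of this kind are easy to prototype. The following Python sketch is an illustrative reading of the generative process (Steps 1 and 2a–2c); it is not the authors' reference implementation, the parameter values are arbitrary, and Step 2c is simplified to a uniform draw over all existing entities.

```python
import numpy as np

def generate_kg(n_facts, rho, alpha, beta, sigma, seed=0):
    """Minimal sketch of the generative process: returns (subject, relationship,
    object) facts. rho, alpha, beta are per-relationship arrays; sigma is the
    superficiality."""
    rng = np.random.default_rng(seed)
    n_rel = len(rho)
    entities = []                                   # all entity ids
    # degree[r][role]: entity id -> number of facts of r in that role
    degree = [[{}, {}] for _ in range(n_rel)]       # role 0 = subject, 1 = object
    facts = []

    def draw_entity(r, role):
        deg = degree[r][role]
        if deg and rng.random() < beta[r]:          # Step 2a: attachment within r, prob ~ k**alpha_r
            ids = list(deg)
            w = np.array([deg[e] for e in ids], float) ** alpha[r]
            e = ids[rng.choice(len(ids), p=w / w.sum())]
        elif rng.random() < sigma or not entities:  # Step 2b: brand-new entity
            e = len(entities)
            entities.append(e)
        else:                                       # Step 2c (simplified): reuse an existing entity
            e = entities[rng.integers(len(entities))]
        deg[e] = deg.get(e, 0) + 1
        return e

    for _ in range(n_facts):
        r = rng.choice(n_rel, p=rho)                # Step 1: choose a relationship
        facts.append((draw_entity(r, 0), r, draw_entity(r, 1)))
    return facts, entities

# Illustrative parameters only (not fitted to BnF, ChEMBL or Wikidata).
rho = np.array([0.5, 0.3, 0.2])
facts, entities = generate_kg(50_000, rho, alpha=[1.0, 0.5, 1.0],
                              beta=[0.85, 0.85, 0.85], sigma=0.3)
print(len(entities), "entities generated for", len(facts), "facts")
```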
Of course, the average proportion of entities with \(r\) relationships follows perfectly the theoretical values with an increase for \(\sigma=0.05\) and a decrease for \(\sigma=0.95\). The impact of \(P(r)\) on \(P(k)\) is particularly visible for the case where the superficiality is very low with \(\sigma=0.05\). In this case, the proportion of entities described by only a few facts, which indicates a form of ignorance, is much lower than when superficiality is high. ## Impact of Superficiality on Ignorance The quality of the knowledge about an entity is not only summarized by its number of facts because not all facts have the same value and some unrepresented ones can be inferred. Nevertheless, the presence of a large number of facts certainly reflects knowledge. With the open world assumption, it is more difficult to discern the entities for which there is no knowledge to represent and those for which the knowledge to represent is absent from the graph. However, it is reasonable to think that each entity should be described by at least a few facts to specify its type and its basic interactions with other entities. For example, even a protein that is not involved in any biological process should be attached to the protein type, associated with the taxons where it exists, etc. Nevertheless, our model highlights a strong inequality in the distribution of knowledge, as for all systems based on a preferential attachment mechanism. Some entities accumulate more and more facts allowing them to refine their understanding, at the cost of introducing new entities linked to a few facts, thus maintaining a high proportion of misdescribed entities. Indeed, increasing the volume of knowledge does not globally reduce the level of ignorance underlying the graph, i.e. the proportion of entities that are described by a few facts \(P(k_{e}(t)\leq k)\) with \(k\) small. Obviously, the acquisition over time of more facts does not modify this proportion conjugating the two distributions \(P_{\alpha,\beta}(k_{e}(t))\) and \(P(r_{e}(t))\), which are independent of \(t\). More surprisingly, the increase of the number of relationships in the knowledge graph has a marginal effect on the level of ignorance through the proportion of entities described by a low number of relationships \(P(r_{e}(t)\leq r)\). We see that knowing more relationships changes this proportion of misdescribed entities to tend to a limit \(\lim_{n\rightarrow+\infty}P(r_{e}(t)\leq r)=1-(1-\sigma)^{r}\). The proportion of misdescribed entities curiously increases with \(n\) until it reaches the limit \(1-(1-\sigma)^{r}\) when the superficiality \(\sigma\) is low. Although this increase is very slight, it reflects a form of knowledge paradox where the more we learn, the less we know. On the contrary, if the superficiality \(\sigma\) is high, the proportion of misdescribed entities decreases with the addition of relationships to quickly reach the limit. This reduction is misleading since the final proportion converges toward a much higher ignorance than with a low \(\sigma\). Reducing the proportion of mis-described entities does not mean knowing well, but knowing better with the level of expertise set by \(\sigma\). This observation should make one careful when exploiting knowledge graphs to avoid a kind of Dunning-Kruger effect [20]. The fluctuation of the proportion of misdescribed entities does not allow to judge the quality of the knowledge graph; only the level of superficiality \(\sigma\) matters. 
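For completeness, the limit \(\lim_{n\rightarrow+\infty}P(r_{e}(t)\leq r)=1-(1-\sigma)^{r}\) invoked above can be read off Equation 1: for fixed \(i\), \(K_{i}\to\frac{1-\sigma}{\sigma}\) as \(n\to+\infty\), so that \[P(r)\;\longrightarrow\;\frac{\left(\frac{1-\sigma}{\sigma}\right)^{r-1}}{\left(\frac{1}{\sigma}\right)^{r}}=\sigma(1-\sigma)^{r-1}\qquad\text{and}\qquad P(r_{e}(t)\leq r)\;\longrightarrow\;\sum_{j=1}^{r}\sigma(1-\sigma)^{j-1}=1-(1-\sigma)^{r}.\] In other words, in the limit the number of distinct relationships describing an entity is geometric with parameter \(\sigma\), which is why the residual level of ignorance is set by the superficiality alone and not by the number of relationships.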
Figure 3(b) plots \(P(r_{e}(t)\leq 3)\) according to the number of relationships \(n\) and the superficiality \(\sigma\) in knowledge graphs simulated with \(\alpha_{r}=1\) and \(\beta_{r}=0.85\). With the color scale of the figure, an evolution from red to blue corresponds to a reduction of ignorance. Confirming the theoretical results, it appears clearly that the addition of relationships modifies little the level of ignorance. The only way to significantly reduce the proportion of misdescribed entities in a knowledge graph is to reduce its superficiality to concentrate more relationships on each entity. Yet, by repeating longitudinal measurements in the Wikidata knowledge graph between 2016 and 2022, we observed relative stability of superficiality (from 0.348 to 0.284 for outgoing connectivity and from 0.716 to 0.797 for ingoing connectivity). To take this further, it would be feasible to refine the analysis to identify parts of the graph with reasonable superficiality corresponding to the best-informed fields, from which one can safely induce new knowledge. The superficiality is an essential parameter to properly model the construction of knowledge graphs, to understand their evolution and evaluate their quality. In computer science, having a realistic theoretical model for knowledge graphs is crucial to optimize their interrogation by improving data storage and better estimate the cost of queries. In knowledge engineering, our modeling is fundamental to estimate the robustness of the knowledge contained in these large graphs to induce new reliable knowledge, but also to study their vulnerability [21]. We believe that although simple, this description of knowledge graphs not only allows better exploitation of knowledge graphs in the Web of Data with possible opportunities in many domains, but it also opens the way to interdisciplinary research perspectives. Finally, like Wikidata that is a mirror of the Wikipedia encyclopedia, each knowledge graph is a computerized representation of the knowledge of a field. Although this representation bias leads us to be cautious, our model could suggest assumptions about the organization of knowledge by considering our work as a form of computational epistemology.
2305.07225
Edge-Enhanced Microscopy of Complex Objects using Scalar and Vectorial Vortex Filtering
Recently, a $4f$ system containing a q-plate has been used to perform edge detection and enhancement of amplitude and phase objects. However, only a few studies have concentrated on edge enhancement of phase-amplitude objects. Here, we experimentally verified the functional difference between scalar and vectorial vortex filtering using an onion cell; the experimental results agree well with theoretical analysis. We verified our experimental results through numerical simulation. Although vectorial vortex filtering successfully enhanced the edges of phase and amplitude objects in the phase-amplitude object, they are indistinguishable due to the equal enhancement of the edges of the phase and amplitude objects. To address this, we propose a method to isolate the edge of the phase object from the edge of the amplitude object using off-axis beam illumination. We theoretically calculated the isolation of the edge of the phase object from the amplitude object, and verified it via numerical simulations.
Jigme Zangpo, Tomohiro Kawabe, Hirokazu Kobayashi
2023-05-12T03:37:53Z
http://arxiv.org/abs/2305.07225v3
# Edge-Enhanced Microscopy of Complex Objects using Scalar and Vectorial Vortex Filtering ###### Abstract Recently, a \(4f\) system containing a q-plate has been used to perform edge detection and enhancement of amplitude and phase objects. However, only a few studies have concentrated on edge enhancement of phase-amplitude objects. Here, we experimentally verified the functional difference between scalar and vectorial vortex filtering using an onion cell; the experimental results agree well with theoretical analysis. We verified our experimental results through numerical simulation. Although vectorial vortex filtering successfully enhanced the edges of phase and amplitude objects in the phase-amplitude object, they are indistinguishable due to the equal enhancement of the edges of the phase and amplitude objects. To address this, we propose a method to isolate the edge of the phase object from the edge of the amplitude object using off-axis beam illumination. We theoretically calculated the isolation of the edge of the phase object from the amplitude object, and verified it via numerical simulations. ## 1 Introduction A traditional bright field microscope (BFM) generates the contrast of an opaque amplitude object (AO) by the absorption of transmitted light in dense areas of the AO [1]. However, BFM is not very useful for transparent phase objects (POs) such as biological cells because it displays low contrast images [2]. Additionally, phase-contrast microscopes (PCMs) provide suitable contrast for PO samples [2, 3], but this technique cannot highlight structural details such as the edges of samples. PCMs also suffer from phase halos and shade-off in phase-contrast images [4, 5]. Differential interference-contrast microscopes (DICs) have two significant advantages over PCM and BFM. Firstly, they can detect the edges of the PO, and secondly, samples need not be stained [2, 4, 5]. However, DIC detects the edges of samples in only one direction. Hence, the vortex filter employed in \(4f\) systems has attracted attention as it is a simple and efficient method to perform isotropic and anisotropic edge-enhanced imaging of biological and medical samples [6, 7, 8]. Edge enhancement techniques have been widely employed in image processing [9, 10], biological imaging [11, 12, 13], the medical field [14], and fingerprint identification [15]. In general, edge detection is conducted by executing a Hilbert transformation on the object [16, 17, 18] using an optical vortex filter to yield isotropic or anisotropic edge enhancement [19, 20]. Recently, studies on edge enhancement of amplitude and phase objects using spiral phase plates have been demonstrated in [21, 22, 23, 24, 25, 26, 27, 28]. An anisotropic edge enhancement has been demonstrated in [7, 8, 12, 13, 22, 25] by changing the topological charge to non-integer values and shifting the center of spiral phase filters. Furthermore, it was revealed that anisotropic edge enhancement can be realized using a superposition of two spiral phase filters [21, 15]. A spatial light modulator (SLM) [6, 8, 21, 28] is placed at the Fourier plane to generate a spiral phase filter to enhance the object's edges. However, an SLM makes the overall system bulky and limits the resolution [29]. Other vortex filters, such as the vortex phase plate (VPP) available as an RCP photonics product [30, 31], are also used in the Fourier plane for edge enhancement. The vortex filter from an SLM or a VPP renders the \(4f\) system a scalar vortex filtering (SVF) unit.
In recent years, vectorial vortex filtering (VVF) has been studied due to its capability of enhancing phase-amplitude objects (PAOs) [32]. VVF can be generated using a space-variant birefringent optical element such as a q-plate [32, 33, 34] or a spatially variable half-waveplate (s-wave plate) [35, 36] placed at the Fourier plane of a \(4f\) system. A q-plate filter is needed as the input-polarization-sensitive element that may generate both scalar and vector vortex filtering [34]. Moreover, q-plates have the advantage of a high conversion efficiency of \(>97\%\) [37, 38, 39] compared to other vortex filters. In this paper, a q-plate is used as a filter in a \(4f\) system to enhance the edges of complex PAOs. In earlier studies, q-plate or s-waveplate filters have been used to enhance the edges of simple disk intensity objects [33], a circular aperture [34, 35], an amoeba [32], USAF resolution charts as an AO, and sprinkled water spots on a glass plate as a transparent PO [36]. Among these studies, [32] observed the edge enhancement of a PAO, but no detailed analysis of the PO and AO present in the PAO was conducted. We observed a PAO using both VVF and SVF, and focused on the PO and AO present in the PAO. The VVF could continuously highlight the edges of the PAO and implement the isolation of PO edges from AO edges, compared to SVF, which resulted in discontinuities in the PO edge of the PAO. Through numerical simulation, we verified the experimental results for VVF and SVF. Although the VVF images offer good edge enhancement, the edges of the PO and AO in the PAO were indistinguishable because both edges were enhanced equally. Considering this drawback, we calculated and simulated the isolation of the PO edge from the AO edge using an off-axis beam. This paper is organized as follows: Section 2 presents the edge enhancement using a q-plate mathematically. Both SVF and VVF achieved by the q-plate filter are explained. Section 3 presents the experimental results of simple AO, PO, and PAO (biological cell). Next, Section 4 discusses the simulation supporting the theoretical and experimental results of the PAO, in addition to discussing how the edge of the PO can be isolated from the edge of the AO using off-axis beam illumination in the VVF system. ## 2 Theoretical Analysis The typical BFM technique uses an objective and a tube lens to magnify the sample. If we add another \(4f\) system, as depicted in Fig. 1, to bright field microscopy, it is called optical vortex microscopy. The \(4f\) system comprises three planes: the object \(f_{\text{in}}(\mathbf{r}_{\perp})\), Fourier \(\mathbf{H}(\mathbf{k}_{\perp})\), and image \(I_{\text{out}}(\mathbf{r}_{\perp})\) planes. Here \(\mathbf{r}_{\perp}=(x,y)\) represents the two-dimensional position vector and \(\mathbf{k}_{\perp}=(k_{x},k_{y})\) is the two-dimensional transverse wavenumber vector. The transmission function of the q-plate [40, 41] with retardance \(\pi\) radians in polar coordinates is given by the Jones matrix as shown below. \[\mathbf{H}_{q}(\theta)=\text{i}\begin{pmatrix}\cos(2q\theta)&\sin(2q\theta)\\ \sin(2q\theta)&-\cos(2q\theta)\end{pmatrix}, \tag{1}\] where \(m=2q\) denotes the topological charge of the q-plate. We used the value of \(q=1/2\) to have a topological charge of 1. The q-plate will generate optical angular momentum with right (or left) circularly polarized light for left (or right) circularly polarized input [38, 39] according to Eq. 1. The transmission function of the q-plate in Cartesian coordinates derived from Eq.
1 in the Fourier domain by setting \(q=1/2\) is shown in the following equation. \[\mathbf{H}_{1/2}(\mathbf{k}_{\perp})=\frac{\text{i}}{k_{r}}\begin{pmatrix}k_{x}&k_{y}\\ k_{y}&-k_{x}\end{pmatrix}, \tag{2}\] where \(k_{r}=\sqrt{{k_{x}}^{2}+{k_{y}}^{2}}\), and \(k_{x}\) and \(k_{y}\) are the spatial angular frequencies. The Jones vector of the input polarized light at the angle \(\alpha\) is given by \(\mathbf{P}_{\text{in}}=\left(\begin{smallmatrix}\cos(\alpha)\\ \sin(\alpha)e^{{\rm i}\tau}\end{smallmatrix}\right)\), where \(\tau\) is the phase difference and \(\alpha\) is the azimuth angle of the state of polarization. The polarized light \(\mathbf{P}_{\text{in}}\) illuminates the object \(f_{\text{in}}(\mathbf{r}_{\perp})\), and the Fourier transform occurs at L\({}_{1}\). The object spectrum of the Fourier transform is given by \(F_{\rm in}(\mathbf{k}_{\perp})\mathbf{P}_{\rm in}\), where \(F_{\rm in}(\mathbf{k}_{\perp})\) is the Fourier transform of the input object. When the object spectrum reaches the filter plane, it multiplies with the transmission function of the filter \(\mathbf{H}_{1/2}(\mathbf{k}_{\perp})\) and gives the modulated spectrum as \(F_{\rm in}(\mathbf{k}_{\perp})\mathbf{H}_{1/2}(\mathbf{k}_{\perp})\mathbf{P}_{\rm in}\). Then, the modulated spectrum undergoes inverse Fourier transform \(F^{-1}\) at L\({}_{2}\) as below: \[\mathbf{f}_{\rm out}(\mathbf{r}_{\perp})=F^{-1}\{F_{\rm in}(\mathbf{k}_{\perp})\mathbf{H}_{1/2}(\mathbf{k}_{\perp})\mathbf{P}_{\rm in}\}=f_{\rm in}(\mathbf{r}_{\perp})*\mathbf{h}(\mathbf{r}_{\perp}), \tag{3}\] where \(*\) represents the convolution and \(\mathbf{h}(\mathbf{r}_{\perp})=F^{-1}\{\mathbf{H}_{1/2}(\mathbf{k}_{\perp})\mathbf{P}_{\rm in}\}\), the inverse Fourier transform of \(\mathbf{H}_{1/2}(\mathbf{k}_{\perp})\mathbf{P}_{\rm in}\), represents the point spread function of the optical system with the q-plate as the Fourier filter. A graphical representation of edge enhancement using the convolution of Eq. 3 is illustrated in Fig. 2. It demonstrates that the input object \(f_{\rm in}(\mathbf{r}_{\perp})\) is convolved with the kernel \(\mathbf{h}(\mathbf{r}_{\perp})\): each neighborhood of \(f_{\rm in}(\mathbf{r}_{\perp})\) is multiplied with \(\mathbf{h}(\mathbf{r}_{\perp})\) and summed, resulting in edge detection in regions with different phases or intensities. Eq. 3 is simplified further using the Fourier transform property known as differentiation in the time domain, \(\omega F(\omega)\leftrightarrow-{\rm i}\frac{df(x)}{dx}\). By applying the differentiation-in-time-domain property to Eq. 3, we get \(k_{x}F_{\rm in}(k_{x})=-{\rm i}\partial_{x}f_{\rm in}(x)\), whereby the image \(f_{\rm out}(\mathbf{r}_{\perp})\) is obtained with the convolution term \(1/r\), where \(1/r\) is the inverse Fourier transform of \(1/k_{r}\). \[\mathbf{f}_{\rm out}(\mathbf{r}_{\perp})=\left[\cos(\alpha)\nabla_{\perp}f_{\rm in}(\mathbf{r}_{\perp})+\sin(\alpha)e^{{\rm i}\tau}R(\pi/2)\nabla_{\perp}f_{\rm in}(\mathbf{r}_{\perp})\right]*\frac{1}{r}, \tag{4}\] where \(R(\pi/2)\) is the rotational matrix, \(R(-\psi)\), with \(\psi=90^{\circ}\) and \(\mathbf{\nabla}=(\frac{\partial}{\partial x},\frac{\partial}{\partial y})\) is the two-dimensional partial differential (gradient) operator on the \((x,y)\)-plane. Eq. 4 is the general solution for the q-plate filter.
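Equation 3 is straightforward to prototype numerically. The following Python/NumPy sketch is our own illustrative implementation (the test object, grid size, and parameter values are assumptions, not the settings used in the experiments): it applies the \(q=1/2\) filter of Eq. 2 in the Fourier plane and returns the output intensity for a chosen input Jones vector, so the same function reproduces the VVF and SVF cases discussed next.

```python
import numpy as np

def qplate_4f(f_in, p_in):
    """Sketch of Eq. 3: Fourier-filter the complex field f_in (illuminated with
    Jones vector p_in) by the q = 1/2 plate of Eq. 2, and return the output
    intensity |E_x|^2 + |E_y|^2."""
    ny, nx = f_in.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    kr = np.sqrt(kx**2 + ky**2)
    kr[0, 0] = 1.0                        # avoid division by zero at the DC term
    Fx = np.fft.fft2(f_in) * p_in[0]      # spectrum of each Jones component
    Fy = np.fft.fft2(f_in) * p_in[1]
    # Apply H_{1/2}(k) = (i/kr) [[kx, ky], [ky, -kx]]  (Eq. 2)
    Gx = 1j / kr * (kx * Fx + ky * Fy)
    Gy = 1j / kr * (ky * Fx - kx * Fy)
    Ex, Ey = np.fft.ifft2(Gx), np.fft.ifft2(Gy)
    return np.abs(Ex)**2 + np.abs(Ey)**2

# Toy phase-amplitude object (assumed shapes and values, for illustration only)
y, x = np.mgrid[-128:128, -128:128]
amplitude = 1.0 - 0.5 * ((np.abs(x - 40) < 30) & (np.abs(y) < 60))   # AO: dark rectangle
phase = 1.5 * ((x**2 + (y + 30)**2) < 30**2)                          # PO: phase disk
f_in = amplitude * np.exp(1j * phase)

I_vvf = qplate_4f(f_in, p_in=np.array([1.0, 0.0]))                    # linear input -> VVF
I_svf = qplate_4f(f_in, p_in=np.array([1.0, 1j]) / np.sqrt(2.0))      # circular input -> SVF
```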
Figure 1: The \(4f\) imaging system with on-axis q-plate. Figure 2: Graphical representation of edge enhancement using an optical vortex filter.

### Scalar Vortex Filtering For SVF, the input light should be circularly polarized, and we consider right or left circular polarization, \(\mathbf{P}_{\rm cir}=\frac{1}{\sqrt{2}}\left(\begin{smallmatrix}1\\ \pm i\end{smallmatrix}\right)\). The general Jones vector, \(\mathbf{P}_{\rm in}\), should have a phase difference \(\tau=\pm 90^{\circ}\) and an azimuth angle of state of polarization of \(45^{\circ}\) to become \(\mathbf{P}_{\rm cir}\). Applying \(\mathbf{P}_{\rm cir}\) to Eq. 4 and taking the absolute square value, the edge-enhanced output intensity, \(I_{\rm SVF}(\mathbf{r}_{\perp})\), is obtained. The edge-enhanced output intensity is an approximate value as the convolution term is omitted. \[I_{\rm SVF}(\mathbf{r}_{\perp})\approx\frac{1}{2}\left|\mathbf{\nabla}_{\perp}f_{\rm in}(\mathbf{r}_{\perp})\pm{\rm i}R(\pi/2)\nabla_{\perp}f_{\rm in}(\mathbf{r}_{\perp})\right|^{2}. \tag{5}\] Eq. 5 can enhance the edges of the AO and PO [6]. For complex PAOs, we assume \(f_{\rm in}(\mathbf{r}_{\perp})=A(\mathbf{r}_{\perp}){\rm Exp}[{\rm i}B(\mathbf{r}_{\perp})]\) with the amplitude function \(A(\mathbf{r}_{\perp})\) and phase function \(B(\mathbf{r}_{\perp})\). Substituting \(f_{\rm in}(\mathbf{r}_{\perp})\) in Eq. 5 and solving yields \[I_{\rm SVF}(\mathbf{r}_{\perp})=|\nabla A(\mathbf{r}_{\perp})|^{2}+A(\mathbf{r}_{\perp})^{2}|\nabla B(\mathbf{r}_{\perp})|^{2}\mp 2A(\mathbf{r}_{\perp})\left(\nabla B(\mathbf{r}_{\perp})\cdot R(\pi/2)\nabla A(\mathbf{r}_{\perp})\right). \tag{6}\] In Eq. 6, the first and second terms on the right hand side give the edges of the AO and PO, respectively. However, the third term on the RHS is a mixture of edges for both amplitude and phase, which contributes to the previous two terms. Depending on whether the gradient term is positive, \((\nabla B(\mathbf{r}_{\perp})\cdot R(\pi/2)\nabla A(\mathbf{r}_{\perp}))>0\), or negative, \((\nabla B(\mathbf{r}_{\perp})\cdot R(\pi/2)\nabla A(\mathbf{r}_{\perp}))<0\), the third term will add (or reduce) edge intensity to (or from) the first and second terms. Therefore, the complex PAO edge is not separated. ### Vector Vortex Filtering Linearly polarized input light is required to achieve q-plate filtering as VVF. We consider horizontal linear polarization, \(\mathbf{P}_{\rm lin}=\left(\begin{smallmatrix}1\\ 0\end{smallmatrix}\right)\), to calculate the edge-enhanced output intensity. With a phase difference of \(\tau=180^{\circ}\) and an azimuth angle of state of polarization of \(0^{\circ}\), the general Jones vector, \(\mathbf{P}_{\rm in}\), will transform to \(\mathbf{P}_{\rm lin}\). Substituting \(\mathbf{P}_{\rm lin}\) in Eq. 4, taking the absolute square value and omitting the convolution term, the edge-enhanced output intensity, \(I_{\rm VVF}(\mathbf{r}_{\perp})\), is obtained as follows. \[I_{\rm VVF}(\mathbf{r}_{\perp})\approx|\mathbf{\nabla}_{\perp}f_{\rm in}(\mathbf{r}_{\perp})|^{2}. \tag{7}\] Similar to SVF, Eq. 7 of the VVF can enhance the edges of the AO and PO [36]. The same complex PAO, \(f_{\rm in}(\mathbf{r}_{\perp})\), used in the SVF case was considered. Substituting \(f_{\rm in}(\mathbf{r}_{\perp})\) into Eq. (7) and solving, the following equation is obtained. \[I_{\rm VVF}(\mathbf{r}_{\perp})=|\mathbf{\nabla}A(\mathbf{r}_{\perp})|^{2}+|A(\mathbf{r}_{\perp})\mathbf{\nabla}B(\mathbf{r}_{\perp})|^{2}. \tag{8}\] As Eq.
(8) shows, both the amplitude and phase of an object's edge can be enhanced and are separated from each other's edges. There is no other term contributing to AO and PO edges such as in SVF. ## 3 Experimental setup and Results Fig. 3 shows a schematic of the experimental setup of the \(4f\) imaging system with a q-plate to enhance the edge of an object. The setup can serve for both VVF and SVF. The first \(4f\) system is typical BFM, comprising an objective lens (focal length, \(f_{\rm OB}=18\)mm) with a magnification factor of 10 and a tube lens (\(f_{\rm TL}=180\)mm). The second \(4f\) system includes a q-plate (retardance \(\pi\), and topological charge 1) located at the Fourier plane, between the midpoint of rear focal length (\(f_{1}=300\)mm) of Lens 1 and front focal length (\(f_{2}=250\)mm) of Lens 2. We used commercially available q-plates identified as zero-order vortex half-wave retarders (Thorlabs, WPV10L-633). A quarter waveplate (QWP) known as zero-order quarter-wave plate (Thorlabs, WPQ10M-633) was placed before the q-plate. At the image plane, that is at the rear focal length of Lens 2, a complementary metal-oxide-semiconductor (CMOS) camera (Thorlabs, DCC1645C), was placed. The CMOS has a imaging area \(4.6\mathrm{mm}\times 3.7\mathrm{mm}\), array format \(1280\mathrm{H}\times 1024\mathrm{V}\), and pixel size of \(3.6\mathrm{\ \upmu m}\times 3.6\mathrm{\ \upmu m}\). When the QWP is placed at \(0^{\circ}\), the system becomes VVF. The illumination beam is provided by a laser diode (Thorlabs, HLS635) with a wavelength \(635\mathrm{\ nm}\). A polarizing beam splitter (PBS) is used to produce the horizontal linearly polarized light to illuminate the sample. The illuminated object passes through the first \(4f\) system. Then, the beam carries the magnified object incident on the second \(4f\) system. When a magnified object is transmitted via the second \(4f\) system, it is Fourier transformed at Lens 1 and its spectrum is multiplied with the q-plate function at the Fourier plane. Then, the modulated object spectrum undergoes inverse Fourier transform at Lens 2 and the image is recorded by a CMOS camera. The image obtained is the reconstructed image of the target object after spatial filtering, where only the edges are highlighted. In order to change the system to SVF, the fast axis of the QWP was oriented at \(45^{\circ}\). The horizontal linearly polarized light is changed to right circularly polarized light by the QWP. The working system remained same with VVF. We placed a circular aperture at the object plane of the second \(4f\) imaging system to increase the resolution of POs. ### Experimental Results of simple AO and PO To demonstrate our theory, we used commercially available AOs from Thorlabs, Inc. called R1DS1N-Negative 1951 USAF Test Targets. We also used a PO from Benchmark Technologies identified as Quantitative Phase Target (refractive index of 1.52). We used distinct feature heights of the PO of 100, 150, 200, 250, 300, and 350 nm with the phase difference ranging from 0.56 to 1.99 radian. The AO and PO are shown in Figs. 4 (a) and (b), respectively. First, the AO was used to conduct the experiment. For an easy comparison, the q-plate was removed first to obtain the object image, as shown in Fig. 5 (a). Then, the q-plate was inserted at the Fourier plane of the \(4f\) imaging system in the experimental setup to obtain the object image, as shown in Fig. 5 (b). 
Figure 4: Simple Target object, (a) R1DS1N-Negative 1951 USAF Test Targets, amplitude object and (b) Quantitative Phase Target, phase object. Figure 3: The sketch of the experimental setup. PBS is Polarizing Beam Splitter; OL is objective lens; TL is tube lens; L1 and L2 are Fourier lenses; QWP is quarter wave plate; CMOS is the Complementary Metal-Oxide-Semiconductor camera. \(f_{\mathrm{OB,TL,1,2}}=18,180,300,250\mathrm{mm}\), \(f_{\mathrm{OB}}\) and \(f_{\mathrm{TL}}\) are focal lengths of OL and TL. \(f_{1}\) and \(f_{2}\) are the focal lengths of L1 and L2. For a quantitative comparison of the quality of the filtered image, the square portion in Fig. 5 (a) was extracted and numerically simulated with the q-plate for ideal edge enhancement, which is shown in Fig. 5 (c). Similarly, the wrapped square portion in Fig. 5 (b) was extracted and shown in Fig. 5 (d). The correlation coefficient between Figs. 5 (c) and (d) is 0.919, which indicates that edge enhancement using q-plate is ideal. The horizontal cross-sectional intensity distribution plotted in 5 (e) has green, blue, and red curves that correspond to the pixel-value of the white dashed line of Figs. 5 (a) (wrapped square portion), (d) and (c), respectively. The green curve in Fig. 5 (e) shows that the edge of the object is not detected when no q-plate is used, compared to the blue curve when a q-plate is used. According to Fig. 5 (e), it also suggests edges of experimental results with q-plate, blue curve, and ideal edge, red curve are strongly correlated. However, the edge is wider in the experimental result with the q-plate than the ideal edge, and the intensity inside the square is less in the ideal edge than that in the experimental result. This is due to the presence of the convolution term (\(1/\mathbf{r}_{\perp}\), refer Eq. 4) in the experimental result, which is not present in ideal edge enhancement. Some of the distinctive attributes for good edge enhancement are sharper edge intensity and intensity approximately zero inside the object. Secondly, the PO was used as a target. The experimental results shown in Figs. 6 (a), and (b) are the edge enhancement of PO (height 350 nm) without and with the q-plate, respectively. In an effort to quantify the edge enhancement, we used the same ideal edge from Fig. 5 (c) to compare with the edge enhancement of PO 350-nm, because for the same function and dimension of AO and PO, they produce the same edge enhancement. Figs. 6 (c) and (d) are extracted from the wrapped square portion of Figs. 6 (a) and (b), respectively. The horizontal cross-sectional intensity distribution in Fig. 6 (e) is plotted along the white dashed line of Fig. 6 (c) (green curve), Fig. 6 (d) (blue curve), and ideal edge of Fig. 5 (c) (red curve). Fig. 6 (f) shows the edge of PO 350-nm without q-plate (green curve) is not enhanced clearly compared to blue curve which used a q-plate. The Fig 7 shows the correlation coefficient between the ideal edge Fig. 5 (e) and PO for different heights. As shown in Fig. 7, the correlation coefficient increases as the height increases from 100 to 350 nm, implying that the edge enhancement is better at 350 nm (correlation coefficient: 0.909) than that at 100 nm (correlation coefficient: 0.860). This is because the phase difference at a height of 100 nm, which is 0.56 rad, is lower than the phase difference at 350 nm, which is 1.99 rad. The lower the phase difference, flatter is the slope/gradient of the object, and hence, the \(4f\) system does not enhance the edge clearly. 
Figure 5: (a) and (b) Edge enhancement without and with q-plate, respectively, (c) ideal edge enhancement, (d) extracted portion of square from Fig. 5 (b), and (e) horizontal cross-sectional intensity distribution along white dashed line of Figs. 5 (a) (green curve), (d) (blue curve), and (c) (red curve). ### Observation of an onion cell SVF is capable of enhancing the edges of AOs and POs. However, as per our calculation, Eq. 6, if a sample is a complex PAO, it adds or reduces the size of edges due to third term on the right hand side of Eq. 6. From Eq. 8, i.e., VVF, there is no extra third term such as in SVF that adds or reduces the size of edges of PAO. Now, in order to verify our theoretical calculation, we changed the sample to an onion cell, which contains a complex PAO. The transparent cell nucleus and an oblique cell wall in onion cell are the PO and AO, respectively. Fig. 8 (a) shows the observation results of onion cells without a q-plate. Figs. 8 (b) and (c) show the edge enhancement using VVF and SVF, respectively. The cell wall, which is the AO, can be seen without the q-plate in Fig. 8 (a), but the edge of the cell nucleus, which is the PO, are difficult to see. However, in Figs. 8 (b) and (c), both the edges of the cell wall and cell nucleus can be seen after inserting the q-plate. Due to the absolute square value of the partial derivative on the cell wall, we can see two lines on the cell wall as the edge in Figs. 8 (b) and (c). The portion wrapped with white dashed square in Figs. 8 (b) and (c) are the edges of cell nucleus continuously highlighted using VVF (Fig. 8 (b)); however, the edge is not continuously highlighted using SVF (Fig. 8 (c)). A surface plot in Figs. 8 (d) and (e) shows the phase edge profile from the portion wrapped Figure 6: (a) and (b) Edge enhancement of PO 350-nm without and with a q-plate, (c) extracted portion of square from Fig. 6 (a), (d) extracted portion of square from Fig. 6 (b) and (e) horizontal cross-sectional intensity distribution along white dashed line of Fig. 6 (c) (green curve), Fig. 6 (d) (blue curve), and Fig. 5 (c) (red curve). Figure 7: Correlation between ideal edge enhancement and edge enhancement for PO for different heights. with white dashed square in Figs. 8 (b) and (c), where the z-axis corresponds to the intensity. The surface plot indicates that the intensity of the phase edge is enhanced by a 4_f_system with a q-plate, because the intensity is higher than that of the surrounding area. If we compare two surface plots, we can see the highest intensity is almost continuous in Fig. 8 (d) compared to that is Fig. 8 (e), in which the highest intensity is broken at the bottom side. The discontinuity of the edges in SVF image is because of the contribution of the positive and negative gradient from the third term on the right hand side of Eq. 6. In order to support our theory and experimental results, we used the numerical simulation described in Section 4.1. ## 4 Discussion In the first part, we will discuss the simulation result obtained to support our theoretical and experimental results on the onion cell. Then, in the second part, we will demonstrate the separation of the PO edge from the AO edge through theoretical calculation and numerical simulation. ### Evidence to support our theoretical calculation and experimental results on the onion cell We prepared a sample similar to an onion cell for simulation. In order to set up simulation conditions, we selected the portion, wrapped in white dashed line of Fig. 
8 (b) and collected the information. The radius of the cell nucleus and thickness of the cell wall were 14 and 5, respectively. The distance from the centre of cell nucleus to the centre of the cell wall was 27. With this information, we prepared a PO with radius 14 using a super Gaussian function of order 5. Similarly, an AO with thickness 5 using a one dimensional super Gaussian function of order 5 was prepared. Then, the PO was multiplied with the shifted AO by 27 to make the PAO. Figs. 9 (a) and (b) shows the PAO and its vertical cross-sectional distributions, respectively. The ideal edge enhancement of PAO is depicted in Fig. 9 (c). Figs. 9 (d), and (e) represents edge enhancement of the PAO using VVF and SVF, respectively. Figure 8: Experimental results of onion cell, (a) without q-plate, (b) VVF image, (c) SVF image, and (d) and (e) surface plot of the phase edge of wrapped portion in Figs. 8 (b) and (c). It is obvious from Fig. 9 (d) that the edge of the PO is continuously highlighted compared to Fig. 9 (e), where the intensity decreases at the bottom-right hand side compared to that on the left hand side. This is because as per Eq. 6, the extra third term from the negative gradient, \(\left(\partial_{x}B(r_{\perp})\partial_{y}A(r_{\perp})-\partial_{x}A(r_{\perp}) \partial_{y}B(r_{\perp})\right)<0\), reduces the intensity from the first and second terms. In order to determine the correlation coefficient, only the cell nucleus were selected as shown in Fig. 9 (wrapped with white dotted square for ideal, VVF, and SVF images). The blue scatter in Fig. 10 represent the correlation coefficient between the ideal PAO edge and PAO edges obtained by VVF for varying distance between the cell nucleus and cell wall. Similarly, the orange scatter shows the correlation between the ideal PAO edge and PAO edges obtained by SVF for varying distance between the cell nucleus and cell wall. At 20 \(\upmu\)m, i.e., the distance between the center of the cell nucleus and wall, the correlation coefficient is minimum for both VVF and SVF. Figure 10: Correlation coefficient: Blue scatter shows correlation between ideal PAO edge and PAO edges obtained by VVF, and orange scatter shows correlation between ideal PAO edge and PAO edges obtained by SVF. Figure 9: (a) Distribution of PAO, (b) vertical cross-sectional distributions of the PAO, (c) ideal edge enhancement of PAO, (d) edge enhancement of PAO with VVF, and (e) edge enhancement of PAO with SVF. This is because at the minimum distance, the edges of the cell nucleus and wall overlap, as can be seen in Fig. 10. However, the correlation coefficient remains approximately the same for VVF (slope:0.01), whereas the correlation coefficient increases as the distance increases for SVF (slope:0.04). This indicates that as the cell nucleus (PO) moves away from cell wall (AO), the interference from AO to PO minimized. From the experimental results of the onion cell and simulation result of VVF images, the edges of the PO and AO present in the PAO are enhanced equally, making it difficult to distinguish the two. Therefore, we present a numerical calculation and simulation to eliminate the AO edge from that of the PAO. ### Isolation of PO from AO In order to eliminate the AO from PAO, we use an off-axis beam, which deviates from the propagation direction \(z\). 
The deviated beam on the \(xz\) and \(yz\) plane at angle, \(\beta\ll 1\), gives extra phase on the propagation plane wave as \(\exp\left(\mathrm{i}\beta\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}\right)\exp\left( \mathrm{i}kz\right)\), where \(k\) is the wavenumber, and \(\mathbf{k}_{\perp}=\left(k_{x},k_{y}\right)=\beta k\mathbf{e}_{\perp}\) is the two-dimensional transverse wavenumber vector with \(\mathbf{e}_{\perp}\) being a two-dimensional unit vector describing the transverse component of propagation direction. The extra phase \(\exp\left(\mathrm{i}\beta\mathbf{k}_{\perp}\cdot\mathbf{r}_{\perp}\right)\) will contributes to PO edge at the output intensity, \(I\), as shown below. \[I(\mathbf{r}_{\perp},\mathbf{e}_{\perp},\beta)\approx\left|\mathbf{\nabla}\left[f_{\mathrm{ in}}(\mathbf{r}_{\perp})\mathrm{e}^{\mathrm{i}\beta k\mathbf{e}_{\perp}\cdot\mathbf{r}_{ \perp}}\right]\right|^{2}=\left|\mathbf{\nabla}A(\mathbf{r}_{\perp})\right|^{2}+A(\mathbf{r }_{\perp})^{2}\left|\mathbf{\nabla}B(\mathbf{r}_{\perp})+\beta k\mathbf{e}_{\perp}\right|^ {2}. \tag{9}\] The first and second terms on the right hand side of Eq. 9 correspond to the AO edge, and the PO edge with the contribution of the off-axis beam, respectively. Since only the second term contains the contribution of the off-axis beam in Eq. 9, the first term (AO edge) can be eliminated by the following calculation for four light waves with different propagational directions: \[\begin{split} J(\mathbf{r}_{\perp},\beta)&\equiv\left\{ I(\mathbf{r}_{\perp},\mathbf{e}_{x},\beta)-I(\mathbf{r}_{\perp},\mathbf{e}_{x},-\beta)\right\}^{2}+ \left\{I(\mathbf{r}_{\perp},\mathbf{e}_{y},\beta)-I(\mathbf{r}_{\perp},\mathbf{e}_{y},-\beta) \right\}^{2}\\ &\approx\left|4\beta kA(\mathbf{r}_{\perp})^{2}\mathbf{\nabla}B(\mathbf{r}_{ \perp})\right|^{2},\end{split} \tag{10}\] where \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\) are unit vectors along the \(x\) and \(y\) axes, respectively. Two pairs of off-axis beams with positive and negative angles along \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\) can generate \(x\)- and \(y\)-directional edge Figure 11: (a) Edge enhancement of PAO with normal incident beam, (b) ideal edge enhancement of the PO, (c) isolated edge enhancement of the PO with off-axis beam, and (d) vertical cross-sectional logarithmic intensity distributions for ideal PO edge, normal incident beam, and off-axis beam. detection of the PO, and they are combined to produce two-dimensional edge enhancement, resulting in isolation of the PO edge from the PAO. As a numerical demonstration, we considered the same sample described in Section 4.1 and Fig. 9 (a), except the distance between cell nucleus and cell wall was reduced from 27 \(\upmu\)m to 25 \(\upmu\)m. This is because we want the cell nucleus and cell wall to be sufficiently close. The edge enhancement with normal incident beam, \(\beta=0^{\circ}\), and ideally isolated edge-enhanced image of the PO is shown in Figs. 11 (a) and (b), respectively. The AO edge observed at the normal incident light as shown in Fig. 11 (a) could be eliminated according to Eq. 10. The isolated edge enhancement of the PO from AO using an off-axis beam is shown in Fig. 11 (c). Here, we assumed the off-axis beam \(\beta=0.1^{\circ}\) and wavelength \(\lambda=2\pi/k=635\) nm. As can be seen from the logarithmic intensity of the horizontal cross-sectional distribution shown in Fig. 11 (c), the AO edge was eliminated using the off-axis beam case (red curve) compared to the normal incident (green curve). 
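The isolation procedure of Eqs. 9 and 10 lends itself to a short numerical sketch. The script below builds a phase-amplitude object from super-Gaussian profiles, loosely following the sample of Section 4.1, evaluates the gradient-based intensity of Eq. 9 for four tilted illuminations, and combines them according to Eq. 10. The grid size, field of view, object dimensions and phase step are illustrative choices rather than the exact simulation parameters.

```python
import numpy as np

# Illustrative sampling: 512 x 512 points over a 100 um x 100 um field (assumed values).
N, field = 512, 100e-6
x = np.linspace(-field / 2, field / 2, N)
X, Y = np.meshgrid(x, x, indexing="xy")
wavelength = 635e-9
k = 2.0 * np.pi / wavelength

def super_gaussian(r, radius, order=5):
    """Flat-topped profile used to build the simulated objects."""
    return np.exp(-(r / radius) ** (2 * order))

# Phase object (cell nucleus): circular phase bump of ~2 rad.
B = 2.0 * super_gaussian(np.hypot(X, Y), 14e-6)
# Amplitude object (cell wall): an absorbing stripe displaced 25 um from the nucleus.
A = 1.0 - 0.8 * super_gaussian(np.abs(Y - 25e-6), 2.5e-6)
f_in = A * np.exp(1j * B)

def gradient_intensity(f):
    """|grad f|^2, the approximation of Eq. 9 evaluated on the grid."""
    d0, d1 = np.gradient(f, x, x)
    return np.abs(d0) ** 2 + np.abs(d1) ** 2

def tilted_intensity(beta, ex, ey):
    """Eq. 9 for an off-axis beam tilted by angle beta along direction (ex, ey)."""
    return gradient_intensity(f_in * np.exp(1j * beta * k * (ex * X + ey * Y)))

beta = np.deg2rad(0.1)
# Eq. 10: differences of +/- tilts along x and along y suppress the amplitude-object edge.
J = (tilted_intensity(beta, 1, 0) - tilted_intensity(-beta, 1, 0)) ** 2 \
    + (tilted_intensity(beta, 0, 1) - tilted_intensity(-beta, 0, 1)) ** 2

print("peak of normal-incidence edge image:", gradient_intensity(f_in).max())
print("peak of isolated PO edge (Eq. 10):  ", J.max())
```

Because the amplitude-object edge enters Eq. 9 only through the tilt-independent term \(|\mathbf{\nabla}A|^{2}\), it cancels in the differences of Eq. 10, leaving only the phase-object edge.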
The minimum intensity ratio between the AO edge that uses the normal incident beam (green curve) and off-axis beam (red curve) is -23dB, which is suitable for edge reduction. ## 5 Conclusion In this paper, edge enhancement of a PAO using a \(4f\) system with a q-plate was calculated theoretically and verified experimentally. The q-plate filter generates SVF or VVF by using different polarization states of the illumination beam. The experimental results of an edge enhanced onion cell using SVF and VVF indicate the validity of the theory and method. We quantitatively compare between the cell nucleus edge between SVF and VVF images. Furthermore, our numerical simulation results obtained at similar conditions as for the experimental setup agree well with the experimental results. We also proposed a new method to isolate the PO edge from the AO edge using off-axis beam illumination theoretically, and verified it via numerical simulation. The proposed isolation of PO edge could be used in microscopy research or biological edge detection, and we will conduct a proof-of-principle experiment in a future study. ## Funding This work has been partially supported by the Research Foundation for Opt-Science and Technology and partially by JSPS KAKENHI (grant no. 20K05364 and 18KK0079). ## Disclosures The authors declare no conflicts of interest.
2304.10470
3D hydrodynamic simulations of massive main-sequence stars. III. The effect of radiation pressure and diffusion leading to a 1D equilibrium model
We present 3-D hydrodynamical simulations of core convection with a stably stratified envelope of a 25 M$_\odot$ star in the early phase of the main-sequence. We use the explicit gas-dynamics code PPMstar which tracks two fluids and includes radiation pressure and radiative diffusion. Multiple series of simulations with different luminosities and radiative thermal conductivities are presented. The entrainment rate at the convective boundary, internal gravity waves in and above the boundary region, and the approach to dynamical equilibrium shortly after a few convective turnovers are investigated. We perform very long simulations on $896^3$ grids accelerated by luminosity boost factors $1000$, $3162$ and $10000$. In these simulations the growing penetrative convection reduces the initially unrealistically large entrainment. This reduction is enabled by a spatial separation that develops between the entropy gradient and the composition gradient. The convective boundary moves outward much more slowly at the end of these simulations. Finally, we present a 1-D method to predict the extent and character of penetrative convection beyond the Schwarzschild boundary. The 1-D model is based on a spherically-averaged reduced entropy equation that takes the turbulent dissipation as input from the 3-D hydrodynamic simulation and takes buoyancy and all other energy sources and sinks into account. This 1-D method is intended to be ultimately deployed in 1-D stellar evolution calculations and is based on the properties of penetrative convection in our simulations carried forward through the local thermal timescale.
Huaqing Mao, Paul Woodward, Falk Herwig, Pavel A. Denissenkov, Simon Blouin, William Thompson, Benjamin McDermott
2023-04-20T17:17:42Z
http://arxiv.org/abs/2304.10470v2
D hydrodynamic simulations of massive main-sequence stars. III. The effect of radiation pressure and diffusion leading to a 1D equilibrium model ###### Abstract We present 3D hydrodynamical simulations of core convection with a stably stratified envelope of a 25 M\({}_{\odot}\) star in the early phase of the main-sequence. We use the explicit gas-dynamics code PPMstar which tracks two fluids and includes radiation pressure and radiative diffusion. Multiple series of simulations with different luminosities and radiative thermal conductivities are presented. The entrainment rate at the convective boundary, internal gravity waves in and above the boundary region, and the approach to dynamical equilibrium shortly after a few convective turnovers are investigated. From the results of these simulations we extrapolate to find the entrainment rate at the nominal heating rate and thermal diffusion given by the MESA stellar evolution model on which the 3D stratification is based. Further, to study the effect of radiative diffusion on the thermal timescale, we perform very long simulations accelerated by 10000 times their nominal luminosities. In these simulations the growing penetrative convection reduces the initially unrealistically large entrainment. This reduction is enabled by a spatial separation that develops between the entropy gradient and the composition gradient. The convective boundary moves outward much more slowly at the end of these simulations. Finally, we present a method to predict the extent and character of penetrative convection beyond the Schwarzschild boundary. This method is intended to be ultimately deployed in 1D stellar evolution calculations and is based on the properties of penetrative convection in our simulations carried forward through the local thermal timescale. Astrophysical fluid dynamics (101) -- Hydrodynamics (1963) -- Hydrodynamical simulations (767) -- Stellar oscillations (1617) -- Stellar interiors (1606) -- Stellar convective zones (301) - Massive stars (732) -- Stellar structures (1631) ## 1 Introduction Convective transport can be very efficient in stellar interiors, owing to the high energy densities there (Kippenhahn et al., 1990). At the convective-radiative boundary, it can play a crucial role in mixing chemical species (e.g. Denissenkov et al., 2012, in novae). Yet convection is one major uncertainty in the 1D stellar evolution model (e.g. Sukhbold and Woosley, 2014; Davis et al., 2018; Kaiser et al., 2020, in massive stars), with a set of parameters to calibrate to match with the observations (e.g. Schaller et al., 1992; Ribas et al., 2000; Trampedach et al., 2014; Tkachenko et al., 2020; Higl et al., 2021). For example, the efficiency of convective boundary mixing (CBM) during the main-sequence directly affects the model's brightness and main-sequence lifetime (Salaris and Cassisi, 2017; Higgins and Vink, 2019). The local theory of convection, mixing-length theory (MLT) formalized by Bohm-Vitense (1958) and Cox and Giuli (1968) is widely used in 1D stellar evolution codes (e.g. Paxton et al., 2010). Other sophisticated theories on convection have also been proposed. For example, Xiong (1986) developed a non-local MLT that indicates penetrative convection. Pasetto et al. (2014) removes the mixing length in their convection theory. A spectrum of turbulent eddies instead of a typical rising blob is considered in Canuto and Mazzitelli (1991). 
Convection is not only an important mechanism to transport energy and species, but also excites internal gravity waves (IGWs) (Lecoanet and Quataert, 2013; Pincon et al., 2016). It is predicted theoretically that radiative diffusion damps travelling IGWs, which carry angular momentum(Rogers and McElwaine, 2017; Aerts et al., 2019). This process leads to deposition of angular momentum where the IGWs are damped, and hence to redistribution of angular momentum (Zahn et al., 1997). Asteroseismological observations help constrain convective boundary mixing and diffusive mixing in the radiative envelope (Moravveji et al., 2015; Michielsen et al., 2019, 2021). Penetrative convection has been investigated in theory and through numerical simulations for decades in various contexts, core convections and shell convections for exapmle (Roxburgh, 1989; Arnett et al., 2015; Anders et al., 2022; Korre and Featherstone, 2021; Blouin et al., 2023). The extent of convective penetration and its dependence on various properties of the Schwarzschild boundary (SB) have been studied (Hurlburt et al., 1994; Baraffe et al., 2021). The temperature gradient in the convective boundary (CB) region may be deduced by asteroseismological observation and modeling (Michielsen et al., 2021). Current treatment of the convective boundary in 1D stellar evolution simulations includes f overshooting (Herwig et al., 2000), instantaneous overshooting (Maeder, 1976) and entrainment (Staritsin, 2013; Scott et al., 2021). In this work, we define the SB to be the location where the rising radiation diffusion energy flux as we go outward in radius in the core convection zone first equals the total luminosity. We find that this is not the location where the entropy gradient first becomes positive and the temperature gradient first becomes subadiabatic, as we will discuss later. Beyond the SB we have a region of penetrative convection leading up to the CB. We here define the CB to be that radius at which the radiative energy flux becomes equal to the total luminosity, the convective entropy flux vanishes, and also the turbulent dissipation of kinetic energy of the convection flow vanishes. Previously, in the first paper of this series, we have introduced the general properties of core-convection simulations of a 25 M\({}_{\odot}\) star approximated with an ideal gas equation of state (Herwig et al., 2023, Paper I). We confirmed earlier results of massive main-sequence star simulations by Meakin and Arnett (2007) and Gilet et al. (2013). These authors reported entrainment rates of envelope material into the convective core that are orders of magnitude larger than what is compatible with stellar models and basic observational properties. Candidate physical mechanisms that may impact the entrainment rate in hydrodynamic simulations include a more realistic thermodynamic stratification, radiative diffusion, rotation and magnetic fields. The properties of IGWs in our 3D PPMstar ideal gas simulations are presented in (Thompson et al., 2023, Paper II). One important aspect of IGWs excited by core convection is the possibility that they may cause material or angular momentum mixing in the radiative layer. Radiative diffusion permits the entropy in the stably stratified envelope to no longer be a constant of the motion. As a consequence, irreversible envelope mixing becomes possible, even though IGW velocity amplitudes are damped by radiative diffusion. 
Our strategy in this paper is to study the impact of radiation pressure and radiative diffusion on the convection zone in our model star and on the structure of the CB region. We will analyze the spectrum of IGWs that are excited at the CB for the purpose of comparison with the studies of Paper I and Paper II in this series, but we will leave the issue of potential material mixing in the envelope to a forthcoming paper. The main goals of this work are as follows: to test whether adopting a more realistic simulation approach which includes radiation pressure and diffusion can reduce the entrainment rate significantly; to study the effect of radiative diffusion on the spectrum of IGWs in the stable envelope; to investigate the stratification of penetrative convection and develop a method to predict the convective penetration depth. The first 3 sections discuss flow phenomena on a short timescale (convective timescale) and the following two sections investigate the growing penetrative convection on a thermal timescale. Finally, we discuss our results and conclusions in the last section. Specifically, in SS2 we present the simulation method, simulation setup, and assumptions. Section SS3 describes the general flow dynamics from the onset of core convection to a 3D quasi-steady state on a convective timescale, introduces the CBM, excitation of IGWs and their power spectra, and discusses the effect of radiative diffusion on the CBM and IGWs along the way. In SS4, the long-time behaviors of stellar stratification and convective penetration are discussed. The gradual development of the penetration region beyond the SB is observed in a very long duration simulation. In this simulation the development of a positive entropy gradient in the penetration region that is sustained despite efficient species mixing is identified as a key structure that acts to bring the intensity of convective motions down, so that further entrainment and outward motion of the convective boundary is greatly reduced. In SS5, a method to predict the penetration depth and the stratification within the penetration region is presented in terms of a 1-D model of the core convection zone that can be worked out if the kinetic energy dissipation rate up to the SB has either been determined from a short 3-D simulation on a modest grid or has been approximated by interpolating between such simulations under similar conditions. We summarize and discuss our main results and conclusions in SS6. ## 2 Methods and Assumptions To study the effect of radiation, we apply the equation of state that includes radiation pressure in addition to that of a monatomic gas. This allows direct application of the MESA(Paxton et al., 2010, 2013, 2015) model with minimal fitting and approximation in going from 1D to 3D initialization. The base state is constructed from the 25 M\({}_{\odot}\)MESA stellar evolution model (Davis et al., 2018)\(1.64\times 10^{6}\)yr after the start of H burning on the zero-age main sequence. The exponential CBM model is used. In this model, the region outside the SB obeys the radiative temperature gradient. Details on the 1D model can be found in Paper I. Fig. 1 shows the agreement of radial profiles of the initial state on the 3D Cartesian grid with the MESA model. We use the PPMstar gas dynamics code described in Woodward et al. (2015) and applied in Woodward et al. (2015); Jones et al. (2017); Andrassy et al. (2020). 
The PPMstar tracks the H rich materials in the stable envelope by fractional volume \(f_{\rm V}\), and materials in the convective core by \(1-f_{\rm V}\). In this version, the contribution of radiation is included in the internal energy per unit mass \(e\), pressure \(p\) and specific entropy \(s\) \[e(\rho,T,\mu) = \frac{RT}{(\gamma-1)\mu}+\frac{aT^{4}}{\rho} \tag{1}\] \[p(\rho,T,\mu) = \frac{R\rho T}{\mu}+\frac{aT^{4}}{3}\] (2) \[s(\rho,T,\mu) = -\frac{R}{\mu}\ln\rho+\frac{R}{(\gamma-1)\mu}\ln T+\frac{4aT^{3}} {3\rho}, \tag{3}\] where \(\mu\) is the mean molecular weight,\(\rho\) density, \(T\) temperature, \(R\) gas constant, \(\gamma=5/3\), \(a\) radiation constant. The technique of model equation of state (Woodward, 1986) is applied, \[p=p_{00}+(\tilde{\gamma}-1)\rho\varepsilon \tag{4}\] the coefficients \(p_{00}\) and \(\tilde{\gamma}\) of which are different in each grid cell, that preserves sound speed \(c_{s}\) and energy density \(\rho\varepsilon\) every time step \[\tilde{\gamma}=1+\frac{c_{s}^{2}\rho}{p+\varepsilon\rho},\ p_{00}=p-(\tilde{ \gamma}-1)\rho\varepsilon. \tag{5}\] The radiative flux, \[\mathbf{F}=-k\nabla T \tag{6}\] is implemented explicitly in PPMstar as a part of the energy flux in every time step update, with radiative thermal conductivity (Kippenhahn et al., 1990) \[k=\frac{4acT^{3}}{3\kappa\rho}, \tag{7}\] where \(\kappa\) is the opacity, \(c\) speed of light. Simulations from M200 to M213 (see Table 1) use the following opacity fit as a function of hydrogen mass fraction and temperature: \[\kappa = \min(\frac{c_{es}10^{\sum_{i=0}^{3}(a_{i}(\log_{10}T)^{3-i})}}{ \kappa_{\rm min}c_{\rm corr}},\kappa_{\rm tot}) \tag{8}\] \[c_{\rm es} = 0.2(1+x_{\rm H})\] \[c_{\rm corr} = 1+0.5(\frac{\kappa_{\rm max}}{\kappa_{\rm min}}-1)(1-\tanh(w\log _{10}\frac{T}{T_{0}}))\] where \(x_{\rm H}\) is hydrogen mass fraction, \(\kappa_{\rm min}\), \(\kappa_{\rm max}\), \(\kappa_{tot}\), \(a_{i}\), \(T_{0}\) and \(w\) are fitting parameters. Simulations M284, M250, M251 and M252 use another opacity fit to the OPAL opacity (Iglesias and Rogers, 1996) as a function of density, temperature and hydrogen mass fraction: \[\kappa=\sum_{i=0}^{5}a_{i}(t_{7})^{5-i} \tag{9}\] Figure 1: Comparison of adopted base state for the 3D simulations and the MESA radial profile of density and temperature. Quantities are given in their code units. In Eq. 9, \(t_{7}=\log_{10}T-7\) and \[a_{i} =w_{11}a_{11}^{i}+w_{12}a_{12}^{i}+w_{21}a_{21}^{i}+w_{22}a_{22}^{i}\] \[w_{11} =(r_{2}-r)(x_{2}-x_{h})/((r_{2}-r_{1})(x_{2}-x_{1}))\] \[w_{12} =(r_{2}-r)(x_{h}-x_{1})/((r_{2}-r_{1})(x_{2}-x_{1}))\] \[w_{21} =(r-r_{1})(x_{2}-x_{h})/((r_{2}-r_{1})(x_{2}-x_{1}))\] \[w_{22} =(r-r_{1})(x_{h}-x_{1})/((r_{2}-r_{1})(x_{2}-x_{1}))\] \[\qquad\qquad r=\log_{10}\rho-3\log_{10}T+21\] where \(x_{1}\), \(x_{2}\), \(r_{1}\), \(r_{2}\), \(a_{jk}^{i}\) are fitting parameters. We apply a reflecting boundary condition at radius 2670 Mm, and make the heat fluxes at opposite cell interfaces equal for 3 grid cell widths inside this reflecting sphere. We perform a series of 25M\({}_{\odot}\) simulations (Table 1), with varying driving luminosities and radiative thermal conductivity \(k\). Properties such as the mass entrainment rate at the CB at the nominal luminosity are extrapolated from simulations with boosted luminosities. For a luminosity boosting factor \(X\), we have cases with 0, \(X^{2/3}\) and \(X\) boosting factors for radiative diffusion. Henceforth, we refer to them by no diffusion, intermediate diffusion, and high diffusion. 
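For reference, the thermodynamic relations of Eqs. 1-3 and the radiative conductivity of Eq. 7 can be evaluated directly. The snippet below simply restates those formulas in cgs units; the density, temperature, mean molecular weight and opacity passed in the example call are arbitrary illustrative inputs, not values taken from the MESA model.

```python
import numpy as np

# Physical constants in cgs units.
R_GAS = 8.314462618e7     # gas constant, erg mol^-1 K^-1 (used with mu in g/mol)
A_RAD = 7.5657e-15        # radiation constant, erg cm^-3 K^-4
C_LIGHT = 2.99792458e10   # speed of light, cm/s
GAMMA = 5.0 / 3.0

def eos_gas_plus_radiation(rho, T, mu):
    """Specific internal energy, pressure and specific entropy of Eqs. 1-3."""
    e = R_GAS * T / ((GAMMA - 1.0) * mu) + A_RAD * T**4 / rho
    p = R_GAS * rho * T / mu + A_RAD * T**4 / 3.0
    s = (-R_GAS / mu * np.log(rho)
         + R_GAS / ((GAMMA - 1.0) * mu) * np.log(T)
         + 4.0 * A_RAD * T**3 / (3.0 * rho))
    return e, p, s

def radiative_conductivity(rho, T, kappa):
    """Radiative thermal conductivity k = 4acT^3 / (3 kappa rho), Eq. 7."""
    return 4.0 * A_RAD * C_LIGHT * T**3 / (3.0 * kappa * rho)

if __name__ == "__main__":
    # Illustrative interior conditions (assumed, not the actual stratification).
    rho, T, mu, kappa = 3.0, 3.5e7, 0.62, 0.34
    e, p, s = eos_gas_plus_radiation(rho, T, mu)
    print(f"e = {e:.3e} erg/g, p = {p:.3e} erg/cm^3, s = {s:.3e} erg/(g K)")
    print(f"k_rad = {radiative_conductivity(rho, T, kappa):.3e} erg/(cm s K)")
```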
## 3 From the Initial Transient to a Quasi-Steady 3D Flow Here we briefly describe the dynamics of the initial transient which takes place for the first few convective turn-overs from time 0, and the following quasi-steady 3D flow. In our many cases considered here, we find that the visualization looks qualitatively similar regardless of the boosting factor for luminosity and radiative diffusion. See our representative simulation M252 (luminosity and radiative diffusion boosted by a factor of 10000) at [https://ppmstar.org](https://ppmstar.org). In the discussion below, we will point out the effect of radiative diffusion when it matters qualitatively and quantitatively. ### The development of the fully convective core At time 0, the initial state is in perfect hydrostatic equilibrium. The radiative diffusion is transporting heat according to the stratification and opacity. As in Paper I the nuclear burning is emulated as a time-independent Gaussian volume heating \(\sim\exp(-r^{2}/(2\sigma^{2}))\), \(\sigma=280\,\)Mm. Given the temperature gradient, there is the excess heat in the core accumulating due to insufficient radiative energy transport. The center of the core becomes convectively unstable as a result. The central gas parcels rise because of the buoyancy force and thereby convection starts. Because the convective core is almost adiabatic, the moving fluid elements move effortlessly on the same adiabat. The excess heat unable to be carried by the radiative diffusion is now transported by the emerging convection within the core until the rising, relatively buoyant fluid elements encounter the positive entropy gradient where the stratification becomes convectively stable. Once the rising plumes encounter the entropy gradient, the buoyancy force restrains them from going further outward in radius. The interaction between the plumes and the convective-radiative boundary excites IGWs that propagate in the stable envelope. During the first few convective turnovers, the core convection becomes fully turbulent and excites IGWs of a broad range of wavelengths. An analysis of the power spectrum of the IGWs in the stable envelope after the initial transient adjustment of the flow to its 3D degrees of freedom is presented at the end of this section. The convective core soon develops the characteristic dipole circulation pattern that was first seen in the 3D simulations of Porter et al. (2000). It has been noted by many investigators that convection tends to develop convection cells that extend to the largest vertical scale (Hurlburt et al., 1986; Freytag et al., 1996; Porter et al., 2000; Andrassy et al., 2022). In Fig. 
2, when the dipole plume hits the CB and diverges, the flows \begin{table} \begin{tabular}{l r r r r r} \hline ID & grid & \(L/L_{*}\) & \(K/K_{*}\) & \(t_{\rm end}/\)h & \(\dot{M}/[{\rm M}_{\odot}\) yr\({}^{-1}]\) \\ \hline M200 & 768\({}^{3}\) & 1000.0 & 0.0 & 1817.6 & \(6.82\times 10^{-1}\) \\ M201 & 1152\({}^{3}\) & 1000.0 & 0.0 & 3556.3 & \(6.85\times 10^{-1}\) \\ M202 & 1152\({}^{3}\) & 100.0 & 0.0 & 2439.2 & \(3.60\times 10^{-2}\) \\ M203 & 1152\({}^{3}\) & 3162.0 & 0.0 & 1468.1 & \(2.41\times 10^{0}\) \\ M204 & 1152\({}^{3}\) & 1000.0 & 100.0 & 3362.9 & \(6.53\times 10^{-1}\) \\ M205 & 1152\({}^{3}\) & 100.0 & 21.5 & 2648.4 & \(3.91\times 10^{-2}\) \\ M206 & 1152\({}^{3}\) & 3162.0 & 215.4 & 1549.8 & \(2.16\times 10^{0}\) \\ M207 & 1152\({}^{3}\) & 1000.0 & 1000.0 & 3838.4 & \(3.69\times 10^{-1}\) \\ M208 & 1152\({}^{3}\) & 100.0 & 100.0 & 2446.4 & \(2.00\times 10^{-2}\) \\ M209 & 1152\({}^{3}\) & 3162.3 & 3162.3 & 1465.3 & \(1.36\times 10^{0}\) \\ M210 & 1728\({}^{3}\) & 1000.0 & 1000.0 & 3495.3 & \(3.91\times 10^{-1}\) \\ M211 & 768\({}^{3}\) & 1000.0 & 100.0 & 2089.7 & \(6.31\times 10^{-1}\) \\ M212 & 1152\({}^{3}\) & 31.62 & 31.62 & 2297.4 & \(6.03\times 10^{-3}\) \\ M213 & 768\({}^{3}\) & 1000.0 & 1000.0 & 3537.5 & \(3.72\times 10^{-1}\) \\ M284 & 2688\({}^{3}\) & 1000.0 & 1000.0 & 3418.4 & \(3.38\times 10^{-1}\) \\ M250\({}^{\dagger}\) & 896\({}^{3}\) & 3162.3 & 3162.3 & 20769.0 & \(5.77\times 10^{-1}\) \\ M251\({}^{\dagger}\) & 896\({}^{3}\) & 1000.0 & 1000.0 & 18444.4 & \(1.74\times 10^{-1}\) \\ M252\({}^{\dagger}\) & 896\({}^{3}\) & 10000.0 & 10000.0 & 25137.6 & \(1.40\times 10^{-1}\) \\ \hline \end{tabular} \end{table} Table 1: Simulation summary providing the run ID, the grid, luminosity \(L\) boosting factor, thermal conductivity \(k\) boosting factor, end time of the run, and entrainment rate, \(*\) denotes values from the MESA model. The runs labelled by \(\dagger\) are long-duration, the entrainment rates of which decline over time. Hence we fit them by a straight line from dump 5000 to 6000 to compute the corresponding entrainment rates. become mostly horizontal near the boundary, bringing along buoyant materials from the boundary. Entrainment of the fluid from the stable layer into the convection zone is facilitated by the boundary layer separation (Woodward et al., 2015). We define dynamical equilibrium as a state in which the kinetic motions, characterized by kinetic energy density, buoyancy driving, work by pressure field, become statistically time-independent on the convective timescale, demonstrated by the horizontal velocity in Fig. 3. While in dynamic equilibrium the mass entrainment rate decreases as the simulation approaches a state closer to thermal equilibrium. The entrainment analysis can be found in SS3.3 using the same methodology as in Paper I. Figure 2: Images of a thin slice through the center of the star of the horizontal velocity component (top row) of M201 (left column, no radiative diffusion) and M207 (right column, 1000x radiative diffusion), and of the vorticity magnitude (bottom row). Movies of these quantities are available at [https://ppmstar.org](https://ppmstar.org). In stellar evolution models the CB is usually defined as the radius at which the adiabatic gradient is equal to the radiative gradient, also known as the SB. 
Based on our discussion of a very long-duration simulation in SS4, we choose to define the CB in this work as the radius where, in statistical dynamical and thermal equilibrium, the radial derivatives of the radiative and convective heat fluxes as well as the convective heat flux itself and the kinetic energy dissipation rate all vanish. The CB, thus defined, is different from the SB, because at the SB the radial derivative of the radiative heat flux does not vanish. ### Dynamics and kinematics in dynamical equilibrium The convection rapidly organizes itself such that the total convective flux becomes the luminosity minus the total radiative energy flux (Fig. 4, Eq. 11, Eq. 12). Therefore, our simulated star reaches a dynamical equilibrium over the first few convective turn-overs and stays in dynamical equilibrium thereafter. #### 3.2.1 Effect of radiative diffusion Fig. 5 shows how \(f_{\rm V}\), tangential velocity, and the Brunt-Vaisala (BV) frequency squared \(N^{2}\), Eq. 10, evolve for different strengths of radiative diffusion at 1000x the nominal luminosity. \[N^{2}=\frac{g\delta}{H_{p}}(\nabla_{\rm ad}-\nabla_{\rm star})+\frac{g\delta}{ H_{p}}\frac{\phi}{\delta}\nabla_{\mu}\equiv N_{t}^{2}+N_{\mu}^{2} \tag{10}\] where \[\delta=-(\frac{\partial ln\rho}{\partial lnT})_{p,\mu},\phi=( \frac{\partial ln\rho}{\partial ln\mu})_{p,T}\] \[\nabla_{\rm star}=\frac{dlnT}{dlnp},\nabla_{\rm ad}=(\frac{dlnT} {dlnp})_{S},\nabla_{\mu}=\frac{dln\mu}{dlnp}\] and \(\rho\) is density, T temperature, \(H_{p}\) pressure scale height, \(\mu\) mean molecular weight, \(S\) specific entropy, \(\nabla_{\rm star}\) the actual temperature gradient, \(\nabla_{\rm ad}\) the adiabatic gradient, and \(\nabla_{\rm star}-\nabla_{\rm ad}\) is the superadiabaticity. Outward from the SB by about 120 Mm (10% in radius) in the initial state of the simulation, \(N^{2}\) has a strong, slowly migrating peak reflecting the sudden change of entropy mainly caused by the change in \(\mu\) at that location. Perhaps the most important effect of the radiative diffusion is that, as this is increased, the position of the composition change, traced by the \(f_{\rm V}\) profile, moves outward less rapidly. This effect can also be seen in the position of the \(N^{2}\) peak feature. This behavior can be explained by the fact that when we add radiative diffusion, we introduce into the problem a mechanism for carrying the heat introduced into the convection zone outward through the stably stratified envelope. In the absence of this mechanism, in addition to entraining high entropy materials from the envelope, heat must pile up in the convection zone, and this must cause it to expand. This is analogous to the helium shell flash in that the ignition of helium fusion in a thermal pulse produces more energy temporarily than can be carried away by radiative diffusion, causing the star to expand and brighten (Herwig et al., 2006). In our high diffusion case, heating by nuclear burning is on average, removed by the heat energy flowing through the reflecting sphere at our Figure 4: Total convective (radiative) energy flux for the no-diffusion (M201, 0x), intermediate-diffusion (M204, 100x) and high diffusion (M207, 1000x) simulations with 1000x luminosity enhancement at dump 3500. The curves are smoothed by using moving averages three times over a window 120 Mm wide and time-averaged over 100 dumps \(\sim~{}140\) hr. The fluxes are defined in Eq. 11 and Eq. 12. Temperature, opacity, and density in Eq. 12 are spherical averages. 
Figure 3: The overall magnitude of horizontal velocities 0.5 \(H_{p}\) below and above the \(N^{2}\) peak (see Eq. 10 ) becomes constant after an initial transient (400 hours) when we average over the persistent fluctuations. outer boundary in the form of radiation (Fig. 4). The total convective flux and total radiative energy flux are calculated by Eq. 11 and Eq. 12. \[F_{\rm conv}(r)=\iint\limits_{\rm sphere\ r}(p+\rho e+\frac{1}{2} \rho u^{2})u_{r}dA \tag{11}\] \[F_{\rm rad}(r)=-\frac{4\pi r^{2}c}{3\kappa\rho}\frac{\partial(aT ^{4})}{\partial r} \tag{12}\] The convective flux is the flux of enthalpy plus the kinetic energy summed over the sphere at radius \(r\). \(c\) is the speed of light and \(\kappa\) is the opacity. In cases of no diffusion, there is no diffusive heat flux across the stably stratified gas in the outer part of our computational region. The heated convective core pushes the envelope resulting in positive convective flux at all radii. We measure that about 55% of the nuclear heating becomes potential energy by expanding the convective core and compressing the stable envelope (i.e. redistributing mass in a static gravitational potential), while 45% becomes internal energy by heating the star up. In the intermediate diffusion case M204, 42% of the nuclear heating expands the core and 46% heats the star up. About 10% of the nuclear heating is transported outward by radiative diffusion in that case. In Fig. 5, the convective velocity is slightly smaller in the high diffusion case but the profile of the tangential velocity remains similar. In all cases, the kinetic energy is negligible, once a dynamical equilibrium is established, it mostly does not change over time and stays negligible. The effect on the motion in the stable envelope, i.e., IGWs, is discussed in SS3. The differences in the heights and shapes of the \(N^{2}\) peaks, between the cases of no diffusion and intermediate diffusion at the same time (1000 or 2000 hours), are very small (Fig. 5), because most of the heat injected (90% and 100%) piles up in the convective core which leads to quantitatively similar dynamics. However, in the case of high diffusion, the change of location and shape in the \(N^{2}\) peak is noticeably smaller than in the other two cases given the same amount of time (Fig. 5). However, the overshoot and undershoot of the convective flux, and the overshoot of radiative flux at 1500 Mm suggest the thermal structure is adjusting, at a small rate. Hence, any significant change in the stratification for the high diffusion cases happens on a longer timescale than no or intermediate diffusion. To reduce the computational cost of studying the evolution on a longer timescale, we investigate the effect of enhancing luminosity in the next section and the possibility of accelerating the evolution by enhancing the luminosity in SS4. Figure 5: Profiles of \(N^{2}\), \(f_{\rm V}\), \(|U_{t}|\) of the simulations with 1000x luminosity enhancement (M201: no diffusion, M204: \(K\sim L^{2/3}\), M207: \(K\sim L\), all at the 1152\({}^{3}\) grid resolution.) The heat piling up in the no or intermediate diffusion cases explains the fact that the star lifts the convective core and compresses the envelope. This process will continue and completely change the stratification because the total energy of our simulation keeps increasing in these two cases. 
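The two fluxes of Eqs. 11 and 12 are evaluated from spherically averaged radial profiles. A minimal sketch follows, assuming those averages have already been extracted from the 3D data; the sample profiles and the name `avg_enthalpy_ke_ur` below are analytic stand-ins and a placeholder, not simulation output or code from PPMstar.

```python
import numpy as np

A_RAD = 7.5657e-15          # radiation constant, erg cm^-3 K^-4
C_LIGHT = 2.99792458e10     # speed of light, cm/s

def convective_flux(r, avg_enthalpy_ke_ur):
    """Eq. 11: 4*pi*r^2 times the spherical average of (p + rho*e + 0.5*rho*u^2) u_r."""
    return 4.0 * np.pi * r**2 * avg_enthalpy_ke_ur

def radiative_flux(r, T, kappa, rho):
    """Eq. 12: -(4*pi*r^2*c) / (3*kappa*rho) * d(aT^4)/dr from spherically averaged profiles."""
    d_aT4_dr = np.gradient(A_RAD * T**4, r)
    return -4.0 * np.pi * r**2 * C_LIGHT / (3.0 * kappa * rho) * d_aT4_dr

if __name__ == "__main__":
    # Placeholder radial profiles (analytic toys in cgs units, not the simulation output).
    r = np.linspace(1.0e10, 2.5e11, 200)                         # cm
    T = 3.5e7 * (r / r[0]) ** -0.8                               # K
    rho = 3.0 * (r / r[0]) ** -1.5                               # g/cm^3
    kappa = np.full_like(r, 0.34)                                # cm^2/g
    avg_enthalpy_ke_ur = 1.0e12 * np.exp(-((r - 8e10) / 5e10) ** 2)  # erg cm^-2 s^-1 stand-in
    L_conv = convective_flux(r, avg_enthalpy_ke_ur)
    L_rad = radiative_flux(r, T, kappa, rho)
    print(f"luminosity carried by convection at mid-radius: {L_conv[100]:.3e} erg/s")
    print(f"luminosity carried by radiation  at mid-radius: {L_rad[100]:.3e} erg/s")
```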
Hence, to simulate a realistic star in thermal equilibrium, the only reasonable scenario is the high diffusion one, and we later discuss the effect of enhancing luminosity using the high diffusion cases only. In addition, as discussed in SS3.3, the entrainment continues at a relatively constant rate, which suggests that the star is still adjusting its stratification and has not yet reached a thermal equilibrium. In such an equilibrium, all the temporal dependence on time scales longer than several large eddy turn-overs in the convection zone could be expected to very nearly vanish. By definition, the total heat content will be radiated away at the rate of the luminosity on a thermal timescale, if there is no nuclear heating. Therefore, it is not feasible to investigate the dynamics on a thermal timescale in the cases of no or intermediate diffusion without disrupting the thermodynamical structure completely. Hence, the discussion on the evolution on a thermal timescale in SS4 and SS5 focusses on the high diffusion cases. #### 3.2.2 Effect of enhancing luminosity Fig. 6 shows the profiles of \(N^{2}\), \(f_{\rm V}\), and horizontal velocity for a series of runs in which we vary the luminosity. For each boosting factor, we also enhance the radiative diffusion by the same factor. Cases of enhancement factors of 31.62, 100, 1000, and 3162 are used. For the two lowest luminosity cases, we observed essentially no change within 2000 hours in the profile of \(f_{\rm V}\), and in the position and the shape of the \(N^{2}\) peak during these simulations. This certainly does not mean that changes would not result were these two simulations run longer in time. Runs M207 and M209, with luminosity enhancement factors of 1000 and 3162, reshape the initial \(f_{\rm V}\) radial profile within relatively short times of less than 2500 hours. After this intial reshaping in these high-power cases, the \(f_{\rm V}\) radial profile translates while maintaining its shape as the gas from above the convection zone is entrained. As will be discussed in SS4, boosting the nuclear heating and the radiative diffusion by a common factor can be regarded as accelerating the time rate of change of the stellar model. In order to probe the long-time behavior of the stellar model, this balanced enhancement of the luminosity and radiation diffusion is appealing for our explicit PPMstar code, because it dramatically lowers the cost of finding the long-time behavior. Fig. 7 confirms that the magnitude of velocity scales with \(L^{1/3}\) in the presence of radiation pressure and radiative diffusion. This scaling is also observed in Paper I. #### 3.2.3 Convergence In Fig. 8, the profiles of \(N^{2}\), \(f_{\rm V}\) and horizontal velocity are presented for a sequence of simulations carried out at different grid resolutions to show the effect of refining our computational grid. These simulations are performed with a luminosity and radiation diffusion enhancement factor of 1000. We see that the \(N^{2}\) peak becomes higher with increasing grid resolution. However, the location of the \(N^{2}\) peak is roughly the same regardless of the resolution. The radial profile of \(f_{\rm V}\) becomes steeper with grid refinement, and it is clear that this steepening is not complete even on the highest resolution grid shown in the figure. Although there is some statistical noise evident in the plots of the horizontal component of the velocity in Fig. 
8, it is clear that these simulations have converged upon mesh refinement to a well-defined state. Even the radial profiles of \(f_{\rm V}\) near the CB appear to have converged in terms of the position of the sharp increase in \(f_{\rm V}\) though not in its steepness. The interpretation of the \(N^{2}\) peak and the slope of the \(f_{\rm V}\) not converging on grid refinement is that we have not converged on mixing. In SS4, convergence will be shown for turbulent dissipation measured from the simulations and for vorticity in the stable envelope. #### 3.2.4 Mixing length parameter We first check the efficiency of convection. The mean free path of a photon inside our star is of order of \(1-10\) cm, i.e. our star is opaque and radiative transport of energy can be treated as a diffusion process. We take \(r_{c}=1500\) Mm as the radius of our convective core, the thermal adjustment timescale of the convective core will be \(\tau_{K}=r_{c}^{2}/K\). The convective timescale is \(\tau_{c}=2r_{c}/v_{c}\). Figure 7: Luminosity versus convective velocity magnitude in the convection zone at 1000 Mm averaged over 20 dumps. All cases are high diffusion. For our M207 case, the boosting factor for the radiative diffusion can be interpreted as decreasing the thermal conductivity by a factor of 1000. Given that, \[\frac{\tau_{c}}{\tau_{K}}=\frac{2K}{v_{c}r_{c}}\] is about \(1.06\times 10^{-4}\). Therefore, the convection in our simulations is efficient in transporting excess heat. We measure the super-adiabatic temperature gradient in our simulations and hence can derive numerical values of the standard mixing length parameter \(\alpha\). In MLT, the total convective flux is \[F_{\rm conv}=4\pi r^{2}\rho c_{p}\sqrt{p/\rho}(\nabla_{\rm star}-\nabla_{\rm ad })^{3/2}\alpha^{2} \tag{13}\] where symbols have their usual meaning, is proportional to \(\alpha^{2}\)(Prialnik, 2000). From the superadiabaticity in Fig. 9, we see that the temperature gradient is nearly adiabatic throughout the convective core (\(\nabla_{\rm star}-\nabla_{\rm ad}\sim~{}10^{-4}\)). The convective core is slightly superadiabatic inside 1000 Mm and becomes slightly subadiabatic beyond 1000 Mm. This is where the radial entropy gradient \(dS/dr\) becomes positive and the convective flows start to encounter the marginally stable stratification. Though the convective stability criterion indicates the stratification is stable at 1000 Mm and outward, this slightly subadiabatic temperature gradient cannot bring the convetive motion to a halt. The flows continue before arriving at the very much more significant entropy gradient at the CB. The convective flux is propotional to \((\nabla_{\rm star}-\nabla_{\rm ad})^{3/2}\). However, the temperature gradient is not superadiabatic throughout the entire convective core (Fig. 9). If we take the approach in Porter et al. (2000), redefining the superadiabaticity as \(\nabla_{\rm star}-f\nabla_{\rm ad}\) where \(f=0.999\), we find that the entire convective core is superadiabatic and the mixing length parameter \(\alpha\), solved from Eq. 13, is in the range from 0.4 to 1.2 (Fig. 10). This value of \(f\) is different from the value 0.98 used in Porter et al. (2000). Chan & Sofia (1989) suggest that the superadiabaticity might depend on the aspect ratio of the convective spherical shell and upon the equation of state. 
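The mixing length parameter of Fig. 10 follows from inverting Eq. 13 for \(\alpha\) using the redefined superadiabaticity \(\nabla_{\rm star}-f\nabla_{\rm ad}\). A minimal sketch is given below; it applies Eq. 13 exactly as written, and every input value is a placeholder in cgs units rather than a measured profile.

```python
import numpy as np

def mixing_length_alpha(F_conv, r, rho, c_p, p, grad_star, grad_ad, f=0.999):
    """Solve Eq. 13 for alpha with the redefined superadiabaticity grad_star - f*grad_ad.

    Note: Eq. 13 is used exactly as printed; depending on the MLT formulation an
    additional factor of T can appear in the flux, which would rescale alpha.
    """
    super_ad = grad_star - f * grad_ad
    super_ad = np.where(super_ad > 0.0, super_ad, np.nan)   # defined only where superadiabatic
    alpha_sq = F_conv / (4.0 * np.pi * r**2 * rho * c_p
                         * np.sqrt(p / rho) * super_ad**1.5)
    return np.sqrt(alpha_sq)

if __name__ == "__main__":
    # Placeholder values (cgs), not taken from the simulations.
    alpha = mixing_length_alpha(F_conv=1.0e39, r=1.0e11, rho=2.0, c_p=3.0e8,
                                p=5.0e15, grad_star=0.4008, grad_ad=0.40)
    print("alpha =", alpha)
```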
We find that \(\nabla_{\rm star}-\nabla_{\rm ad}\) is positive inside 1000 Mm and negative beyond 1000 Mm for all our different heating rates, but its magnitude increases with the boosting factor. This is qualitatively in agreement with the MLT assertion that the convective flux scales with superadiabaticity to the power of 3/2. ### Mass entrainment rate We determine the entrainment rate of the envelope gas of from above the CB into the convection zone using the same methodology as in Paper I. As in Paper I we define the entrained mass as the total mass of the envelope material within \(r_{b}\). \(r_{b}\) is the location of the maximum gradient of \(f_{\rm V}\) less one \(f_{\rm V}\) scale height. This entrained mass evolves linearly with time, and one example is shown in Fig. 11. Compared to the \(P_{\rm gas}\) only case (M114 in Paper I) the entrainment rate is 14% smaller when adding \(P_{\rm rad}\)(M201), and decreases by 50% when also adding radiative diffusion (M207). We estimate the entrainment rate at nominal heating by extrapolating separately from three sets (no, intermediate and high diffusion) of simulations (Fig. 12). The entrainment rates for no diffusion and intermediate dif Figure 10: The mixing length parameter squared \(\alpha^{2}\) of M201, M204, M207 averaged over 100 dumps and 30 Mm. Figure 9: The superadiabaticity (\(\nabla_{\rm star}-\nabla_{\rm ad}\)) of M201 (1000x heating, 0x diffusion), M204 (1000x heating, 100x diffusion) and M207 (1000x heating, 1000x diffusion), averaged over 100 dumps and 30 Mm. fusion are practically the same. The difference between the entrainment rates extrapolated from these two sets are due to the uncertainty of the fitting slope. The extrapolated entrainment rate cannot persist for a significant fraction of the main-sequence lifetime (SS4). We believe instead that the large entrainment rates that we observe after our simulations initially establish a dynamical equilibrium, are the result of thermal non-equilibrium. We will discuss the development of penetrative convection on a longer time scale and the effect on the entrainment of the resulting subadiabatic temperature gradients within the penetrative region between the SB and the CB in SS4 and SS5 ### IGWs One important consequence of radiative diffusion is damping of IGWs in the stably stratified layers of the star (Zahn et al., 1997). We study this effect of radiative diffusion in our model star by observing the wave motions in the envelope surrounding the convective core. Using the same approach as in Paper II, we decompose the radial component of the velocity field into complex spherical harmonics coefficients using the SHTools package (Wieczorek and Meschede, 2018). We then perform a Fourier transform on each coefficient time-series. Then we use the SHTools package to calculate the power spectral density of the radial velocity oscillations normalized by degree l for each frequency bin. The time interval between data dumps in our simulations determines an upper limit to the frequencies that we can observe. This upper limit is about 200 \(\mu\) Hz for the simulations reported here, corresponding to \(\approx 43\) min between dumps. In these simulations, we have located our outer boundary so that the radius of the convective core is about 60% of the boundary radius. The \(l\) index of the spherical harmonics gives the number of nodes going along a meridian from one pole to the other. 
Hence at the CB (60% of the maximal radius in our computational region), with a \(1152^{3}\) grid, we can resolve, in principle, spherical harmonics up to \(l=\frac{\pi r_{\rm CB}}{4\cdot\Delta x}\approx~{}250\), where \(r_{\rm CB}\) is the radius of the CB and \(\Delta x\) the cell width, because the data we use in this analysis has been averaged over cubical bricks of grid cells 4 on each side before being written to disk (Stephens et al., 2021). As shown by the velocity profile in Fig. 5, the convection in the core is less vigorous (smaller \(|U|\)) in high diffusion. Therefore, the excitation of IGWs (Edelmann et al., 2019) becomes less efficient due to radiative diffusion. Radiative diffusion damps both the IGWs and the excitation of IGWs, resulting in the power spectra we observe. As shown in Fig. 13 for the radial velocity component, most of the power of the wave motions is concentrated at frequencies below the maximum Brunt-Vaisala (BV) frequency in the stable enve Figure 11: The time evolution of the radius of maximal \(d\)fv\(/dr\) minus one \(f_{\rm V}\) scale height, and the entrained masses of M201. Figure 12: Entrainment rates of simulations (hollow symbols); extrapolated entrainment rates at nominal heating (solid symbols). is also concentrated in \(l^{\prime}s\) smaller than 80. Modes with small-scale structures \(l>100\) are damped in simulations with high diffusion, and less so in intermediate diffusion. Fig. 14 shows the damping effect in terms of power ratio of M204 and M207 to M201. Modes of \(l>80\) are reduced in power by more than 95% in high diffusion and by 50% in intermediate diffusion. However, for the more important frequencies below the BV frequencies radiative damping in high-diffusion simulations reduces the wave amplitudes by a factor 2.5 to 5. In Paper I, a formula is considered for predicting the diffusion coefficient that might produce material mixing in the stably stratified envelope due to IGW-induced motions. According to that relation the diffusion coefficient should scale with the square of the vorticity in the envelope, among other factors. In that study, working with simulations without radiative diffusion, it was found that this envelope vorticity shows no sign of convergence under grid refinement. The power spectra in Fig. 14 show that radiative damping of the high \(l\) IGW modes in our high diffusion cases allows the vorticity in the envelopes of these simulations to converge with mesh refinement. In Fig. 15, the vorticity in the envelope does not change when the grid is refined in the presence of radiative diffusion. The amount of radiative damping of the IGWs in the envelope is of interest when we consider the possibility that these IGWs cause material mixing in the envelope. The short wavelength waves that are damped substantially, as seen in Fig. 13, have no effect upon the asteroseismology observations of the waves at the stellar surface of massive stars, as they would be located in the region of white noise (Bowman et al., 2020). However, it is possible that the short wavelength waves have a significant impact on the efficiency of material mixing. This potential for IGW envelope mixing is explored at length in Paper I. Here we see that the short wavelength waves are damped by radiation diffusion. It is generally believed that radiative diffusion can play an essential role in IGW-induced mixing (e.g. Townsend (1958), Zahn (1974), Press (1981), Garaud et al. (2017), Paper I). 
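The construction of the IGW power spectra in Fig. 13 can be sketched with the Python interface of SHTools (pyshtools). The outline below assumes the radial velocity at one radius has already been interpolated from the Cartesian grid onto an equally spaced latitude-longitude grid for each dump; the use of real rather than complex coefficients, the normalization, and the synthetic input are simplifications and placeholders, not the actual pipeline of Paper II.

```python
import numpy as np
import pyshtools as pysh

def igw_power_spectrum(u_r_series, dt):
    """Power of radial-velocity oscillations as a function of degree l and frequency.

    u_r_series : array of shape (n_dumps, nlat, nlon) with nlon = 2*nlat,
                 u_r sampled on an equally spaced lat-lon grid at one radius.
    dt         : time between dumps in seconds.
    """
    n_dumps, nlat, _ = u_r_series.shape
    lmax = nlat // 2 - 1
    coeffs = np.empty((n_dumps, 2, lmax + 1, lmax + 1))
    for i, snapshot in enumerate(u_r_series):
        grid = pysh.SHGrid.from_array(snapshot)          # Driscoll-Healy sampling assumed
        coeffs[i] = grid.expand().coeffs
    freqs = np.fft.rfftfreq(n_dumps, d=dt)               # Fourier transform each (l, m) series
    coeffs_f = np.fft.rfft(coeffs, axis=0)
    power = np.sum(np.abs(coeffs_f) ** 2, axis=(1, 3))   # sum over m; shape (n_freq, lmax+1)
    return freqs, power

if __name__ == "__main__":
    # Tiny synthetic example: one oscillating large-scale pattern plus noise.
    rng = np.random.default_rng(1)
    nlat, nlon, n_dumps, dt = 64, 128, 256, 43.0 * 60.0  # ~43 min between dumps
    lat = np.linspace(90.0, -90.0, nlat, endpoint=False)
    pattern = np.sin(np.deg2rad(lat))[:, None] ** 2 * np.ones((nlat, nlon))
    t = np.arange(n_dumps) * dt
    series = (np.sin(2 * np.pi * 50e-6 * t)[:, None, None] * pattern
              + 0.01 * rng.standard_normal((n_dumps, nlat, nlon)))
    freqs, power = igw_power_spectrum(series, dt)
    print("peak frequency [uHz]:", freqs[np.argmax(power.sum(axis=1))] * 1e6)
```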
## 4 On the long-term evolution The entrainment rate implied from linear growth of the entrained mass is too large to be compatible with the stellar model and observational properties (SS3.3). Similar to the argument in section 3 of Paper I, if we assume that this entrainment rate applies for the entire \(6.91\times 10^{6}\) yr main sequence lifetime of a 25 M\({}_{\odot}\) star, a total entrainment of 630 M\({}_{\odot}\) would be implied. This indicates that the entrainment we extrapolate cannot persist for even a fraction of the main sequence lifetime before the star goes through significant evolutionary changes. A motivation for the present work is to investigate whether or not including radiation pressure and radiative diffusion can result in entrainment that is more consistent with the main sequence stage of the stellar model. We have seen in SS3 above that this additional physics causes the entrainment to decrease by only about 30%. However, the linear growth of the entrained mass, the motion of the BV frequency peak, and the overshoot and undershoot of fluxes at the CB (Fig. 4) suggest that the simulated star is still in the Figure 13: Spectral power density of \(u_{r}\) at 19M\({}_{\odot}\) of M201 (top, 1000x heating, 0x diffusion), M204 (center, 1000x heating, 100x diffusion) and M207 (bottom, 1000x heating, 1000x diffusion). process of thermal adjustment. Nevertheless, the velocity distribution in both the convective core and the radiative envelope has reached a dynamical equilibrium. We would like to establish whether or not continued entrainment and motion of the CB outward might alter the character of the flow in such a way that the entrainment rate might slowly diminish. This possibility is suggested by the recent work of Anders et al. (2022) investigating the long-term secular changes driven by thermal adjustment in a simplified Boussinesqq, plane parallel, penetrative convection context. Our explicit numerical technique in PPMstar requires us to explicitly follow sound wave signals in the low Mach number stellar flow. We have seen in Paper I and also here in Fig. 12 that we can overcome this limitation by appealing to empirically observed scaling laws. By enhancing the luminosity and the radiative diffusion by a common factor \(X\), we speed up the evolution by a similar factor (actually slightly larger than \(X\), as we will discuss later). In Paper I we saw that under these circumstances the velocities in the convection zone are enhanced by the factor \(X^{1/3}\). If this enhancement of the velocities leaves them still at low Mach numbers, we do not expect the character of the flow to change significantly. As a rule of thumb, we might attempt to hold the resulting Mach numbers below 0.1, for which compressibility effects should be roughly of 1% importance. A possible consideration is that we might raise velocities of wave motions in the stably stratified envelope to the level that makes the induced IGWs modes break. However, no wave breaking in the stable envelope shows up in the visualizations of any of our flows. To validate this technique for speeding up the evolution of our flows, we can generate a series of simulations at modest grid resolution that have different enhancement factors \(X\) and that can be compared over at least an initial time interval of a reasonable length, long enough to go through a noticeable re-adjustment of thermal structure. 
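The velocity scaling that underlies this acceleration strategy can be checked with a simple log-log fit, as in Fig. 7. The sketch below assumes pairs of boost factor and convective speed have been measured; the numbers are illustrative placeholders, not the measured values.

```python
import numpy as np

# Hypothetical measurements: luminosity boost factor and convective speed (code units).
boost = np.array([31.62, 100.0, 1000.0, 3162.0, 10000.0])
v_conv = np.array([9.8, 14.6, 31.5, 46.0, 67.5])   # illustrative values only

# Fit v = a * L^b in log space; b should come out close to 1/3 if the scaling holds.
b, log_a = np.polyfit(np.log10(boost), np.log10(v_conv), 1)
print(f"fitted exponent b = {b:.3f} (expected ~1/3)")
print(f"prefactor a = {10**log_a:.3f}")
```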
### Key properties of the long-term evolution We have performed such a series of simulations for the 25 M\({}_{\odot}\) star which have enhancement factors \(X\) = 1000, 3162, and 10000. These all have a grid resolution of \(896^{3}\) cells, and all were run for relatively long periods of 507, 1189, and 1054 days for the star. For the case of largest \(X\), this time duration is comparable to the thermal timescale of the simulated part of the 25 M\({}_{\odot}\) star, namely \(GM^{2}/2RL\approx 1000\)d, where \(R=2500\) Mm and \(L=10000L_{*}\). This should be sufficient for the flow to relax to a state much closer to thermal equilibrium. In the top panel of Fig. 16, we show the outward movement of the BV frequency peak. This peak marks the location within the radial entropy profile where the gradient is largest. This is also the location of the sudden jump in \(f_{\rm V}\), the fractional volume of the stably stratified envelope gas. It is evident that the outward motion of the CB is continually slowing down as this simulation proceeds. The CB is still moving at the last time shown, but clearly it has slowed considerably. Looking at Fig. 16, we notice that as the outward motion of the CB slows, there is an increasingly large region Figure 14: Power of radial velocity of M201, M204, M207, power ratio of M204 and M207 to M201 as functions of degree \(l\) (Top: at 19 \(M_{\odot}\)) and as functions of frequency (Bottom). Figure 15: Vorticity of simulations with luminosity and radiative diffusion enhanced by 1000: M213 (\(768^{3}\)), M207 (\(1152^{3}\)), M210 (\(1728^{3}\)), M284 (\(2688^{3}\)). Although the vorticity increases with grid resolution inside the convection zone, it does not do so in the stably stratified envelope. This behavior has consequences for our ability to estimate gravity-wave-based mixing rates in the envelope using simulations with only modest grid resolution. inside the CB (for time 17188 h between \(r=1600\) and 1750 Mm where the BV frequency rises in the absence of any substantial contribution from the composition gradient. This feature of the later flow structures causes the convection to be reduced in intensity without causing additional entrainment. It would seem that this is a necessary feature for the entrainment rate to be diminished. The positive entropy gradient that is established in the growing penetration region between the SB and the CB, results from a balance between convective mixing of entropy which tends to reduce this gradient, and the small region of negative gradient of the radiative diffusion flux, shown in Fig. 17, which tends to build up the gradient. There is no corresponding mechanism to counteract the convective mixing of the composition, because the negative radiative diffusion flux gradient deposits entropy and has no effect upon the gas composition. Hence we see that \(f_{\rm V}\) is efficiently mixed, even in the penetration region. In Fig. 16, the radiative gradient \[\nabla_{\rm rad}\equiv 3\kappa LP/(16\pi acGmT^{4})\,\] the actual gradient \(\nabla_{\rm star}\), and adiabatic gradient \(\nabla_{\rm ad}\) are plotted. The radiative gradient is defined as the gradient required so that all the luminosity is carried outward by radiative diffusion. The location, at roughly 1400 Mm, of the SB, where \(\nabla_{\rm ad}=\nabla_{\rm rad}\), does not change much during the course of the simulation. The actual gradient is strictly adiabatic inside the SB at \(t=0\) by design via initialization. 
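Locating the SB amounts to finding where the radiative gradient defined above first falls below the adiabatic gradient. A minimal sketch in cgs units follows; the single-point inputs, the toy \(\nabla_{\rm rad}\) profile, and the constant \(\nabla_{\rm ad}=0.4\) are illustrative assumptions, not values from the simulations.

```python
import numpy as np

A_RAD = 7.5657e-15        # radiation constant, erg cm^-3 K^-4
C_LIGHT = 2.99792458e10   # speed of light, cm/s
G_NEWTON = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2

def radiative_gradient(kappa, L, P, m, T):
    """nabla_rad = 3*kappa*L*P / (16*pi*a*c*G*m*T^4)."""
    return 3.0 * kappa * L * P / (16.0 * np.pi * A_RAD * C_LIGHT * G_NEWTON * m * T**4)

def schwarzschild_boundary_index(nabla_rad, nabla_ad):
    """Index of the first point, moving outward, where nabla_rad drops below nabla_ad."""
    return int(np.argmax(nabla_rad <= nabla_ad))

if __name__ == "__main__":
    # Single-point evaluation with placeholder cgs values (not the stratification).
    print("nabla_rad example:",
          round(radiative_gradient(kappa=0.34, L=1.2e39, P=1.0e15, m=4.5e34, T=1.5e7), 3))
    # Toy nabla_rad profile that decreases outward through an illustrative nabla_ad = 0.4.
    r = np.linspace(1.0e10, 2.5e11, 500)                 # cm
    nabla_rad = 1.2 * np.exp(-(r - r[0]) / 8.0e10)       # illustrative profile, not a model
    i_sb = schwarzschild_boundary_index(nabla_rad, 0.4)
    print(f"SB of the toy profile at r = {r[i_sb]:.2e} cm")
```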
When the convection is fully developed, the actual gradient becomes slightly super-adiabatic inside 1000 Mm and slightly sub-adiabatic above 1000 Mm (Fig. 9) and gradually approaches the radiative gradient above the SB, as seen in Fig. 16. The outward motion of the CB noted earlier slows down, which is also shown by the change of the actual gradient with time. The penetration region, where the convective flux is negative above the SB, ends at 1850 Mm where the actual gradient starts to follow the radiative gradient, and the full luminosity is then carried outward by radiative diffusion alone (Fig. 17).

### The governing equations

Similar to Anders et al. (2022); Roxburgh (1989); Arnett et al. (2015) (see also Korre & Featherstone (2021)), we attempt to model the convection zone by reducing the full set of hydrodynamic equations to 1D with reasonable assumptions. The governing hydrodynamic equations are the following: \[\rho\frac{\partial\mathbf{u}}{\partial t}+\rho(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla p+\rho\mathbf{g} \tag{14}\] \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0 \tag{15}\] \[T\frac{\partial}{\partial t}(\rho S)+T\nabla\cdot(\rho S\mathbf{u})=\epsilon\rho-\nabla\cdot\mathbf{F} \tag{16}\] where \(S\) is the specific entropy, \(\epsilon\) the rate of nuclear energy generation per unit mass, and \(\mathbf{F}\) the heat flux by radiative diffusion. These equations (Eq. 14 - Eq. 16) are equivalent to the Euler equations in conservation form solved by PPMstar. Taking the dot product of the equation for the conservation of momentum, Eq. 14, with the velocity results in the equation for kinetic energy \[\frac{\partial}{\partial t}\Big(\frac{1}{2}\rho u^{2}\Big)+\nabla\cdot\Big(\frac{1}{2}\rho u^{2}\mathbf{u}\Big)=-(\mathbf{u}\cdot\nabla)p+\rho\mathbf{u}\cdot\mathbf{g}. \tag{17}\]

Figure 16: Top: \(N^{2}\) and its compositional component (see Eq. 10) for dump 0, 2000, 4000, 6000. Bottom: radiative gradient, adiabatic gradient and actual temperature gradient for M252 (10000x) for the same dumps. The location of the SB is denoted by the thick vertical line around 1415 Mm which does not move much during the simulation. All profiles are computed from averages over 100 dumps.

Without any assumption so far, we integrate the kinetic energy equation, Eq. 17, over a thin spherical shell between radius \(r\) and \(r+dr\) and determine its rate of change in time; together with horizontal averaging, this yields the shell-averaged kinetic energy equation (Eq. 19), and the same treatment of the entropy equation, Eq. 16, yields the shell-averaged entropy equation (Eq. 20). In fully developed turbulence, the kinetic energy supplied at the largest scales cascades to ever smaller scales until the viscous dissipation scale is finally reached.
In our simulations, this dissipation is carried out by numerical truncation error terms, some of which act like viscosity, but with different dependence upon the spatial scale of the motion, see Porter & Woodward (1994). The effectiveness of numerical methods like PPM in simulating turbulent flows in this fashion has been discussed at length and in detail, with many examples, in Grinstein et al. (2007). There has been much work on modelling and theories for turbulent dissipation for stellar convection, for example, Zahn (1989), Porter et al. (1998), Woodward & Porter (2006), Arnett et al. (2008). From the averaged kinetic energy equation, Eq. 19, the dissipation term can be deduced from the rest of the terms, \[-4\pi r^{2}\overline{\Phi}=\frac{\partial}{\partial t}\Big(\frac{1}{2}\rho u^{2}4\pi r^{2}\Big)-\overline{(\rho_{1}\mathbf{u}\cdot\mathbf{g}-\mathbf{u}\cdot\nabla p_{1})}\,4\pi r^{2}+\frac{\partial}{\partial r}\Big(\frac{1}{2}\rho u^{2}u_{r}4\pi r^{2}\Big). \tag{21}\]

Figure 19: Work of pressure/gravity per unit time per unit radial distance, rate of change in kinetic energy per unit radial distance, radial derivative for the total kinetic energy flux, dissipation implied by the kinetic energy equation, dissipation from turbulence model for three resolutions of 1000x simulations (top to bottom: M213, M207, M210) for dump 2200 averaged over 401 dumps. Inside the SB at about 1400 Mm there is little dependence of these values on grid resolution. This means that we can get a good measurement of the implied turbulent kinetic energy dissipation rate using only a modest grid.

Figure 18: Top: work by pressure gradient and gravity field per unit time per unit radial distance, rate of change in kinetic energy per unit radial distance, radial derivative of total kinetic energy flux, dissipation derived from the turbulent dissipation model, measured dissipation rate per unit radial distance, averaged over 399 dumps centered at dump 6000 of M252; bottom: time sequence of measured turbulent kinetic energy dissipation of M252.

Woodward & Porter (2006) estimate the turbulent dissipation as a function of density and turbulent kinetic energy density for homogeneous, isotropic turbulence, \[\frac{\partial E_{\rm turb}}{\partial t}=-A_{0}\frac{1}{L_{0}}\sqrt{\frac{2}{\rho}}E_{\rm turb}^{3/2} \tag{22}\] where \(L_{0}\) is the integral length scale, i.e. the scale containing most of the kinetic energy, \(A_{0}=0.51\) is a dimensionless parameter, \(E_{\rm turb}=\frac{1}{2}\rho u^{2}\), and \(u\) is the turbulent velocity. By inserting the spherical averages of density and velocity magnitude of M252 in Eq. 22 and using 1500 Mm here empirically as the spatial scale that contains most of the kinetic energy, we get an estimate of turbulent dissipation from the model. Fig. 18 presents the terms in the kinetic energy equation, including the dissipation rate implied by the simulation, obtained by assuming that all the measured terms plus this dissipation must add to zero, and it also shows the dissipation rate derived using the turbulence model. The core convection is not truly homogeneous, isotropic turbulence. However, its implied dissipation rate according to Eq. 21 agrees very well with the turbulent dissipation model. The same model for turbulent dissipation with a different factor has been reported in Frisch (1995) and Arnett et al. (2009). The agreement between the turbulent dissipation model Eq. 22 and the dissipation rate indirectly measured from the simulation is striking.
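As an illustration of how Eq. 22 is applied, the sketch below evaluates the model dissipation rate from spherically averaged density and velocity profiles and integrates it over the convective core. The profile arrays here are placeholders rather than M252 data; only the 1500 Mm energy-containing scale and \(A_{0}=0.51\) are taken from the text.

```python
import numpy as np

A0 = 0.51          # dimensionless constant of the dissipation model (Eq. 22)
L0 = 1.5e11        # cm; 1500 Mm taken as the energy-containing scale

def turbulent_dissipation_rate(rho, u, A0=A0, L0=L0):
    """Dissipation rate per unit volume, -dE_turb/dt = A0/L0 * sqrt(2/rho) * E_turb^{3/2},
    with E_turb = 0.5*rho*u**2 (Woodward & Porter 2006)."""
    E_turb = 0.5 * rho * u**2
    return (A0 / L0) * np.sqrt(2.0 / rho) * E_turb**1.5

# Placeholder spherical averages standing in for simulation radial profiles.
r   = np.linspace(1.0e10, 1.4e11, 100)     # cm, inside the convection zone
rho = 1.0e1 * np.exp(-r / 2.0e11)          # g/cm^3
u   = 1.0e6 * np.ones_like(r)              # cm/s

phi = turbulent_dissipation_rate(rho, u)           # erg cm^-3 s^-1
total = np.trapz(phi * 4.0 * np.pi * r**2, r)      # erg/s, integrated over the core
print(f"total model dissipation ~ {total:.3e} erg/s")
```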
Note that the turbulent dissipation model does not apply above the CB, where, by our definition of the CB, the net convective entropy flux becomes essentially zero and any motions are no longer turbulent. We therefore do not apply the turbulent dissipation model at the CB and beyond. The dissipation in the convection zone is a result of the turbulent cascade only. This is confirmed by the dissipation from the simulation decreasing smoothly to zero at around 1835 Mm in Fig. 18. The dissipation of kinetic energy implied by the simulation is negligible in the radiative envelope. The numerical viscosity of the PPMstar method scales with the cube of the cell width \(\Delta x\) (Porter & Woodward, 1994). Each 1.5x grid refinement implies a decrease of numerical viscosity by a factor of 3.375. Hence, the results plotted in Fig. 19 show that the actual dissipation of kinetic energy is independent of the numerical viscosity, at least on a grid equal to or finer than \(768^{3}\) for our simulation. As long as the convection is fully turbulent and there is an effective mechanism to dissipate kinetic energy on the smallest scales, we will end up with statistically the same dissipation rate.

#### 4.3.2 Verification of turbulent dissipation measurement

Three simulations are performed, restarted from a late dump (dynamical equilibrium already established) of the 1000x heating and 1000x radiative diffusion cases with 3 resolutions (M213, M207 and M210). Volume heating and radiative diffusion are turned off from the beginning of these three new runs. The intent is to measure the decay rate of the kinetic energy in the convective core, which should be the same as the turbulence dissipation rate. The kinetic energy per unit volume is plotted about every 8.5 hours in Fig. 20. Before the nuclear heating is removed, we have a slightly convectively unstable stratification. The unstable stratification continues driving the convection for a short while before it is eliminated. Hence, the decay of kinetic energy is barely noticeable in the first couple of dumps. The total decay rates of kinetic energy are estimated from the first 60 hours to be 20.5%, 17.8% and 19.3% (from low to high resolution) of the luminosity. Again, we do not see kinetic energy dissipated in the stable envelope. From the rundown experiment with the three resolution cases, we conclude that PPMstar converges on the behavior of the decaying kinetic energy in the context of no driving. Hence, again we see the dissipation rate is insensitive to the numerical viscosity. Because the initial slight super-adiabatic gradient converts gravitational potential energy to kinetic energy during the first few dumps, which acts as a source that affects the rate of change in kinetic energy, the change in kinetic energy is not caused by turbulent dissipation alone. In this respect convection differs from the case of uniform turbulence, where we can make a clean measurement of the dissipation just by stopping the stirring of the flow. Similarly, we obtain the turbulent dissipation for a case with a higher luminosity enhancement factor by measuring the rate of change in kinetic energy directly from a simulation that begins from a late-time data dump of our long-time run M252. This 10000x heating simulation was restarted from 20062 hours with the volume heating and radiative diffusion turned off. The motion in the convective core dies down rapidly.
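The decay-rate percentages quoted above come from the time series of the total kinetic energy in the convective core. A minimal sketch of that measurement is given below; the kinetic-energy history and luminosity used here are synthetic placeholders rather than actual run data.

```python
import numpy as np

L_BOOSTED = 3.0e41   # erg/s; placeholder for the boosted luminosity of the run

def decay_rate_fraction(t_hours, E_kin_total, luminosity=L_BOOSTED, window=60.0):
    """Mean rate of decrease of the total kinetic energy over the first `window`
    hours of a rundown run, expressed as a fraction of the driving luminosity."""
    mask = t_hours <= t_hours[0] + window
    t = t_hours[mask] * 3600.0                  # s
    E = E_kin_total[mask]                       # erg
    slope = np.polyfit(t, E, 1)[0]              # erg/s, linear fit to E(t)
    return -slope / luminosity

# Synthetic kinetic-energy history: exponential decay standing in for dump data.
t_h = np.arange(0.0, 120.0, 8.5)                       # hours between dumps
E0, tau = 1.0e46, 25.0 * 3600.0                        # erg, s (illustrative only)
E_k = E0 * np.exp(-t_h * 3600.0 / tau)
print(f"decay rate ~ {100.0 * decay_rate_fraction(t_h, E_k):.1f}% of L")
```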
The dissipation rate is measured from 20062 hours to 20082 hours assuming that it is constant in time at each radius. The decay rate of kinetic energy and the dissipation rate predicted by the turbulent model using 1500 Mm as \(L_{0}\) are shown in Fig. 20. Because we have started this run-down experiment from a very late time, we get an indication in Fig. 20 of the behavior of the dissipation rate in the penetration region beyond the SB when the flow is close to thermal and dynamical equilibrium.

#### 4.3.3 Reduced entropy equation

Now we proceed to investigate the reduced entropy equation to see if it leads to useful 1D modelling that has predictive power on whether the star is in equilibrium or how big the convective penetration region should be. There is no approximation in deriving Eq. 20. The entropy equation simply states that the rate of change of entropy in a spherical shell is the sum of turbulent dissipation of kinetic energy, heating and cooling of nuclear burning and radiative diffusion, and the advective flux of entropy. To further simplify, we assume the divergence of the radiative diffusion is mostly radial, which is not strictly true because the adiabatic motion will heat or cool fluid parcels, and then the heat flux can have a non-zero horizontal component. The second term on the right-hand side then becomes \[\overline{\frac{1}{T}}\ \frac{\partial(\Gamma_{r}-F_{r})4\pi r^{2}}{\partial r}\equiv\overline{\frac{1}{T}}\ \frac{\partial(F_{\rm nuc}-F_{\rm rad})}{\partial r}.\] The advective entropy flux mostly cancels with the nuclear heating plus the radiative flux, leading to a negligible time rate of change in the entropy (see Fig. 21). The dissipation per unit radial distance is small compared to the derivative of the advective entropy flux multiplied by the temperature and to the \(\partial(F_{\rm rad}-F_{\rm nuc})/\partial r\) term, but not small compared to the measured time rate of change in the entropy. Of course, for a simulation in perfect equilibrium the latter would be zero.

Figure 21: The gradient of energy flux from nuclear heating and radiative diffusion, gradient of the advective entropy flux multiplied by temperature, rate of change in entropy multiplied by temperature, dissipation of kinetic energy measured at dump 6000 of run M252. Note the small role played by turbulent kinetic energy dissipation relative to the other terms in the entropy equation.

Figure 20: Top: time sequence of kinetic energy density every 8.5 hours for the rundown experiments of M210: \(1728^{3}\), M207: \(1152^{3}\) and M213: \(768^{3}\) (upper, medium and lower groups of lines) where the medium and lower resolutions have been translated downward by 2 and 4, for easy visual comparison; Middle: time sequence of kinetic energy density every 8.5 hours for the rundown experiment of the M252: \(10000\)x case; bottom: average decrease rate of kinetic energy from 20062 to 20082 hours measured from the rundown simulation of M252 and predicted by the model using the flow velocity at 20062 hours. Note that essentially no dissipation is seen outside the convection zone.

As was pointed out in §3.2.4, the superadiabaticity changes its sign at 1000 Mm. Beyond that location, the stratification is slightly convectively stable, but there is still convection. The convective motions are not significantly damped by the tiny amount of subadiabaticity between 1000 Mm and the CB until the gas is subjected to a significant stabilizing force in the penetration region beyond the SB.
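The near-cancellation of terms shown in Fig. 21 can be checked numerically from radial profiles. The following is a schematic sketch of that bookkeeping for the shell-integrated entropy budget; it treats averages of products as products of averages, and it expects spherically averaged profiles to be supplied by the caller (no simulation data are bundled here).

```python
import numpy as np

def entropy_budget_residual(r, T, F_nuc, F_rad, phi, adv_entropy_flux):
    """Residual of the shell-integrated entropy budget per unit radial distance.

    Inputs are spherically averaged radial profiles:
      adv_entropy_flux : 4*pi*r^2 * rho * S * u_r   (advective entropy flux)
      phi              : turbulent dissipation rate per unit volume
      F_nuc, F_rad     : integrated nuclear and radiative energy fluxes
    Returns the implied  T * d(rho*S)/dt * 4*pi*r^2 , which should be close to
    zero when the stratification is relaxed (compare the terms in Fig. 21)."""
    adv_term  = T * np.gradient(adv_entropy_flux, r)     # T * dF_S/dr
    heat_term = np.gradient(F_nuc - F_rad, r)            # d(F_nuc - F_rad)/dr
    diss_term = phi * 4.0 * np.pi * r**2                 # dissipated KE as heat
    return -adv_term + heat_term + diss_term
```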
The time rate of change in kinetic energy and entropy is very small compared to the source terms and the flux terms. This by no means implies, however, that the star is in thermal equilibrium, because small changes over long times are seen in our long duration run M252 to have significant effects.

#### 4.3.4 Accelerating stellar evolution by enhancing luminosity and radiative diffusion

In the work reported in Paper I and in earlier work, we have applied luminosity enhancement to increase the fluid velocity. The convective velocity is greater with luminosity enhancement (Fig. 7). We then extrapolate the entrainment rate to nominal heating using the scaling relation we observe in a series of runs with different luminosity enhancements. Enhancing the radiative diffusion accelerates the thermodynamical evolution. Hence, enhancing both the luminosity and diffusion by the same multiplicative factor accelerates both convection and thermal adjustment. Comparing the vertical scales of Fig. 19 and Fig. 18 suggests that the terms in the kinetic energy equation scale linearly with the boosting factor. The scaling of convective velocity with luminosity (Fig. 7) and the turbulent dissipation model Eq. 22 also imply that the turbulent dissipation scales linearly with the luminosity enhancement. Hence, the turbulent dissipation, \(\mathbf{F}\), and \(\mathbf{\Gamma}\) in Eq. 20, all scale linearly with the boosting factor \(L/L_{*}\). The time rate of change in entropy is driven to become very small on the thermal timescale if the star is nearly thermally relaxed. Then the entropy flux has to scale with the boosting factor, as the rest of the terms in the entropy equation do, and their sum, the time rate of change of entropy, is nearly vanishing. This behavior is also demonstrated by the simulation, as can be seen in Fig. 21. Hence, the rates of change with time of kinetic energy and entropy are small when the stratification is close to equilibrium, and this implies that the rest of the terms in the kinetic energy and entropy equations must scale with the enhancement factor. Naturally, we would hope the rate of change with time also scales linearly with the enhancement factor, so that we can reasonably accelerate our simulations by boosting luminosity and thermal conductivity by the same factor. From the perspective of simulation, we present evidence here that stellar thermal evolution can be accelerated by enhancing the luminosity and the radiative diffusion by the same factor. The accelerated evolution is meaningful only if the simulation arrives at a similar stratification within a shorter time. We see in the bottom panel of Fig. 22 the convective and radiative fluxes of the two higher luminosity runs plotted at times inversely proportional to their enhancement factors \(X\). These fluxes agree very closely, which is consistent with our intent that we can speed up the flow evolution by enhancing luminosity. Fig. 22 demonstrates that boosting the luminosity by a factor of 3.162 accelerates the evolution by a factor of about 3.6, after the initial CB profile has been eaten away and the CB has later been re-established as a spherical average of a dynamic 3D flow.

Figure 22: \(N^{2}\) of M250 (3162x) and M252 (10000x) at three different times when they proceed to the same location; Bottom: total radiative heat flux of M252 (10000x) and M250 (3162x). Considering that \(N^{2}\) depends upon the local entropy _gradient_, it is remarkable how similar the results of these two runs are at these times, especially considering the more than 3 times greater computational cost of the M250 results.
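The statement that two boosted runs reach a similar stratification at times inversely proportional to \(X\) can be checked by comparing radial profiles at rescaled times. The sketch below does this with synthetic stand-in profiles; a real check would interpolate radial-profile dumps of two runs such as M250 and M252.

```python
import numpy as np

def rescaled_time_comparison(profile_a, profile_b, t_a, X_a, X_b):
    """Compare run A at star time t_a with run B at the rescaled time
    t_a * X_a / X_b, assuming the evolution speeds up linearly with the
    enhancement factor X.  Returns the rescaled time and the maximum
    relative difference of the two radial profiles."""
    t_b = t_a * X_a / X_b
    fa, fb = profile_a(t_a), profile_b(t_b)
    return t_b, np.max(np.abs(fa - fb)) / np.max(np.abs(fa))

# Synthetic stand-ins for measured flux profiles (purely illustrative).
r = np.linspace(1400.0, 2000.0, 200)                                   # Mm
profile_10000x = lambda t: np.tanh((r - 1800.0 - 0.01 * t) / 50.0)
profile_3162x  = lambda t: np.tanh((r - 1800.0 - 0.01 * t * 3162.0 / 10000.0) / 50.0)

t_b, err = rescaled_time_comparison(profile_10000x, profile_3162x,
                                    t_a=240.0, X_a=10000.0, X_b=3162.0)
print(f"compare against the slower run at t = {t_b:.0f} h; max rel. diff = {err:.2%}")
```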
The rate of thermal relaxation, with the mixing as one of the processes, does not necessarily scale precisely with the luminosity. This can be observed by comparing entrainment rates for M207 and M209 at 1000x and 3162x enhancement (Table 1). The ratio of their entrainment rates is \(1.36\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}/0.369\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\sim 3.69\). We have shown in Fig. 16 that the outward motion of the convective boundary, as indicated by the BV frequency peak location, slows greatly at late times in our run M252 with the luminosity and radiative heat conduction both boosted by a factor of 10,000. We have also argued that the simulation data suggest that the slowing of this outward motion occurs as a result of the development of a thin region just inside the compositional jump at the boundary in which there is a positive entropy gradient without any significant compositional change. This entropy gradient is maintained despite convective mixing by the local heat deposition caused by the negative radial gradient of the radiative heat flux as it approaches the full stellar luminosity value from above. The volume-rendered vorticity images in Fig. 23 give us a sense of how the convection flow changes as it approaches equilibrium. At early times, as seen in the image at top left, the dipole circulation pattern has the upwelling flow strike the composition jump at the convective boundary, become deflected, and travel along the boundary for a considerable distance. The snapshots in Fig. 23 cannot convey it, but indeed at early times the dipole circulation maintains its orientation and flows along the convective boundary for significant amounts of time before this orientation changes and the upwelling strikes the boundary in a different location. The dipole direction wanders constantly, but at early times there is a continual transport of gas from above the convective boundary, with this gas collecting at the location where the opposing flows along the boundary meet and plunge back toward the center of the star (about 5 o'clock in the top-left image in Fig. 23). At the much later times shown in the three later images in Fig. 23, the dipole circulation pattern is changing its direction more rapidly, and it is causing the upwelling gas to actually meet the compositional change at the convective boundary only briefly and occasionally. This means that the convection flow is far less effective at these late times in bringing stably stratified gas into the convection zone. In fact, the flow at this time is much more like the standard picture of rising plumes overshooting or penetrating into a stably stratified region. In this core convection flow, we have mainly a single "plume" provided by the dominant dipole circulation, but it does appear to overshoot from time to time and at constantly varying locations along the convective boundary.
This feature of the core convection flow suggests that for a convective shell, where the largest significant eddies are not global in scale, we might well expect the same forces working on the flow as it approaches equilibrium to produce the phenomenon of occasional identifiable plumes overshooting into a region of a stable entropy gradient, as has been observed in the simulations of Baraffe et al. (2017); Pratt et al. (2017, 2020); Korre and Featherstone (2021).

## 5 How to find a convection zone with penetration in equilibrium

The jumps in \(f_{\mathrm{V}}\) and \(|U|\) are shown in the panels of Fig. 24, and Fig. 25 shows the advective entropy flux and entropy profiles, with the CB location indicated by the location of vanishing flux. We see that at the CB, the fluid motions essentially stop, even though they do change character as we approach the CB within the thin region of the sharp entropy jump, as has been noted in Paper I. In Fig. 25, we show data from run M252, our case with \(X=10000\). Even at the last time shown, the CB is still moving outward, but it has clearly slowed considerably. From §4, discussing the simulation on the thermal timescale, we find that (1) penetrative convection develops above the SB (Fig. 25), (2) the entrainment decreases significantly at later times as the penetration region develops (Fig. 25 and Fig. 26), (3) there is a positive entropy gradient that develops in the penetration region (Fig. 25), (4) the convection is slowed down by the positive entropy gradient before it reaches the compositional gradient, which suppresses the entrainment significantly. We note that the \(f_{\mathrm{V}}\) jump acts as a relatively hard barrier to the convection flow at every stage of the enlargement of the penetration zone, and it is undeniably dynamically important. It continues to move outward until the return of the overshooting radiation diffusion heat flux to the full luminosity is able to heat the gas in the penetration zone sufficiently to nearly arrest its outward motion. In the near equilibria shown in Fig. 27 it is clear from the curves plotted for the convective entropy flux that very little convective transport is still happening at the latest times shown in the range of radii in which \(f_{\mathrm{V}}\) increases rapidly. All these observational properties suggest that our model star is approaching equilibrium asymptotically. A natural question to ask is whether or not we can find the ultimate equilibrium state, a convective core with penetrative convection in thermal equilibrium, that our simulation is approaching. At present, we cannot do this simply by running our simulation further in time. However, we would like to explore the question of whether we can predict what the final equilibrium state is likely to be.

Figure 23: Four views of the vorticity magnitude in the far hemisphere of a non-rotating main-sequence stellar model of 25 solar masses are shown (M252, 10000x \(L_{\star}\) & \(K_{\star}\)). The simulation was computed on a grid of 896\({}^{3}\) cells, and in this simulation the luminosity of the star was boosted by a factor of 10000 in order to speed its approach to a dynamical and thermal equilibrium state. The image at the top left shows the star at time 12.06 days, when the dipole circulation pattern characteristic of core convection has become well established. The other three views of this same stellar model are shown at a much later time, when the flow has developed a much larger region of penetrative convection above the SB.
Going clockwise from the top-right, these three later views are at times 732.41, 732.76, and 737.66 days. In the early flow, we see that the classic core-convection dipole circulation hugs the CB closely over about a quarter of the extent of this circle. The flow separates from the CB where the prominent shear layers, marked by very strong vorticity (shaded yellow), bend inward from the boundary. At top right, we see the flow much, much later. The convection zone has expanded substantially, and the dipole circulation "contacts" the CB only along a very small segment, from which it immediately separates. Just 0.35 days later, at bottom right, the dipole circulation has left the CB entirely, leaving a thin layer of somewhat higher entropy gas between it and the boundary. In the image at the bottom left, despite the vigor of the dipole circulation flow, we see no contact with the CB, but we do see, at about 2 o'clock, a strong gravity wave interfacial mode propagating along the CB, with a node in its flow pattern right at the CB radius.

Our model of the convection zone identifies the thin, higher-entropy layer of convection zone gas right next to the CB as a key feature of this near-equilibrium penetrative convection structure. This layer is generated by local heating from a declining radiative diffusion heat flux that is approaching the total luminosity in this region from above. Our analysis indicates that the size of the convection zone, the location of the SB, and the structure of the penetration region do not change with the luminosity boost factor.

### Roxburgh criterion

The evolution of our run M252, with \(X=10000\), shows us that the convection flow, given sufficient time, develops a significant penetration region beyond the SB. In this region the total radiative flux exceeds the total nuclear heating rate, and this excess is compensated for by a negative convective flux. For carrying out 1-D stellar evolution simulations, we would like to have a model of convection that includes such a penetration region and that relates its extent to other parameters of the problem. Anders et al. (2022) addressed this problem by relating their simplified convection model to the arguments made by Roxburgh (1989). Roxburgh (1992, 1989) developed a constraint that a convection zone in an equilibrium state should satisfy. This constraint, known as the Roxburgh criterion, reads \[\int_{0}^{r_{c}}(F_{\rm rad}-F_{\rm nuc})\frac{1}{T^{2}}\frac{dT}{dr}dr=0 \tag{23}\] where \(F_{\rm rad}(r)\) is the total radiative energy flux and \(F_{\rm nuc}(r)\) is the total nuclear energy flux at radius \(r\). It is a volume integral over the convective core bounded by the CB, at radius \(r_{c}\), derived from the time evolution equation for the entropy. In brief, the entropy equation is integrated over the spatial extent of the convection zone, which in our case is from the center of the star out to the CB. Roxburgh assumed that the vector velocity vanishes everywhere on the spherical surface that we call the CB. Roxburgh also assumed time-stationary convection, and of course our continued, although small, entrainment invalidates this assumption.

Figure 24: Spherical average of convective velocity and \(f_{\rm V}\), averaged over 200 dumps, and over 20 radial points (equivalent to 116 Mm) twice, of M252 (10000x \(L_{\star}\) & \(K_{\star}\)) at dump 0, 2000, 3000, 4000, 5000, 6000. The location of the SB is denoted by the thick vertical line around 1415 Mm which does not move much during the simulation. The CB for all dumps except dump 0 are vertical lines at 1780 Mm, 1850 Mm, 1880 Mm, 1900 Mm, 1910 Mm. The CBs are identified by the locations where the total convective entropy flux levels off, see Fig. 25.
Figure 25: For run M252, spherical average of total entropy flux averaged over 200 dumps, and of entropy per unit mass, averaged over 200 dumps, and over 20 radial points twice, at dump 2000, 3000, 4000, 5000, 6000.

Nevertheless, it is possible that the Roxburgh constraint Eq. 23 is nearly satisfied for our runs. We examine the integral of the entropy equation without the assumption of being dissipationless (Eq. 24), by inserting the radial profiles of temperature, entropy, measured turbulent dissipation rate (by subtracting the sum of all other terms in the kinetic energy equation from zero) and radiative flux of M252, \[\int_{0}^{r_{c}}(F_{\rm rad}-F_{\rm nuc})\frac{1}{T^{2}}\frac{dT}{dr}dr=\int_{V}\frac{\Phi}{T}dV \tag{24}\] where \(\Phi\) is the turbulent dissipation rate. The development of penetration and the resulting extended region of turbulence indicate that the convection is driven towards equilibrium but is not there yet (compare the two sides of Eq. 24 in Table 2). Even so, illuminated by the evolutionary trends of M252, we can devise a means of producing the final equilibrium state for this particular star, which we set out in §5.3.

### Entropy flux

We hope to devise a method to find the equilibrium convection zone structure that can be used in conjunction with 1D stellar evolution codes. Because \(u\cdot\nabla p_{1}\) is inherently 3D in nature, we choose to work with the entropy equation, reducing it to 1D with assumptions that might be only minimally violated in 3D. Our simulations are initialized from the 1D stratification specified by MESA. The 1D MESA model is in hydrostatic and thermal equilibrium, because the nuclear timescale is much greater than either the thermal timescale or the dynamical timescale. By assuming a time-stationary state and spherical symmetry, because we are interested in the ultimate equilibrium state, and ignoring non-radial radiative heat flux, the entropy equation for every spherical shell becomes \[\frac{\partial(\rho\overline{S}u_{r}r^{2})}{\partial r}4\pi=\frac{\overline{\Phi}}{T}4\pi r^{2}+\frac{1}{T}\frac{\partial(F_{\rm nuc}-F_{\rm rad})}{\partial r}. \tag{25}\] In this equation we use the assumption of a steady state with zero time derivative to enable us to solve for the convective flux term on the left. This term would be quite difficult to obtain through modeling, using mixing length arguments, but the terms on the right in this equation, at least the second such term, are easy to pin down in a 1-D computation. The nuclear heating input and the radial temperature and opacity structure come directly out of a 1-D simulation. The kinetic energy dissipation rate, which appears here as a heat source, does not. However, we have seen that we can solve for it using the kinetic energy equation in a similar fashion, as we will explain below.

### A procedure to compute the extent of the penetration zone

In discussing the long-time behavior of our simulations of core convection, we found it notable that as the convective penetration region beyond the SB was very slowly extended outward, with the accompanying ingestion of stably stratified gas from the radiative envelope, the structure of the convection flow inside the SB radius changed hardly at all. This can be seen in Fig. 17 and Fig. 18.
It is reasonable that this should be the case, because the region of convective penetration is not so very large in comparison to the convection zone as a whole. The extent to which the structure inside the SB does not change over the course of our long 3-D run M252 can also be assessed by examining the plots in Fig. 24 and Fig. 25. In these plots, the radii shown begin just inside of the SB, but it is nevertheless clear that for all the times shown the behavior near the SB is very closely the same. This insensitivity of the flow structure inside the SB to the development of the convection structure outside of it allows us to pin down the kinetic energy dissipation rate, shown in Fig. 18, inside the SB in a practical manner.

\begin{table} \begin{tabular}{l r r r} \hline \hline Dump & \(\int_{0}^{r_{c}}(F_{\rm rad}-F_{\rm nuc})\frac{1}{T^{2}}\frac{dT}{dr}dr\) & \(\int_{V}\frac{\Phi}{T}dV\) & \(r_{f_{V}=0.5}\)/Mm \\ \hline 2000 & 0.280 & 0.225 & 1800 \\ 4000 & 0.258 & 0.216 & 1872 \\ 6000 & 0.242 & 0.213 & 1950 \\ \hline \hline \end{tabular} \end{table} Table 2: Numerical values of the Roxburgh integral derived from spherical averages, and the temperature-weighted integral of the dissipation measured from the kinetic energy equation of run M252. Both are in units of \(10^{18}\) g Mm\({}^{2}\) s\({}^{-3}\) K\({}^{-1}\).

Figure 26: Entrained mass as a function of time of M252. Over the course of this long simulation, the entrainment rate has dropped by about a factor of \(\sim 17\), although it has not fallen to zero when the simulation was stopped. The entrainment rates are measured from dump 1000 to 2000, and from dump 5000 to 6000 (durations of 278 h).

We determine the unknown dissipation rate, \(\Phi(r)\), by performing a short-duration 3-D convection simulation on a modest grid. From the data shown in Fig. 19, we see that, at least with PPMstar, a grid of \(768^{3}\) cells is sufficient. We begin with the 1-D structure produced by a stellar evolution code, use it to initialize our 3-D grid, and then run the simulation through its initial transient adjustment to 3 dimensions. After producing enough data to give us good time averages over multiple large eddy turn-over times, we can stop the run and measure the time-averaged radial profiles of the terms in the kinetic energy equation, which are shown in Fig. 19. If we have carried the run far enough in time, we will be able to verify that the time rate of change of the kinetic energy at each radius is nearly zero, as can be seen in Fig. 19. We can then use this vanishing of time derivatives to solve for the kinetic energy dissipation rate, \(\Phi(r)\), using Eq. 21. This procedure allows us to build a first-principles model of the convection zone with its penetration region without involving parameters that would need to be calibrated against observations. One might think that pinning down our 1-D model of the convection zone by solving a 3-D flow problem invalidates the advantage of a 1-D model. This might be true if we had to perform 3-D simulations often in order to carry a stellar model from the zero-age main sequence through its lifetime as a star. But this is unlikely to be the case. We can parameterize \(\Phi(r)\) and match that form to the measured result from a 3D simulation only at wide intervals in time during the star's evolution. A very simple model would relate \(\Phi(r)\) to \(|u(r)|\) through the turbulence model formula, Eq.
22, get \(|u(r)|\) from the standard MLT estimate taken at a radius well inside the SB, and apply that value everywhere inside the SB. Fitting this to the 3-D simulation results would involve some constant of order unity, which could be checked and rechecked at regular time intervals against new 3-D simulations. On modern computing equipment, 1-D stellar evolution simulations cost very little, so that adding to that cost a few modest 3-D simulations should not be a problem. In any case, a table of results could in principle be generated and made available over the Internet.

Figure 27: Top and middle: convective/radiative energy flux with \(f_{\rm V}\) and entropy gradient normalized to 1 at dump 0, 2000, 4000, 6000 (0, 5729, 11458, 17188 h). Bottom: total entropy flux and the measured dissipation rate per unit radial distance. All are averaged over 200 dumps, and over 20 radial points twice except for dump 0.

With the convection flow pinned down inside the radius of the SB, our challenge remains to extend it outward to the CB. To do this, we can be guided by the long-time behavior that we have described in the previous section. In Fig. 24 and Fig. 25 we see how the convection flow slowly develops as it approaches an equilibrium structure. First, Fig. 18 shows how \(\Phi(r)\) changes during this process. Its value at the SB stays roughly constant in time, while its value at the CB is, of course, zero. In addition, we can see that demanding that \(\partial\Phi/\partial r\) vanish both at the SB and at the CB is a very reasonable constraint. We may then approximate \(\Phi(r)\) between the SB and CB as a cubic polynomial, in which case it is uniquely determined. To get this function to approach zero more gradually, we could additionally demand that its second radial derivative vanishes at the CB. Then we would need to make this function a quartic polynomial. In this region, \(\Phi(r)\) will be small relative to other terms, but it will nevertheless play an important role, because it is never negative. The terms of the entropy equation are plotted in Fig. 21. In the absence of a long-duration 3-D simulation, it would be very hard to guess the functional form of \(-4\pi T\partial(r^{2}\rho Su_{r})/\partial r\), which is plotted in Fig. 21. However, we know that in the absence of a time derivative of \(\rho S\), this gradient of the convective entropy flux must cancel the sum of \(\Phi\) and the gradient of the radiative heat conduction flux (Eq. 25). We have a model for \(\Phi\), and if we can provide a model for \(\partial(F_{\rm nuc}-F_{\rm rad})/\partial r\), then we can simply solve for the convective entropy flux using Eq. 25. We know the value of \(F_{\rm rad}\) and its first 2 derivatives at the SB, and we know that \(F_{\rm rad}\) must equal \(L\), by definition, at the CB. Looking at the results of our simulation shown in Fig. 25, it seems reasonable to assume that the first two radial derivatives of \(F_{\rm rad}\) vanish at the CB. If we demand that \(F_{\rm rad}\) be a quintic polynomial between the SB and the CB, these 6 constraints determine it uniquely. This determination then is enough to allow us to solve for the convective entropy flux term, \(-4\pi T\partial(\overline{r^{2}\rho Su_{r}})/\partial r\), between the SB and CB. Of course, to do this, we must first guess a location for the CB. We cannot just make any guess for the CB location. Our model assumptions are sufficient to determine the terms in the entropy equation, Eq.
20, in its steady-state equilibrium form, which is Eq. 25. We need one further condition in order to distinguish the CB radius from all the possible guesses for it that we might make. The final constraint that we need to impose in order to fully determine our 1-D model of the full convection zone is the demand that the convective entropy flux at the CB must vanish. To check whether this constraint is satisfied for any particular choice of the CB location, we can begin at the central fluid state at \(r=0\) that we get from the 1-D stellar evolution code, and we can integrate outward in radius along an adiabat to the SB, remaining in hydrostatic equilibrium at each step. In this process, we keep the composition, \(f_{\rm V}\), constant, because the convection zone is well mixed inside the SB. At each radius, the isentropic, hydrostatic fluid state determines both \(F_{\rm nuc}\) and \(F_{\rm rad}\). Because we know \(\Phi(r)\), as described above, we may solve for \(-4\pi T\partial(\overline{r^{2}\rho Su_{r}})/\partial r\). This convective entropy flux gradient allows us to determine the flux itself, \(\overline{\rho Su_{r}}\), at our radius, if we have produced it at the previous radial step outward. In this process, we can determine the full fluid state and also the flux \(\overline{\rho Su_{r}}\) at the SB. The conditions of our model, as just described above, are sufficient to carry this integration process onward to the CB if (1) we know the CB location and (2) we know \(f_{\rm V}(r)\) between the SB and the CB. We know that \(f_{\rm V}\) cannot be constant in this region, because it must be unity at the CB. Efficient convective mixing keeps \(f_{\rm V}\) at a constant value below unity inside the SB, but inside the penetration region this mixing efficiency must be diminished progressively as the CB is approached. We appeal to the results of our 3-D simulation shown in Fig. 24 and Fig. 25 to produce a model for \(f_{\rm V}(r)\). We focus particularly on the later times shown in those figures. It is clear that the sudden, but still smooth jump in \(f_{\rm V}\) comes right at the end of the penetration region, and this jump is complete at the CB. It is also clear that there is a substantial smooth rise in entropy, \(S\), immediately before the jump in \(f_{\rm V}\) begins. That rise in entropy is caused by local heating due to the return of the radiation heat conduction flux, \(F_{\rm rad}\), to the full luminosity value, \(L\). We will assume, and it is of course an assumption rather than an established fact, that the jump in \(f_{\rm V}\) begins at the point where this local heating rate has its maximum, hence where \(\partial^{2}F_{\rm rad}/\partial r^{2}\) vanishes and also \(\partial F_{\rm rad}/\partial r\) is negative. Between this point, which we call \(r_{\rm foot}\), and the CB, at \(r_{\rm CB}\), we assert that the shape of \(f_{\rm V}(r)\) is that of the sine wave between values of its argument of \(-\pi/2\) and \(\pi/2\). Thus, we have the expression for \(f_{\rm V}(r)\) as \[f_{\rm V}(r)=\tfrac{1}{2}\Big[1+\sin\Big(\pi\frac{r-r_{\rm foot}}{r_{\rm CB}-r_{\rm foot}}-\tfrac{\pi}{2}\Big)\Big]\big(1-f_{\rm V}(r_{\rm SB})\big)+f_{\rm V}(r_{\rm SB}) \tag{26}\] where \(r_{\rm foot}<r<r_{\rm CB}\).
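A small helper implementing Eq. 26 makes the assumed shape explicit. The numbers in the example call below are illustrative, loosely resembling the late-time jumps in Fig. 24 rather than being taken from them.

```python
import numpy as np

def f_V_profile(r, r_foot, r_cb, fV_sb):
    """Volumetric mixing fraction f_V(r) in the penetration region, Eq. 26:
    constant at its convection-zone value up to r_foot, then a half sine-wave
    rise to 1 at the convective boundary r_cb."""
    r = np.asarray(r, dtype=float)
    s = 0.5 * (1.0 + np.sin(np.pi * (r - r_foot) / (r_cb - r_foot) - 0.5 * np.pi))
    fV = fV_sb + s * (1.0 - fV_sb)
    return np.where(r < r_foot, fV_sb, np.where(r > r_cb, 1.0, fV))

# Example call with illustrative radii (Mm) and an assumed convection-zone value:
r = np.linspace(1400.0, 2000.0, 7)
print(f_V_profile(r, r_foot=1690.0, r_cb=1910.0, fV_sb=0.14))
```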
Now it is possible to use a Newton iteration over the CB radius \(r_{\rm CB}\) to converge upon a model of the convection zone including its penetration region (a schematic numerical sketch of this construction is given below), and that model will satisfy the following assumptions and constraints:

* The central state of the gas is given by the central state produced by the 1-D stellar evolution code, with a composition, \(f_{\rm V}\), that reflects any additional entrained material from above the CB.
* The state of the gas inside the SB has the same entropy and composition as the state at the center, and it is in hydrostatic equilibrium.
* The kinetic energy dissipation rate, \(\Phi(r)\), inside the SB is determined from a short-duration, modest grid, 3-D simulation that has achieved dynamical equilibrium, in that the time derivative of the kinetic energy everywhere inside the SB essentially vanishes.
* Inside the SB, where by definition \(F_{\rm rad}=L\), both \(F_{\rm nuc}\) and \(F_{\rm rad}\) are determined by the fluid state.
* Inside the SB, the convective entropy flux, \(4\pi r^{2}\rho Su_{r}\), satisfies Eq. 25.
* Between the SB and the CB, \(\Phi(r)\) is the unique quartic polynomial continuous at the SB with vanishing radial derivative there and vanishing at the CB, with vanishing first two radial derivatives there.
* Between the SB and the CB, \(F_{\rm rad}(r)\) is the unique quintic polynomial that is continuous and twice continuously differentiable at the SB and which equals \(L\) with vanishing first two radial derivatives at the CB.
* Between the SB and the CB, \(F_{\rm rad}(r)\) has its minimum (negative) radial derivative at \(r=r_{\rm foot}\).
* Between \(r_{\rm foot}\) and the CB, the composition, \(f_{\rm V}\), rises smoothly from its constant value inside \(r_{\rm foot}\) to unity at the CB with the shape of a sine wave between its argument values of \(-\pi/2\) and \(\pi/2\), see Eq. 26.
* Between the SB and the CB, the convective entropy flux, \(4\pi r^{2}\rho Su_{r}\), satisfies Eq. 25, and it vanishes at the CB.

When stepping outward from the SB to the CB, we replace the isentropic condition we used to arrive at the SB with the demand that a hydrostatic fluid state must also have the prescribed radiative heat conduction flux. We have found such model solutions, starting from the state given by our M252 long-duration run at the three times shown in Fig. 24, and these model convection zone equilibrium structures are shown in Fig. 28. In Fig. 30 (top panel), we show the three model convection zone equilibrium structures that we arrive at starting with states in our three runs, M250, M251, and M252, when their peak values of \(\partial f_{\rm V}/\partial r\) are all located at nearly the same radius. These three runs have luminosity enhancement factors of 1000, 3162, and 10000, and we compare their implied equilibria at times 716, 202, 48 days. We see that the implied equilibria are very nearly identical, which supports our discussion of the linear scaling with luminosity enhancement factor of the terms in the kinetic energy and entropy equations. For the three implied equilibria to agree, we must also have the width and location of the compositional jump in \(f_{\rm V}\) agree.

Figure 28: Top: dissipation measured from M252 at dump 6000 and interpolated beyond the SB; bottom: entropy flux implied by the predicted hydrostatic equilibrium stratification using the measured dissipation inside the SB and the interpolated dissipation above the SB. The 1st prediction uses central density, entropy and \(f_{\rm V}\) and turbulent dissipation below the SB at dump 2000, 2nd at dump 4000, 3rd at dump 6000.
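The construction described above can be prototyped numerically. The sketch below builds the constrained polynomial continuations of \(\Phi(r)\) and \(F_{\rm rad}(r)\) above the SB and integrates Eq. 25 to a trial CB radius; one then iterates on \(r_{\rm CB}\) (Newton or bisection) until the convective entropy flux at the trial CB vanishes. All boundary values, the temperature profile, and the trial radii below are illustrative placeholders in arbitrary units, not data from our runs, and \(F_{\rm nuc}\) is simply set equal to \(L\) above the SB.

```python
import numpy as np
from math import factorial

def constrained_poly(r0, r1, bcs0, bcs1):
    """Coefficients c[j] of p(r) = sum_j c[j]*(r - r0)**j matching the prescribed
    value and derivatives bcs0 = (p, p', ...) at r0 and bcs1 = (p, p', ...) at r1."""
    n = len(bcs0) + len(bcs1)
    A, b, row = np.zeros((n, n)), np.zeros(n), 0
    for x, bcs in ((0.0, bcs0), (r1 - r0, bcs1)):
        for k, val in enumerate(bcs):
            for j in range(k, n):
                A[row, j] = factorial(j) / factorial(j - k) * x ** (j - k)
            b[row] = val
            row += 1
    return np.linalg.solve(A, b)

def peval(c, r, r0):
    """Evaluate the polynomial with coefficients c at radius r."""
    x = np.asarray(r, dtype=float) - r0
    return sum(cj * x ** j for j, cj in enumerate(c))

def entropy_flux_at_cb(r_cb, r_sb, fs_sb, phi_sb, frad_sb_bcs, L, T_of_r, npts=400):
    """Integrate Eq. 25 from the SB to a trial CB and return the convective
    entropy flux there.  F_nuc = L above the SB; T(r) is supplied by the caller."""
    # Phi: quartic; value phi_sb and zero slope at the SB; zero value and zero
    # first two derivatives at the CB.
    c_phi = constrained_poly(r_sb, r_cb, (phi_sb, 0.0), (0.0, 0.0, 0.0))
    # F_rad: quintic; value and first two derivatives prescribed at the SB;
    # equals L with vanishing first two derivatives at the CB.
    c_frad = constrained_poly(r_sb, r_cb, frad_sb_bcs, (L, 0.0, 0.0))
    r = np.linspace(r_sb, r_cb, npts)
    T, phi, frad = T_of_r(r), peval(c_phi, r, r_sb), peval(c_frad, r, r_sb)
    # dF_S/dr = 4*pi*r^2*Phi/T + (1/T)*d(F_nuc - F_rad)/dr, with dF_nuc/dr = 0 here.
    dfs_dr = 4.0 * np.pi * r**2 * phi / T - np.gradient(frad, r) / T
    return fs_sb + np.trapz(dfs_dr, r)

# --- Illustrative inputs in arbitrary units (placeholders, not run data) ---------
r_sb, L = 1415.0, 1.0                      # Mm, total luminosity
frad_sb_bcs = (1.0, 4.0e-3, 0.0)           # F_rad and its first two derivatives at SB
phi_sb, fs_sb = 1.0e-10, 0.0               # dissipation and entropy flux at the SB
T_of_r = lambda r: 1.0 - 2.0e-4 * (r - r_sb)

for r_trial in (1800.0, 1950.0):           # trial CB radii
    fs_end = entropy_flux_at_cb(r_trial, r_sb, fs_sb, phi_sb, frad_sb_bcs, L, T_of_r)
    print(f"trial r_CB = {r_trial:7.1f} Mm ->  F_S(CB) = {fs_end:+.3e}")
# A root finder (bisection or Newton) over r_CB drives F_S(CB) to zero.
```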
That this agreement between the \(f_{\rm V}\) profiles is very nearly observed in the simulation results at these times is shown in Fig. 30 (bottom panel). We note that in Fig. 30 the \(f_{\rm V}\) values used come from radial profiles in which we have performed no time averages or moving spatial averages that could broaden the compositional jumps so that they become more similar. The closely matched width and shape of the \(f_{\rm V}\) jumps in these three runs with luminosity boosts ranging over a factor of 10 are remarkable. The rise in \(f_{\rm V}\) from 0.1 to 0.9 in these radial profiles takes 6 to 7 grid cell widths. Our PPB moment-conserving advection scheme in PPMstar is able to describe and maintain sharper rises than this, but it is nevertheless possible that this 6-cell width represents a minimum that is produced by the combination of the PPB scheme and our method for producing a radial profile from data on our Cartesian grid. That such a doubt exists underscores the fact that the composition jump right inside the CB in our simulations and in our model is a thin feature, and therefore if the model does misrepresent its thickness, it cannot do so by very much. We also note that although one might imagine that this entire procedure could be simplified by approximating the kinetic energy dissipation rate with zero everywhere, we find that doing this makes it impossible to find a solution satisfying all these simultaneous constraints.

### Comparison of the simulation results with the equilibrium model state that they imply

In this section, we compare the results of our long-duration simulation, M252, with the equilibrium model state that they imply at two times, dumps 2000 and 6000. The first time comes relatively early in the simulation, and the second very late. The simulation results and their implied equilibrium models are shown in Fig. 31. At each of these two times, we plot the radiative heat conduction flux, normalized by the total luminosity (including the enhancement by a factor 10,000 in this simulation). We also show the radial profile of \(f_{\rm V}\) and of the entropy \(S\). The simulation results are indicated with solid lines, while the projected equilibrium models are shown with dashed lines.

Figure 30: Entropy flux (top) implied by the predicted hydrostatic equilibrium stratification using the measured dissipation inside the SB and the interpolated dissipation above the SB at three times corresponding to dumps 400, 1700, 6000 when \(f_{\rm V}\) gradient peaks (bottom) are located at about the same location for M252 (1st, 10000x), M250 (2nd, 3162x), M251 (3rd, 1000x). These entropy flux curves are all the same to within 1.0% tolerance when they are all scaled by their luminosity enhancement factors of 10000, 3162, and 1000.

Figure 29: Actual temperature gradient (Eq. 10) implied by the predicted hydrostatic equilibrium stratification using the measured dissipation inside the SB at dump 6000 and the interpolated dissipation above. The \(f_{\rm V}\) transition starts from 185 Mm (1st), 275 Mm (2nd, used in the procedure of this work) and 385 Mm (3rd), where the middle one is the location of the smallest derivative of \(F_{\rm rad}\) in the penetration zone.

It is notable how closely the projected equilibria at these well separated times agree.
The positions of their convective boundaries agree quite closely, as do their compositional jumps. The entropy curves differ a bit, because the central compositions at these two times differ significantly. In the equilibrium projected from the later time, the compositional jump is broadened from its width in the simulation, and a very significant rise in the entropy inside the foot of this jump causes the entropy signature of the composition jump to become nearly lost in a large, smooth increase. The change in the entropy structure from the simulation result to the implied equilibrium is significant, but the associated change in the radiation diffusion heat flux appears to be a rather minor adjustment. When we consider the earlier of these two times shown in Fig. 31, changes in the entropy structure are much less severe in the implied equilibrium beyond the convective boundary. Once again, we see that the implied equilibrium model inserts quite a substantial entropy rise inside the foot of the compositional jump. As we have pointed out earlier, this feature causes the convection to diminish before the compositional change is encountered, so that the mass entrainment rate is reduced to nearly zero.

## 6 Conclusions and Discussion

### Discussion

One might ask if the above 1-D model of the convection zone does not simply exchange the mixing length theory's (MLT's) arbitrary parameters and assumptions with other, equally arbitrary parameters and assumptions. We have been guided by our simulations of core convection for a single stellar model near the beginning of its main-sequence life. Nevertheless, this single stellar model poses significant challenges for a 1-D model description. The convection flow is non-local, in that it is dominated by a huge dipole circulation pattern. Also, our 3-D simulations show that a region of penetrative convection develops above the SB in which the radiative energy flux exceeds the star's total luminosity. We believe that an advantage of our 1-D model, as presented in the previous section, is that the assumptions we make do not require us to calibrate the value of an assumed constant of order unity against observations of the stellar surface with which this constant might be only tenuously related. We do require the determination of the kinetic energy dissipation rate, \(\Phi(r)\), in the convection zone, but this does not require calibration against any observations. Instead, we determine this function from theory alone, using the assumption of dynamical equilibrium, the kinetic energy equation reduced to 1D, and a 3-D stellar hydrodynamics code (such as, for example, any in the recently published code comparison study, Andrassy et al., 2022). We also assume various smoothness constraints on unknown functions inside the region of convective penetration, but we believe that these assumptions are quite natural. We do assume what we believe is a reasonable location and shape for the jump in composition that must occur very near to the CB. We find very little dependence of our resulting model on where we place \(r_{\rm foot}\), the radius at which our assumed sine-wave \(f_{\rm V}\) radial profile begins. In Fig. 29, we show the projected equilibrium state using data from run M252's dump 6000 for 3 different choices of \(r_{\rm foot}\), with the middle one being the one we recommend for the model. Perhaps the placement of \(r_{\rm foot}\) would matter more at a later stage of evolution, when the jump in \(f_{\rm V}\) causes greater change in the composition of the gas.
We have also shown in Fig. 30 that the breadth and shape of this rapid increase of \(f_{\rm V}\) changes little as the luminosity enhancement factor is increased by as much as a factor of 10.

Figure 31: The thermal equilibria predicted by the procedure and the stratifications of M252 (10000x \(L_{*}\) & \(K_{*}\)) at dump 2000 (top) and 6000 (bottom).

We believe that the outward motion of the CB can slow very greatly, but we can see no way that entrainment can ever fully stop, except through changes brought about on the nuclear reaction time scale. From this point of view our equilibrium model is an idealization. The jump in \(f_{\rm V}\) is responsible for the wiggles in the temperature gradients shown in Fig. 16, yet the radiative heat conduction fluxes shown in Fig. 24 are very smooth as they pass through the region of this jump. This merely shows the power of thermal equilibration. The thermal adjustment cannot remove the jump in \(f_{\rm V}\) or the feature in the opacity corresponding to it, but it can nevertheless produce a very smooth thermal conduction flux in this region of radii. Our model, in its present form, addresses core convection. We have not attempted as yet to generalize it to a convective shell. Nor have we attempted to make it time dependent, so that it might describe the creation of a convection zone other than by a sequence of near-equilibrium states. We note that we have relied heavily on scaling relationships in order to carry out this study with our explicit stellar hydrodynamics code PPMstar. We have argued that the scaling relationships we rely on do hold, and we have produced evidence that they do in the specific instance of this stellar model over a reasonable range of luminosity enhancement factors. It is always possible that the scaling power laws might change in some unexplored regime, or that phenomena that produce the flows we compute might somehow scale with different powers of the luminosity. With the computational tools that we have available at present, widening our exploration to gain more confidence in the scaling relationships is not practical, although we do hope to improve our tools in the future.

### Conclusion

In this work we have performed a series of numerical simulations of 25 M\({}_{\odot}\) main sequence stars to investigate, on both convective and thermal timescales, the effect of radiation (\(P_{\rm rad}\) and radiative diffusion) on entrainment, IGWs, and convection. Below is a summary of our main findings.

#### Entrainment Rate at the CB

By including radiative diffusion enhanced by the same factor as luminosity, we see a \(\sim 30\%\) decrease in the entrainment rate at every luminosity enhancement (§3.3). Consequently, the entrainment rate extrapolated to the nominal heating is reduced by about \(30\%\) by radiative diffusion, from \(1.31\times 10^{-4}\) M\({}_{\odot}\)/yr to \(9.11\times 10^{-5}\) M\({}_{\odot}\)/yr. Note that the entrainment is measured in a time window much shorter than the thermal timescale and before the stratification changes dramatically from the initial state.

#### Damping of the IGWs

IGWs of small wavelengths are damped by radiative diffusion. Therefore, the vorticity in the radiative envelope converges upon mesh refinement, while it does not do so in the absence of radiative diffusion (§3.4). This behavior may have an impact on IGW mixing in the stable layer.
#### Thermal Equilibrium

Even in the cases where we scaled the thermal conductivity linearly with luminosity, the entrainment rate we measured was still too large to be consistent with observations. We conclude that our simulations only reach dynamical equilibrium, but the thermal stratification is still adjusting on a thermal timescale. Close to thermal equilibrium, the entrainment rate should drop to a very much lower level. The long time simulation that we carried out develops well defined penetrative convection (§4). We identify a mechanism that slows convective boundary mixing and the outward movement of the CB: namely, the composition increase begins where \(N^{2}\) is already significantly positive, see Fig. 16. When convection is gradually suppressed by the stable stratification in the penetration region, and the motion is more and more of IGW nature, the time it takes to fully mix increases. This behavior can explain why the entrainment slows down greatly but cannot stop. Thus it is possible that there is always a slow secular change in the composition profile and the stellar stratification.

#### Behavior of Terms in the Kinetic Energy Equation

The \(u\cdot\nabla p_{1}\) term in the kinetic energy equation has a major contribution from the tangential component of the pressure gradient (§4.3). This contribution is significant near the CB, where the flows turn horizontal. This term is determined by the global morphology of the flow and cannot be well approximated solely by the radial component. This discourages attempts at 1D approximations for the kinetic energy equation.

#### Turbulent Kinetic Energy Dissipation

In the entropy equation, the turbulent dissipation is about 10 times smaller than the advective entropy flux term and the \((F_{\rm rad}-F_{\rm nuc})\) term. Its magnitude does not depend on the physical or numerical viscosity, and we can measure it and get a converged result with modest grid resolutions (§4.3). Nevertheless, an accurate knowledge of the turbulent dissipation rate is required to know the size of the penetrative convection region. Overestimating the dissipation will end up producing convection zones that are unrealistically small. Fortunately, the turbulent dissipation does not vary much inside the SB over a thermal timescale and converges on a grid of \(768^{3}\). Hence, only a modest grid (for PPMstar \(768^{3}\) or more) and only a few hundred hours of simulated time are needed for the flow to sufficiently adjust inside the SB to give a good measurement of the turbulent dissipation rate that is implied by assuming that the kinetic energy equation is in balance there.

#### Mixing in the Presence of Positive \(N^{2}\)

From our long time simulations, we observe heat conduction fluxes exceeding the total luminosity and compensating negative convective heat fluxes beyond the SB. These features are recognized as penetrative convection. The compositional jump occurs at all times just inside the CB, and at late times there is very little turbulent flow and mixing there. At late times, efficient convective mixing is spatially separated from the compositional change, and this separation impedes the entrainment of buoyant fluid from above the CB. The spatial separation is caused by local heating of the gas brought about by the declining outward heat flux associated with radiative diffusion in the penetration region, where the radiative flux is approaching the total luminosity from above.
This property of the stratification does not present itself in the initial setups we have used, which suggests that our 1D initial state derived from the MESA code model does not characterize convective boundary mixing in a way that is consistent with 3D simulation. We believe this is the reason why very large entrainment rates are always observed early on in our 3D simulations. #### A Method to Determine the Convective Penetration Depth We can find a good approximation to the penetration depth as in §5.3 by: 1. Integrate the entropy equation forward in radius from the center of the star to the SB, staying on a single adiabat and enforcing hydrostatic equilibrium at each radius. 2. Guess the penetration depth (the radius of the CB) and introduce in the penetration region smooth continuations of the kinetic energy dissipation rate, the radiative heat conduction flux, and the volumetric mixing fraction \(f_{\rm V}\). 3. Integrate the entropy equation forward to the provisional CB radius, and then iterate on the penetration depth estimate (steps 2 and 3) until at that radius we find that the radial entropy flux vanishes. To accomplish the first step and integrate up to the SB, we assume time independence and a well-mixed, adiabatic stratification. The convective entropy flux is then determined, if we know the relatively small rate of kinetic energy dissipation. We may find this rate from a short 3-D simulation of the convection on a relatively coarse grid, or we may interpolate between such simulation results carried out under similar but not identical conditions. We can find the dissipation rate inside the SB even from a non-equilibrium stratification (see Figure 18: the dissipation rate at an early time is very similar to that of a later time much closer to equilibrium). Alternatively, we may estimate the kinetic energy in the convection zone from classical mixing-length theory, or some equivalent approximating model, and then apply a turbulence model to arrive at the rate of kinetic energy dissipation. This approach, however, introduces two free parameters, the mixing length and the scale of the energy-containing turbulent eddies. After guessing the penetration depth, we integrate forward by assuming functional forms for three quantities: the kinetic energy dissipation rate, \(\Phi(r)\), the radiative heat conduction flux, \(F_{\rm rad}(r)\), and the volumetric mixing fraction, \(f_{\rm V}(r)\), of convection zone gas and gas from the stably stratified envelope above the convection zone. Then we determine the penetration depth, and hence the radius of the CB, by demanding that the convective entropy flux must vanish there. 1. We approximate the radiative heat conduction flux by the unique fifth-order polynomial that is continuously twice differentiable at the SB and that equals the total luminosity at the boundary of the penetration region (CB) with vanishing first and second radial derivatives. 2. We prescribe the functional form of turbulent dissipation in the penetration region as the unique quartic polynomial that is continuously differentiable with vanishing radial derivative at the SB and that vanishes at the CB with vanishing first two radial derivatives. 3. We prescribe the functional form of the volumetric mixing fraction \(f_{\rm V}\) in the penetration region as equal to its value at the SB up to a radius \(r_{\rm foot}\) and then increasing to unity at the CB with the shape of the sine wave between its minimum and maximum at argument values \(-\pi/2\) and \(\pi/2\). 
We choose \(r_{\rm foot}\) as the radius at which the local heating from the declining radiative heat conduction flux has its maximum value. 4. Finally, we require the entropy flux to vanish at the CB. #### Dependence of the Penetration Depth on Luminosity Enhancement Factor In the limit of large convection efficiency, the thermal relaxation is accelerated by enhancement of the luminosity and the radiative diffusion by a common factor, so long as this enhancement keeps the Mach number of the convection flow small, so that the character of the flow is not much changed. In the equilibrium that is reached, the penetration depth depends upon the radiative heat conduction flux, which establishes a positive entropy gradient in the penetration region, and the turbulent dissipation of kinetic energy, which together determine in a time-independent equilibrium the radial entropy flux, which must vanish at the CB. All these terms that must balance in equilibrium in this region scale linearly with the luminosity under these conditions (see similar arguments in §4.3.4, Fig. 30). Hence the penetration depth for an enhanced luminosity should equal the penetration depth for the actual star. We have tested this scaling law for enhancement factors of 1000 and greater, but, at present, technical issues prevent our demonstrating this scaling law for smaller enhancement factors. Using this procedure, we see that the penetration depth depends upon our estimate of the turbulent convective energy dissipation rate and its functional form in the penetration region. This dissipation rate cannot be determined directly, without the use of a model, in a 1-D stellar evolution computation. We find that the penetration depth estimate increases/decreases by 30 Mm if we decrease/increase the turbulent dissipation everywhere by 5% for our particular stellar model. We note that as the convection flow nears equilibrium, the convective mixing, as indicated by the size of \(\Phi(r)\), declines near the CB to values very close to zero, which reduces the convective boundary mixing dramatically and allows the outward motion of the CB to very nearly cease. The thermal adjustment in the thin region just before the CB where the composition changes results in a smooth, nearly featureless form there for \(F_{\rm rad}(r)\), although features do appear there in both the temperature and the opacity. #### Future Work In future work, we will expand the cases we have considered, and this will no doubt lead to refinements of the recommended procedure. However, we argue that finding the CB location is strictly impossible without knowing the kinetic energy dissipation as a function of radius, and knowing that is impossible in a 1D simulation without the use of a model. The research reported here was supported by NSF through CDS&E grant 1814181, travel grant 2032010, and through grants of access to the Frontera computing system at TACC in Austin, Texas, where the bulk of the simulations were carried out and image rendering performed. Partial support was also provided by NSF through the JINA-CEE physics frontier center, award PHY-1430152. Herwig acknowledges funding through an NSERC Discovery Grant and a grant of access to the Compute Canada Niagara supercomputer operated by SciNet at the University of Toronto. 
Herwig also acknowledges support for data analysis on the Astrohub online virtual research environment ([https://astrohub.uvic.ca](https://astrohub.uvic.ca)) developed and operated by the Computational Stellar Astrophysics group ([https://csa.phys.uvic.ca](https://csa.phys.uvic.ca)) at the University of Victoria and hosted on the Compute Canada Arbutus Cloud at the University of Victoria. Woodward acknowledges support for local data storage and analysis from the Minnesota Supercomputing Institute.
2305.18305
High Accuracy and Low Regret for User-Cold-Start Using Latent Bandits
We develop a novel latent-bandit algorithm for tackling the cold-start problem for new users joining a recommender system. This new algorithm significantly outperforms the state of the art, simultaneously achieving both higher accuracy and lower regret.
David Young, Douglas Leith
2023-05-12T11:05:59Z
http://arxiv.org/abs/2305.18305v1
# High Accuracy and Low Regret for User-Cold-Start Using Latent Bandits ###### Abstract We develop a novel latent-bandit algorithm for tackling the cold-start problem for new users joining a recommender system. This new algorithm significantly outperforms the state of the art, simultaneously achieving both higher accuracy and lower regret. ## 1 Introduction In a recommender system, when a new user joins the system, it initially has no knowledge of the preferences of the user and so would like to quickly learn these. The recommender system therefore initially starts in an "exploration" phase where the first few items that it asks the new user to rate are chosen with the aim of discovering the user's preferences. We focus on the simplest setup where a user explicitly rates items presented to them, e.g. on a 1-5 scale, and the aim of the recommender system is to predict other items that the user may like. One common approach to this new user cold-start task is to take ratings already collected from a population of users, use these to cluster users into groups and then train a decision-tree to learn a mapping from item ratings to the user group, see for example Figure 1(a). When a new user joins the system this decision-tree is used to decide which items the user is initially asked to rate and in this way the group to which the user belongs is initially estimated. Once the group is estimated, the system recommends items liked by members of that group e.g. using matrix factorisation or another collaborative filtering approach. However, typically users clustered in the same group do not give identical ratings to an item. Rather there is a spread of ratings, and this intra-cluster variability between users can be thought of as adding noise to the ratings. Decision-trees are vulnerable to such noise in a new user's ratings, as an unusual rating for a particular group can send the tree down a wrong path it will never recover from. For example, Figure 1(b) shows the measured decision-tree accuracy for Netflix data clustered into 16 groups (see later for details). It can be seen that the accuracy is as low as 50-60% for some groups. In this paper we improve on this behaviour by developing a new online learning algorithm that maintains and updates a probability distribution for the user's group, and then selects the items that a new user is asked to rate so that this distribution converges to the correct group as quickly as possible. In this way for users with more noise in their ratings it will simply take the system longer to learn the correct group, instead of possibly concluding the wrong group. To develop our new online learning algorithm we view the cold-start task as a latent bandit, i.e. a multi-arm bandit where the distribution of the arms depends on the value of some unknown latent parameter. In our setting the arms are the available items, the reward of an arm is the user rating for the corresponding item and the latent variable is the true group of the user. It is important to stress that ignoring the latent groups and applying standard bandit algorithms to this task leads to poor performance since (i) there are many arms and so learning is slow and (ii) repeated pulls of the same arm tend to be highly correlated. In contrast, the latent group can take a small number of values, e.g. there might only be 16 or 32 groups, and for each value there is a known distribution of arm rewards. 
The latter means that we do not need to pull every arm to gain information about its reward and so the latent group approach allows fast learning even when the number of arms is large. Further, in general, some arms tend to be more informative than others for learning the user group. Hence, pulling only the most informative arms allows us to quickly learn the user group. Importantly, provided we have a sufficient number of informative arms then fast learning can still be achieved even when each arm is only pulled once i.e. a new user is only asked to rate any given item once. We want to select the next arm to pull based on our current estimate of the probability distribution for the user's group and with the aim of causing our estimated distribution to quickly concentrate on the true user group. Note that existing methods for latent bandits require repeated arm pulls and do not take full advantage of the informative arms. ## 2 Related Work We follow on from the work of [1] in using cluster based bandits to tackle the cold start problem. Our latent bandit approach bears most similarity to the work of [2], who develop algorithms based on UCB and Thompson sampling for the same setup. Our algorithm, however, leverages the latent-bandit in a way their methods do not. In [3], the authors first apply a UCB-style algorithm to the same problem, before relaxing assumptions and proposing algorithms for problems where the reward distributions are unknown. A survey of active learning cold-start methods can be seen in [4], and both [5] and [6] use decision tree based methods in a similar cluster based setup to ours. ## 3 Latent Bandit Algorithm For User Cold-Start We have a set \(\mathcal{G}\) of user groups and a set \(\mathcal{V}\) of items. Given a new user our task is to quickly learn which group \(g\in\mathcal{G}\) they belong to by asking the user to rate items in \(\mathcal{V}\), using the fact that the distribution of item ratings varies depending on the user's group. ### User Ratings We assume the rating of item \(v\) by users in group \(g\) is normally distributed with mean \(\mu(g,v)\) and variance \(\sigma^{2}(g,v)\). Figure 1: (a) Illustrating a movie recommender decision-tree (b) Decision-tree accuracy for Netflix data (16 groups). If a user \(u\) in group \(g\) is asked to rate the same item \(v\) multiple times the user gives the same rating, i.e. their rating is one value drawn from \(N(\mu(g,v),\sigma^{2}(g,v))\). Let random variable \(R(v)\) be the rating of item \(v\) and random variable \(G(u)\) be the group of the user making the rating. Then \(p(R(v)=r|G(u)=g)=(1/\sqrt{2\pi}\sigma(g,v))e^{-(r-\mu(g,v))^{2}/2\sigma^{2}(g,v)}\) and for observed sequence \(D_{n}\) of ratings \(r_{1},r_{2},\ldots,r_{n}\) for items \(v_{1},v_{2},\ldots,v_{n}\) (\(D_{n}\) is a sequence of pairs \((v_{i},r_{i}),i=1,\ldots,n\)) it follows that \[p(D_{n}|G(u)=g)=\gamma_{n}(g)e^{-L_{n}(g)}\] where \(L_{n}(g):=\sum_{i=1}^{n}(r_{i}-\mu(g,v_{i}))^{2}/2\sigma^{2}(g,v_{i}),\gamma_{n}(g):=1/(2\pi)^{n/2}\times 1/\Pi_{i=1}^{n}\sigma(g,v_{i})\). By Bayes rule \[p(G(u)=g|D_{n})=\frac{p(D_{n}|G(u)=g)p(G(u)=g)}{p(D_{n})}\] with \(p(D_{n})=\sum_{h\in\mathcal{G}}\,p(D_{n}|G=h)p(G(u)=h)\). 
Assuming a uniform prior \(p(G(u)=g)=1/|\mathcal{G}|\), the probability that a new user belongs to group \(g\) given item ratings \(D_{n}\) is \[p(G(u)=g|D_{n})=\frac{p(D_{n}|G(u)=g)}{\sum_{h\in\mathcal{G}}p(D_{n}|G(u)=h)}= \frac{\gamma_{n}(g)e^{-L_{n}(g)}}{\sum_{h\in\mathcal{G}}\gamma_{n}(h)e^{-L_{n}(h)}} \tag{1}\] ### Exploration: Knowledge Gained From a New Rating Given a new user's ratings \(r_{1},r_{2},\ldots,r_{n}\) for a sequence of items \(v_{1},v_{2},\ldots,v_{n}\), we need to select the next item \(v_{n+1}\) to ask the user to rate. We have a current estimate \(P(G(u)=g|D_{n}),g\in\mathcal{G}\) of the probability distribution of the user group. Intuitively, a reasonable choice is the item that is most likely to cause this distribution to maximally concentrate on the true user group \(g^{*}\). This means selecting the item \(v_{n+1}\) which maximises \[\mathbb{E}[P(G=g^{*}|D_{n},v_{n+1},R(v_{n+1});G=g^{*})]\] Since in reality we don't know \(g^{*}\) we want to select the item \(v_{n+1}\) that maximises the expected value with respect to \(g^{*}\), i.e. \[\mathbb{E}_{g^{*}}[\mathbb{E}[P(G=g^{*}|D_{n},v_{n+1},R(v_{n+1});G=g^{*})]]\] \[=\sum_{g\in\mathcal{G}}P(G=g|D_{n})\cdot\mathbb{E}[P(G=g|D_{n},v_{n+1},R(v_{n+1});G=g)]\] This expected value cannot be found analytically, but if we take a linear approximation of \(P(G=g^{*}|D_{n},v_{n+1},R(v_{n+1});G=g^{*})\) and take our expectation over that we obtain \[\mathbb{E}[P(G=g^{*}|D_{n},v_{n+1},R(v_{n+1});G=g^{*})]\approx P(G=g^{*}|D_{n},v_{n+1},\mu(g^{*},v_{n+1})) \tag{2}\] Instead of the approximation (2) we could use Monte Carlo simulation to evaluate this expectation, but this is considerably slower to calculate, and in our tests comparing (2) with the values calculated by Monte Carlo we find that (2) has surprisingly small approximation error, and negligible effect on performance. ### Exploitation: Future Reward Selecting the next item \(v_{n+1}\) to maximise (2) prioritises learning about the group that a new user belongs to i.e. exploration. However, items which accelerate learning may attract a low user rating, and so increase regret. We need, therefore, to balance exploration against exploitation i.e. selecting items predicted to have a high user rating. However, when considering exploitation it is necessary to take account of the uncertainty in our current estimate of the user's group. This is because items rated highly by users in one group may not be rated highly by users of another group. Hence, if we make a mistake in our estimate of the new user's group we may end up suggesting items that return a low rating by the user and so increase regret. We proceed by defining the discounted future reward. For a user in group \(g\) who has already rated items \(V_{n}=\{v:(v,\cdot)\in D_{n}\}\) the expected future reward is \(\sum_{v\in V\setminus V_{n}}\mu(g,v)\), assuming they stay in the system and eventually rate all items. However, it is probably more reasonable to assume there is a departure process for users, who tend to only stay in the system for some lifetime. For simplicity, we will assume the case where the departure process is modelled as an independent event after every recommendation, with constant probability \(\beta\) of staying in the system. We could replace this with any step-dependent model for the probability of the user still being in the system. We then use \(\beta\) to discount future rewards. Let \(V_{n,g}^{*}\) be the sequence of items \(V\setminus V_{n}\) sorted in decreasing order of mean rating \(\mu(g,v)\). 
Assuming the recommender presents items to the user in this order, then the expected discounted future reward is \[J_{future}(g):=\sum_{i=1}^{|V_{n,g}^{*}|}\beta^{i}\mu(g,v_{i,g})\] where \(v_{i,g}\) is the \(i\)'th element in sequence \(V_{n,g}^{*}\) and \(0\leq\beta\leq 1\) is our discount factor. And so the expected discounted future loss of acting as if the user is in group \(h\) when its actually in group \(g\): \[J_{futureloss}(g,h):=\sum_{i=1}^{|V_{n,g}^{*}|}\beta^{i}|\mu(g,v_{i,g})-\mu(g, v_{i,h})|\] where \(v_{i,h}\) is the \(i\)'th element in sequence \(V_{n,h}^{*}\). Since we have estimates \(P(G(u)=g|D_{n}),g\in\mathcal{G}\) of the probability distribution of the user group, we can calculate the expected discounted future regret of acting as if the user is in group \(h\), at any step \(n\) is : \[J_{futureregret}(h):=\sum_{g\in G}P(G=g|D_{n})\sum_{i=1}^{|V_{n,g}^{*}|}\beta^{ i}|\mu(g,v_{i,g})-\mu(g,v_{i,h})|\] This value is exactly the expected opportunity cost of exploiting instead of learning more information about which group the user is in. ### Balancing Exploration & Exploitation: New Algorithm We balance exploration and exploitation by selecting the next item \(v_{n+1}\) that maximises \[v_{n+1}\in\arg\max_{v\in V\setminus V_{n}}\sum_{g\in\mathcal{G}}P(G=g|D_{n}) \cdot P(G=g|D_{n},v,\mu(g,v))\cdot[J_{futureregret}(g)^{2}+\mu(g,v)]\] When the opportunity cost \(J_{futureregret}(g)\) of incorrectly estimating the user group is large, the short term reward \(\mu(g,v)\) is effectively ignored and the next item is selected primarily to minimise \(J_{futureregret}(g)\) i.e. to maximally increase the accuracy of our estimate of the user's group. Conversely, when \(J_{futureregret}(g)\) is small, the next item is selected to maximise the short term reward \(\mu(g,v)\) i.e. picking an item likely to be rated highly by the new user given our current best estimate of the user's group. Figure 2(a) shows measurements illustrating the transition from exploration to exploitation for the Netflix dataset with 16 groups and a new user from group ten. It can be seen that the future regret for exploiting as if we are in group ten is initially high, as the probability for being in any particular group is low. The probability and future regret then rise together, indicating the user has given a rating that also increases the probability of being in groups for which the top rated items are very different. As the probability of being in group ten continues to rise sharply, the future regret of exploiting as if we are group ten drops sharply, as expected. Figure 2(b) shows the relation between the future regret value and the actual expected regret incurred by the system. It can be seen that the regret incurred rises steeply initially while the future regret is high, as the system focuses on learning more about the users group. Then as the future regret of exploiting as if we are in group ten becomes very small, the system starts to do just that, and the expected regret no longer grows. ## 4 Performance Evaluation _Datasets._ We evaluate the performance of our latent bandit algorithm on the standard Netflix dataset, the Jester dataset and the Goodreads10K dataset. _Clustering Users._ We use training data to cluster users into groups and estimate the mean \(\mu(g,v)\) and variance \(\sigma(g,v)^{2}\) of the ratings by each group \(g\) for item \(v\). 
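Since the per-group means \(\mu(g,v)\) and variances \(\sigma^{2}(g,v)\) estimated from the training data are all that the online algorithm needs at run time, the group-posterior update of Eq. (1) and the selection rule of Section 3.4 are straightforward to implement. The sketch below is illustrative rather than the authors' implementation: the array names are hypothetical, the posterior is computed in the log domain for numerical stability, and the discounted future regret \(J_{futureregret}(g)\) is assumed to have been precomputed and is simply passed in as an array.

```python
# Illustrative sketch (not the authors' code) of the group-posterior update of
# Eq. (1) and the item-selection score of Section 3.4, given per-group statistics
# mu[g, v] and sigma2[g, v] estimated from training data.
import numpy as np

def log_likelihood(ratings, items, mu, sigma2):
    """log p(D_n | G=g) for every group g (Gaussian ratings, ratings observed once)."""
    r = np.asarray(ratings, dtype=float)[:, None]   # shape (n, 1)
    m = mu[:, items].T                               # shape (n, |G|)
    s2 = sigma2[:, items].T                          # shape (n, |G|)
    return np.sum(-0.5 * np.log(2 * np.pi * s2) - (r - m) ** 2 / (2 * s2), axis=0)

def group_posterior(ratings, items, mu, sigma2):
    """p(G=g | D_n) under a uniform prior, Eq. (1), computed in the log domain."""
    ll = log_likelihood(ratings, items, mu, sigma2)
    ll -= ll.max()                                   # avoid underflow
    w = np.exp(ll)
    return w / w.sum()

def select_next_item(ratings, items, mu, sigma2, future_regret):
    """Pick v maximising sum_g P(g|D_n) * P(g|D_n, v, mu(g,v)) * [J_fr(g)^2 + mu(g,v)].

    future_regret[g] stands for J_futureregret(g); recomputing it from the current
    posterior and the discount factor beta is straightforward but omitted here.
    """
    post = group_posterior(ratings, items, mu, sigma2)
    n_groups, n_items = mu.shape
    best_v, best_score = None, -np.inf
    for v in range(n_items):
        if v in items:                               # each item is rated at most once
            continue
        score = 0.0
        for g in range(n_groups):
            # linear approximation of Eq. (2): evaluate the posterior at the mean rating
            post_if_g = group_posterior(list(ratings) + [mu[g, v]],
                                        list(items) + [v], mu, sigma2)[g]
            score += post[g] * post_if_g * (future_regret[g] ** 2 + mu[g, v])
        if score > best_score:
            best_v, best_score = v, score
    return best_v
```

In an online run, the future-regret array would be recomputed from the current posterior and the remaining items after every new rating, exactly as described in Section 3.4.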
_Baseline Algorithms._ We compare the performance of the latent bandit algorithm (LBA) against (i) an optimised CART decision tree and (ii) the cluster-based bandit (CBB) algorithm of [1]. These are strong baselines, with good performance for cold-start active learning. Decision-trees are often considered for use in cold-start while the recently proposed CBB-algorithm offers state of the art performance. Figure 2: Measurements illustrating the transition from exploration to exploitation. Netflix dataset with 16 groups and a new user from group ten. _Modelling New Users_. We generate the item ratings of a new user from group \(g\) by making a single draw from the multivariate Gaussian distribution with mean \(\mu(g,v)\) and variance \(\sigma(g,v)^{2}\) for each item equal to that estimated from the training data. This has the advantage that we can easily generate large numbers of new users in a clean, reproducible manner. _Performance Metrics_. We report the accuracy with which the group of a new user is estimated, i.e. the fraction of times the correct group is estimated, and the expected regret. Statistics are calculated over 1000 new users per group. Figure 3 shows typical measurements of the evolution of accuracy vs #items rated by a new user. It can be seen that the accuracy grows over time and that the performance of the new latent-bandit algorithm dominates that of the decision-tree and cluster-based bandit. Data is shown for a future regret discount factor \(\beta\) of 0 and 1. When \(\beta=1\) the latent-bandit focuses on exploration until there is almost no more possible gain from it, and it can be seen that, as expected, the learning rate is somewhat faster than when \(\beta\) is smaller, but this is balanced against increased regret (which nevertheless remains uniformly lower than that of the decision-tree and cluster-based bandit). The dotted line in the left-hand plot shows the performance of the latent-bandit focused exclusively on exploration, with the expectation calculated using Monte Carlo. It can be seen that with \(\beta=1\) the latent-bandit performance is close to this upper bound, indicating little scope for improvement. Figure 4 breaks this accuracy data down by group; observe that the variability in accuracy amongst the groups is significantly reduced by the latent bandit algorithm. Figure 5 shows summary data for the Netflix, Jester and Goodreads datasets and for 4 to 32 user groups. It can be seen that the latent bandit algorithm uniformly outperforms the state of the art, simultaneously achieving both higher accuracy and lower regret. ## 5 Conclusion We present a novel latent-bandit algorithm for tackling the cold-start problem for new users joining a recommender system. This new algorithm uniformly outperforms the state of the art, simultaneously achieving both higher accuracy and lower regret.
2305.17036
Defect-induced ordering and disordering in metallic glasses
On the basis of shear modulus measurements on a Pt-based glass, we calculated temperature dependence of the defect concentration c using the Interstitialcy theory. This temperature dependence is compared with temperature dependence of the normalized full width at half maximum (FWHM) gamma of the first peak of the structure factor S(q) for the same glass available in the literature. It is found that above the glass transition temperature Tg, gamma linearly increases with c in the same way for both initial and relaxed (preannealed) samples, providing evidence of defect-induced disordering in the supercooled liquid region independent of glass thermal prehistory. For both states of the samples, the derivative d(gamma)/dc is close to unity. Below Tg, the interrelation between gamma and c is entirely different for initial and relaxed samples. In the former case, strong defect-induced ordering upon approaching Tg is observed while relaxed samples do not reveal any clear ordering/disordering. Possible reasons for these observations are discussed. To further investigate the relationship between the normalized FWHM and defect concentration, we performed molecular dynamic simulation of the gamma(c)-dependence in a high-entropy FeNiCrCoCu model glass. It is found that gamma also linearly increases with c while the derivative d(gamma)/dc is again close to unity just as in the case of Pt-based glass.
A. S. Makarov, G. V. Afonin, R. A. Konchakov, J. C. Qiao, A. N. Vasiliev, N. P. Kobelev, V. A. Khonik
2023-05-26T15:46:25Z
http://arxiv.org/abs/2305.17036v1
# Defect-induced ordering and disordering in metallic glasses ###### Abstract On the basis of shear modulus measurements on a Pt-based glass, we calculated temperature dependence of the defect concentration \(c\) using the Interstitialcy theory. This temperature dependence is compared with temperature dependence of the normalized full width at half maximum (FWHM) \(\gamma\) of the first peak of the structure factor \(S(q)\) for the same glass available in the literature. It is found that \(\gamma\) above the glass transition temperature \(T_{g}\) linearly increases with \(c\) in the same way for both initial and relaxed (preamnealed) samples providing the evidence of defect-induced disordering in the supercooled liquid region independent of glass thermal prehistory. For both states of the samples, the derivative \(d\gamma/dc\) is close to unity. Below \(T_{g}\), the interrelation between \(\gamma\) and \(c\) is entirely different for initial and relaxed samples. In the former case, strong defect-induced ordering upon approaching \(T_{g}\) is observed while relaxed samples do not reveal any clear ordering/disordering. Possible reasons for these observations are discussed. To further investigate the relationship between the normalized FWHM and defect concentration, we performed molecular dynamic simulation of \(\gamma(c)\)-dependence in a high-entropy FeNiCrCoCu model glass. It is found that \(\gamma\) also linearly increases with \(c\) while the derivative \(d\gamma/dc\) is again close to unity just as in the case of Pt-based glass. keywords: metallic glasses, structure factor, disordering, ordering, defects, shear modulus, relaxation + Footnote †: journal: ## 1 Introduction Major information on the structure of metallic glasses (MGs) is derived most often from the structure factor \(S(q)\), which is calculated from primary diffraction data [1]. In particular, the non-crystalline structure of MGs can be characterized by the full width at half maximum (FWHM) of the first \(S(q)\)-peak, which constitutes an integral measure of structural disordering. It has been found that the FWHM is quite sensitive to changes of different experimental conditions. Specifically, while the position of the first \(S(q)\)-peak significantly varies with temperature and can be used for the estimates of the volume thermal expansion and density changes upon structural relaxation, the FWHM rapidly increases with temperature above the glass transition temperature \(T_{g}\) (i.e. in the supercooled liquid state) [2; 3; 4; 5; 6; 7]. The FWHM also rises with the melt quenching rate and shows that melt-spun ribbon MGs are more disordered as compared with bulk MGs produced by melt suction [5]. On the other hand, an increase of the melt quenching rate leads to a decrease of the hardness and changes the wear performance [8; 9]. Plastic deformation by cold rolling or high-pressure torsion significantly increase the FWHM indicating intense structural disordering [10; 11; 12; 13; 14]. An important information was derived by the authors [15] who fabricated over five thousand alloys and found a strong correlation between the high glass forming ability and FWHM showing that a large dispersion of structural units comprising the amorphous structure is a universal indicator for the high metallic glass-formating ability [15]. Thus, the FWHM constitutes an important integral parameter showing the degree of MGs' structural disordering and sensitive to different treatments. 
In this work, we are interested in temperature impact on the FWHM at temperatures both below and above \(T_{g}\). In the latter case, as mentioned above, the FWHM rapidly increases with temperature and it is suggested that this effect is due to increasing atomic vibrations [2]. However, a simple increase of atomic vibrations should provide a monotonous FWHM rise independent of any specific temperature range under consideration, which is clearly not the case. Therefore, the nature of rapid broadening of the structural factor in the supercooled liquid state above \(T_{g}\) deserves closer attention. Currently, a widely accepted idea is that the structure of MGs can be represented as a non-crystalline matrix with dominant (usually icosahedral) packing and regions with low point symmetry commonly called defects (e.g. Refs [1; 16]). In this case, an increase of the defect concentration should increase the FWHM due to the rise of the amount of regions with damaged dominant short-range order. To further utilize this idea, one has to accept a specific theory (model) of defects in MGs. In our opinion, the Interstitialcy theory (IT) is suitable for this purpose. By using the IT, we calculated the defect concentration below and above \(T_{g}\) and showed that in the latter case this concentration is directly proportional to the experimental FWHM for a particular Pt-based metallic glass. This fact indicates that it is the defect concentration, which determines a rapid rise of the FWHM at \(T>T_{g}\). Below \(T_{g}\), the situation is more complicated and the FWHM depends on the state (initial/relaxed) of glass. To further study the relationship between the FWHM and defects, we performed molecular dynamic simulations of a model metallic glass produced by quenching the melt at various rates and containing, therefore, different amount of defects in the solid glassy state. We calculated the model structure factor and found that the derivative of the FWHM over the defect concentration is quite close to the one determined from the analysis of the experimental data on Pt-based glass. This provides further argument that the broadening of the structural factor of MGs is directly related to the defect subsystem. ## 2 Experimental and modeling procedures ### Experimental The experimental part of the present investigation was performed on bulk glassy Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{9.5}\)P\({}_{21}\) (at.%). The reason for this choice is determined by detailed _in situ_ measurements of the FWHM for this glass in the course of heating available in the literature [2]. The master alloy of the above composition was produced by direct fusing of the constituent elements (at least 99.9% pure) in evacuated quartz vial using a two-temperature method. The glass was next produced by a melt jet quenching method. The castings were verified to be X-ray amorphous using a Siemens D-500 diffractometer operating in Co\({}_{K_{a}}\) radiation. The density of the glass was found to be 13.3\(\pm\)0.1 g/cm\({}^{3}\). Room-temperature elastic constants, shear modulus \(G_{rt}=32.7\) GPa, Young's modulus \(E_{rt}=92.9\) GPa, bulk modulus \(B_{rt}=198.7\) GPa and Poisson ratio \(\nu=0.42\), were determined by the resonant ultrasound spectroscopy using the setup similar to that described in Ref.[17]. The defect concentration was determined on the basis of high-frequency shear modulus measurements. For this, the electromagnetic acoustic transformation (EMAT) method was used [18]. 
In this method, the Lorentz interaction between the current induced in sample's surface layer by an exciting coil and external magnetic field is used to produce resonant vibrations. For this purpose, frequency scanning was automatically performed every 10-15 s upon heating (3 K/min) and the transverse resonant frequency \(f\) of a 5\(\times\)5\(\times\)2 mm\({}^{3}\) specimen was determined as a maximal signal response received by a pick-up coil upon scanning in a vacuum \(\approx 0.01\) Pa. The shear modulus was then calculated as \(G(T)=G_{rt}f^{2}(T)/f_{rt}^{2}\), where \(f_{rt}\) is the resonant frequency (450-550 kHz) at room temperature. The error in the measurements of \(G(T)\)-changes was estimated to be \(\approx 5\) ppm near room temperature and about 100 ppm near the glass transition temperature \(T_{g}\). Differential scanning calorimetry (DSC) was carried out by a Hitachi DSC 7020 instrument operating in high purity (99.999 %) nitrogen atmosphere at a rate of 3 K/min using 120-130 mg samples. Each DSC run was carried out with a fully crystallized sample of the same composition and about the same mass in the reference cell. The defect concentration in Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{9.5}\)P\({}_{21}\) glass was calculated within the framework of the Interstitialcy theory (see below). For this, one needs to know the shear modulus \(\mu\) of the maternal crystalline state. This does not pose a problem for an analysis of experimental data but represents the difficulty in computer simulation. Molecular dynamic simulation of the above glass is not feasible due to the absence of the information on the interatomic interaction in this system. Thus, the selection of a model system constitutes a problem. In particular, the choice of a very popular Zr-Cu system for modeling is not acceptable because it has a number of different crystalline phases and it is unclear which of them represents the maternal crystalline state. In the view of the above, we performed computer modeling on high-entropy Fe\({}_{20}\)Ni\({}_{20}\)Cr\({}_{20}\)Co\({}_{20}\)Cu\({}_{20}\) system (at.%) because it crystallizes into a single FCC phase with easily determinable shear modulus. It is this phase, which was accepted as the maternal crystalline state. ### Modeling details Molecular dynamic simulation was performed using the LAMMPS package [19]. The model size was accepted to be 32 000 atoms (20\(\times\)20\(\times\)20 translations of the FCC lattice in the crystalline state). Many-body embedded atom potential was taken from the work [20]. This potential was earlier used for an analysis of points defects in the single crystalline state and defect identification in the glassy state of this alloy [21; 22]. The initial single-crystalline system was obtained by random distribution of atoms in the FCC lattice nodes with the preservation of the nominal chemical composition. The system was next melted by heating up to 3000 K and quenched to zero temperature at different rates ranging from \(6\times 10^{12}\) K/s to \(1\times 10^{14}\) K/s as indicated below. This allowed to obtain glassy states with different degree of relaxation, which can be rationalized in terms of different concentration of frozen-in defects. The structure factor was calculated using the OVITO software [23]. ## 3 Results and discussion ### Experimental data and their analysis Figure 1 presents X-ray diffraction of the glass under investigation, which is a typical non-crystalline pattern without any signs of crystallinity. 
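Since the quantity analyzed throughout the rest of the paper is the width of this first diffraction peak, it may help to make explicit how it is obtained from a diffraction pattern such as that in Fig. 1. The sketch below is purely illustrative and is not the authors' analysis code: given a tabulated structure factor on a grid of scattering vectors, it locates the first-peak position \(q_{0}\), measures the full width at half maximum \(\Gamma\) by linear interpolation of the half-maximum crossings, and forms the normalized FWHM \(\gamma=\Gamma/q_{0}\) used below.

```python
# Illustrative sketch only: extract q0, Gamma and gamma = Gamma/q0 from a
# tabulated structure factor S(q); the arrays q and S are hypothetical inputs.
import numpy as np

def first_peak_fwhm(q, S):
    """Return (q0, Gamma, gamma) for the dominant (first) peak of S(q)."""
    q, S = np.asarray(q, dtype=float), np.asarray(S, dtype=float)
    i0 = np.argmax(S)                       # index of the first (dominant) peak
    q0, half = q[i0], S[i0] / 2.0

    # walk outward from the peak to the half-maximum crossing on each side,
    # interpolating linearly between the two bracketing samples
    def crossing(idx_range):
        for i in idx_range:
            if S[i] <= half:
                j = i + 1 if i < i0 else i - 1      # neighbour closer to the peak
                t = (half - S[i]) / (S[j] - S[i])
                return q[i] + t * (q[j] - q[i])
        raise ValueError("half-maximum not reached on this side of the peak")

    q_left = crossing(range(i0, -1, -1))
    q_right = crossing(range(i0, len(q)))
    Gamma = q_right - q_left
    return q0, Gamma, Gamma / q0
```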
Figure 2 gives DSC traces of samples in the initial state (a) and after relaxation (b) performed by heating up to a temperature \(T=286\)\({}^{\circ}\)C, which is deep in the supercooled liquid state but below the crystallization onset temperature. It is seen that initial samples display exothermal reaction below \(T_{g}\) (shown by the arrows) while relaxed specimens exhibit endothermal reaction at \(T<T_{g}\). Thermal behavior of initial and relaxed samples above \(T_{g}\) is generally similar. Panel (c) in Fig.2 gives DSC traces of initial samples, which for the purpose of the present investigation were taken at two different heating rates, 3 K/min and 20 K/min. It is seen that the above increase of the heating rate increases \(T_{g}\) by about 12\({}^{\circ}\)C. The goal of the present paper is to relate the FWHM with the defect concentration as a function of temperature. This can be done using the aforementioned Interstitialcy theory (IT), which provides comprehensive understanding of different relaxation phenomena in MGs (see e.g. Refs [24; 25; 26] and papers cited therein). In particular, the IT provides almost exact description of the kinetics of heat effects on the basis of shear modulus relaxation data and gives adequate description of all thermodynamic excess potentials of MGs [24; 26]. Within the IT framework, temperature dependence of the concentration \(c\) of interstitial-type defects is related to temperature dependences of the unrelaxed (high-frequency) shear moduli of glass and maternal crystal, \(G\) and \(\mu\), respectively, as \[c(T)=\frac{1}{\alpha\beta}ln\left[\frac{\mu(T)}{G(T)}\right], \tag{1}\] where \(\beta\) is a dimensionless shear susceptibility (of about 20 for different MGs), which is linked to defect-induced shear softening and anharmonicity of the interatomic potential [27], while dimensionless \(\alpha\approx 1\) is related to the defect strain field [24]. Thus, the defect concentration can be calculated using shear modulus measurements on glassy and crystalline samples. The results of such investigation are presented in Fig.3, which gives three shear modulus measurement runs taken subsequently on the same sample. Run 1 shows a monotonous decrease of \(G\) up to the calorimetric glass transition temperature \(T_{g}\) (shown by the arrow). At \(T>T_{g}\), a pronounced shear softening (decrease of \(G\)) is observed. The heating was stopped at \(T=286\)\({}^{\circ}\)C (deep in the supercooled liquid state, see Fig.2) and the sample was cooled back to room temperature at the same rate that results in the increase of the shear modulus by \(\approx 5.2\%\). Run 2 gives the shear modulus in the relaxed state produced by the previous run. The sample was heated up to a temperature \(T=333^{\circ}\)C, which produces the complete crystallization and the shear modulus at room temperature becomes by \(\approx 67\%\) larger than that in the initial state. Finally, run 3 gives temperature dependence of the shear modulus \(\mu\) in the crystalline state. To calculate the defect concentration with Eq.(1), one needs to know the shear susceptibility \(\beta\). 
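Once \(\alpha\) and \(\beta\) are fixed, Eq. (1) is a simple pointwise transformation of the two measured modulus curves. A minimal sketch (ours, not the authors' code; the modulus arrays are hypothetical illustrative values and \(\alpha\), \(\beta\) are passed in as parameters) is:

```python
# Minimal sketch of Eq. (1): c(T) = ln(mu(T)/G(T)) / (alpha*beta).
# G and mu are the unrelaxed shear moduli of the glass and of the maternal crystal
# on a common temperature grid; alpha ~ 1 and beta (~18 for this glass, see Eq. (2)
# below) are dimensionless inputs.
import numpy as np

def defect_concentration(G, mu, alpha=1.0, beta=18.0):
    G, mu = np.asarray(G, dtype=float), np.asarray(mu, dtype=float)
    return np.log(mu / G) / (alpha * beta)

# hypothetical illustrative modulus values in GPa on a coarse temperature grid
G_glass = np.array([32.7, 31.5, 29.8])
mu_crystal = np.array([54.6, 53.0, 51.0])
c = defect_concentration(G_glass, mu_crystal)   # defect concentration at each T
```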
To determine \(\beta\), we used the method presented in Ref.[28], according to which this parameter can be calculated as \[\beta=\frac{\Delta G_{rel}}{\rho Q_{rel}}, \tag{2}\] where \(\Delta G_{rel}=G_{rel}^{rt}-G_{ini}^{rt}\) is the change of the shear modulus at room temperature due to structural relaxation, with \(G_{ini}^{rt}\) and \(G_{rel}^{rt}\) being the shear moduli at room temperature before and after structural relaxation, respectively, \(Q_{rel}\) is the corresponding heat release and \(\rho\) is the density. With \(G_{ini}^{rt}=32.7\) GPa and \(G_{rel}^{rt}=34.3\) GPa (see Fig.3) and measured \(Q_{rel}=6670\) J/kg, with Eq.(2) one arrives at \(\beta=18\). One can now calculate the defect concentration (1) assuming \(\alpha\approx 1\) as mentioned above. Figure 2: DSC traces of the samples in the initial (a) and relaxed (b) states at a rate of 3 K/min. The data for three initial and three relaxed samples are shown in order to demonstrate the excellent reproducibility of the measurements. It is seen that initial samples display heat release below the glass transition temperature \(T_{g}\) while relaxed samples exhibit only heat absorption at \(T<T_{g}\). Panel (c) gives DSC runs for the initial samples at the heating rates of 3 K/min and 20 K/min. The corresponding \(T_{g}\)s are shown by the arrows. It is seen that the increase of the heating rate raises \(T_{g}\) by \(\approx 12^{\circ}\)C. Temperature dependence \(c(T)\) for the initial (run 1) and relaxed (run 2) states is given in Fig.4 for the heating rate of 3 K/min. In the initial state, \(c\) weakly depends on temperature up to \(T_{g}\) but rapidly increases with \(T\) in the supercooled liquid state above \(T_{g}\). After the relaxation (run 2), the defect concentration at room temperature becomes smaller by \(\approx 13\%\) and moderately increases with temperature upon subsequent heating below \(T_{g}\). Near \(T_{g}\), the concentration starts to rapidly increase so that in the supercooled liquid state temperature dependence \(c(T)\) in the relaxed state coincides with that for the initial state. Heating into the supercooled liquid state, therefore, completely removes the "memory" of the preceding thermal prehistory. For further analysis, we used the FWHM data for the same glass derived from high-energy synchrotron X-ray diffraction as a function of temperature reported by Neuber _et al_[2]. The original FWHM data from this work (see SI Fig.6 in Ref.[2]) were transformed into the _normalized_ FWHM as \(\gamma=\Gamma/q_{0}\), where \(\Gamma\) is the absolute FWHM of the first \(S(q)\) peak given in Ref.[2] and \(q_{0}=2.9\)\(\mbox{\AA}^{-1}\) is the scattering vector corresponding to this peak [29] (the same \(q_{0}\)-value comes from our X-ray data shown in Fig.1). Temperature dependence of the normalized FWHM \(\gamma\) is given in Fig.4 (X-ray diffraction patterns were measured at a rate of 20 K/min [2]) together with temperature dependence of the defect concentration \(c\). It is seen that \(\gamma\) in the initial state (run 1) is almost temperature independent up to a temperature \(T\approx 200^{\circ}\)C. At higher temperatures \(200^{\circ}\)C\(<T<T_{g}\approx 250^{\circ}\)C, \(\gamma\) decreases by \(\approx 2\%\), which is ascribed to structural relaxation [2]. Upon further heating above \(T_{g}\), the normalized FWHM rapidly increases with temperature. After the relaxation produced by heating up to \(278^{\circ}\)C and cooling to room temperature, \(\gamma\) decreases by \(\approx 3\%\). 
Upon subsequent heating (run 2), \(\gamma\) is nearly temperature independent up to \(T_{g}\). Figure 3: Temperature dependences of the shear modulus in the initial, relaxed and crystalline states. Calorimetric glass transition temperature \(T_{g}\) is indicated. However, in the supercooled liquid region above \(T_{g}\), the normalized FWHM rapidly increases with temperature coinciding with that during run 1. It should be emphasized that quite similar behavior of the FWHM upon heating/cooling of a high-entropy metallic glass below \(T_{g}\) was recently reported by Luan _et al_. [7]. A comparison of temperature dependences of the defect concentration \(c\) and normalized FWHM \(\gamma\) given in Fig.4 clearly shows that they are similar. Figure 4: Temperature dependences of the defect concentration in the initial state (run 1) and after the relaxation (run 2) calculated using Eq.(1) for a heating rate of 3 K/min. The Figure also shows temperature dependences of the normalized FWHM \(\gamma=\Gamma/q_{0}\), where the FWHM \(\Gamma\) is taken from Neuber _et al._ work [2] for a heating rate of 20 K/min and \(q_{0}\) is the scattering vector corresponding to the first \(S(q)\) peak. The calorimetric glass transition temperature \(T_{g}\) determined at 3 K/min is indicated. The scales in left and right ordinate axes are equal and, therefore, \(c\) is proportional to \(\gamma\) in the supercooled liquid state (i.e. at \(T>T_{g}\)). Below \(T_{g}\), the concentration \(c\) and \(\gamma\) weakly depend on temperature although some minor differences are observed. Above \(T_{g}\), both quantities linearly increase with temperature while the corresponding derivatives are very close, \(dc/dT\approx(1.12\pm 0.06)\times 10^{-4}\) and \(d\gamma/dT\approx(1.00\pm 0.02)\times 10^{-4}\). The relaxation-induced changes of these quantities are also close, \(\Delta c\approx 0.0026\) and \(\Delta\gamma\approx 0.0030\) (calculated as corresponding differences for runs 1 and 2 at 50\({}^{\circ}\)C). The most essential difference between these quantities is that the absolute \(\gamma\)-values are by about 0.09 bigger than the absolute values of \(c\). Possible reasons for this difference are discussed below. A better view of the relationship between \(\gamma\) and \(c\) can be obtained if temperature in Fig.4 is excluded, which provides a direct view of the \(\gamma(c)\)-function. This is done in Fig.5, which shows that the behavior of \(\gamma(c)\) is completely different below and above \(T_{g}\). In the initial state (run 1), \(c\) and \(\gamma\) are nearly constant, which corresponds to the temperature range 30\({}^{\circ}\)C\(<T<240^{\circ}\)C in Fig.4. Further heating of the initial sample results in a quick \(\gamma\)-drop in a very narrow range of 
the concentrations. At \(T>T_{g}\), \(\gamma\) linearly increases with \(c\). The relaxed sample (run 2) displays nearly no \(\gamma\) changes upon a significant increase of \(c\) occurring upon heating to \(\approx T_{g}\). Above \(T_{g}\), \(\gamma\) linearly increases with \(c\) just in the same way as for the initial state. The following conclusions can be drawn from the data given in Figs 4 and 5. Heating of the initial samples up to 200\({}^{\circ}\)C almost does not change \(\gamma\) while \(c\) is also nearly constant, which is quite natural for a "frozen" glass structure without any relaxation. However, fast "relaxation" (see Fig.5) takes place at almost constant defect concentration while DSC shows significant exothermal reaction (Fig.2). Above \(T_{g}\), the defect concentration rapidly increases with temperature leading to an increase of the amount of disordered regions of structure, which is directly reflected in a rise of the normalized FWHM. The defect concentration below \(T_{g}\) is strongly reduced in the relaxed state (run 2) as seen in Figs 4 and 5. Shear modulus measurements in this case indicate a monotonous increase of the defect concentration (Fig.4) and the presence of a notable endothermal effect (Fig.2) while \(\gamma\) remains nearly unchanged. The \(\gamma(c)\)-dependence for the relaxed state above \(T_{g}\) is exactly the same as that for the initial samples: \(\gamma\) linearly increases with \(c\) with the same slope \(d\gamma/dc\approx 1.03\). Taking into account strong heat absorption (Fig.2) and shear softening above \(T_{g}\) (Fig.3) one can conclude that the supercooled liquid state is characterized by the intense defect multiplication (Fig.4), which significantly destroys the dominant short-range order and leads to a large \(\gamma\)-increase. Since the memory of the thermal prehistory is lost above \(T_{g}\), this behavior takes place for both initial and relaxed samples. Figure 5: Dependence of the normalized FWHM \(\gamma\) on the defect concentration \(c\) during run 1 (initial state) and subsequent run 2 (relaxed state). Glass transition temperature \(T_{g}\) at 3 K/min and the direction of temperature increase are shown by the arrows. It is seen that \(\gamma(c)\)-dependences below \(T_{g}\) are completely different for the initial and relaxed states while \(\gamma(c)\) for these states in the supercooled liquid region (\(T>T_{g}\)) nearly coincide. Thus, changes in the defect subsystem of glass can lead to either defect-induced disordering (increasing \(\gamma\)) or ordering (decreasing \(\Gamma\)) depending on temperature and thermal prehistory. Possible reasons for the behaviors of \(c\) and \(\gamma\) are considered below. It should also be pointed out once more that the defect concentration was calculated for the heating rate of 3 K/min while the FWHM was measured at 20 K/min. A more correct analysis of the relation between these quantities should be carried out using the data taken at the same heating rate. Due to technical limitations, shear modulus measurements with the present EMAT technique at 20 K/min are not possible. However, we tried to estimate what could happen if \(G(T)\) measurements had been performed at 20 K/min. For this, the \(G(T)\)-function taken at 3 K/min was shifted by 12 K towards higher temperatures according to the \(T_{g}\)-difference at the above rates (see Fig.2c). On the other hand, we also tried to shift the \(\gamma(T)\)-data towards lower temperatures by the same quantity, thus modeling a decrease of the heating rate upon X-ray measurements. Both procedures result in some minor changes of the \(\gamma(c)\)-plot but the derivative \(d\gamma/dc\) in the supercooled liquid state remains almost the same. ### Molecular dynamic simulation Model Fe\({}_{20}\)Ni\({}_{20}\)Cr\({}_{20}\)Co\({}_{20}\)Cu\({}_{20}\) melt was quenched to \(T=0\) K at different rates as indicated below. An increase of the quenching rate results in pronounced shear softening that can be rationalized in terms of an increase of the defect concentration. The identification of defect regions and determination of their concentration can be done qualitatively by the method presented in Refs [21; 22]. 
For quantitative estimates of the defect concentration, we applied Eq.(1) derived within the framework of the IT, as discussed above. The shear moduli of glass and maternal FCC crystal were determined by applying a small shear strain of about \(10^{-3}\). Then, the defect concentration was calculated using Eq.(1) with the product \(\alpha\beta=15.4\) as determined earlier [21]. Figure 6 shows the normalized FWHM \(\gamma\) as a function of the defect concentration \(c\) where the data points correspond to the melt quenching rates indicated in K/s. It is seen that \(\gamma\) linearly increases with \(c\) just as in the case of experimental data for Pt-based glass for temperatures \(T>T_{g}\) while the absolute \(\gamma\)-values are close to those given in Ref.[2]. At that, the slope of \(\gamma(c)\)-dependence, \(d\gamma/dc=0.86\), which is fairly close to the experiment on Pt-based glass (Fig.5). In general, one can conclude that the simulation supports the experiment giving similar results. ### Comments on the relation between the defect concentration and FWHM Let us consider possible reasons for the differences in \(c\) and \(\gamma\) absolute values. It should be first of all noted noted that the defect concentration \(c\) in Eq.(1) depends on the choice of the shear susceptibility \(\beta\), which is determined to a precision of \(10-15\%\). Besides that, the above calculation with this equation for the Pt-based glass assumed \(\alpha=1\) since the exact value of this quantity is unknown. The estimates for real MGs give \(\alpha\approx 0.6\) while molecular static simulations of crystalline pure metals and Fe\({}_{20}\)Ni\({}_{20}\)Cr\({}_{20}\)Co\({}_{20}\)Cu\({}_{20}\) lead to the values \(0.50\leq\alpha\leq 0.78\)[31]. The uncertainties in the choice of \(\alpha\) and \(\beta\) will lead to corresponding changes of the defect concentration and, therefore, to a shift of the abscissas in Figs 5 and 6. However, such redefining of \(c\) does not generally affect the consideration given above. It is to be also noted that the preparation and heat treatment of samples in the present work and Ref.[2] are notably different. In the latter case, quenched samples for the X-ray investigation were relatively thin (0.5 mm thick) while the castings in the present work are 2 mm thick. Therefore, the melt quenching rate in our work is notably smaller and the samples are more relaxed. Moreover, the relaxed state (i.e. run 2) of the samples in Ref.[2] were obtained by heating and cooling at 20 K/min while the rate of 3 K/min was applied in the present work. Thus, the defect concentration was the largest in the initial samples in Ref.[2] and minimal in the relaxed specimens tested in our work. The initial samples in the present work and relaxed samples in Ref.[2] should have some intermediate defect concentration. This is in a qualitative agreement with temperature dependences of \(c\) and \(\gamma\) below \(T_{g}\). Indeed, temperature dependence of \(\gamma\) for the initial samples in Ref.[2] shows an abrupt decrease near \(T_{g}\) (Fig.4). At that, \(c\) and \(\gamma\) for the relaxed specimens tested in Ref.[2] and initial samples in the present work are almost temperature independent up to \(T_{g}\) but rapidly increase at higher temperatures. The relaxed samples in our work demonstrate a notable growth of the defect concentration even below \(T_{g}\). 
Another possible reason for the difference in the absolute values of \(c\) and \(\gamma\) could be related to additional sources of \(S(q)\) broadening. In part, this possibility is supported by the fact that the extrapolation of the \(\gamma(c)\)-dependence in Fig.6 towards small \(c\) does not lead to zero \(\gamma\)-values but approaches a constant of about \(\approx 0.064\), which is close to the difference between the absolute values of \(\gamma\) and \(c\) in Fig.4. Figure 6: Dependence of the normalized FWHM \(\gamma\) on the defect concentration \(c\) calculated using Eq.(1) for model glassy Fe\({}_{20}\)Ni\({}_{20}\)Cr\({}_{20}\)Co\({}_{20}\)Cu\({}_{20}\) produced with different melt quenching rates (indicated in K/s). The straight line gives a least-square linear fit. It is also to be noted that the extrapolation of the \(\gamma(c)\)-dependence in Fig.5 similarly does not lead to zero \(\gamma\)-values upon \(c\to 0\) although this is less pronounced as compared with Fig.6. The reason for additional \(S(q)\) broadening should be evidently related to the non-crystalline "matrix" itself, which, as mentioned above, can be considered as an array of relatively large clusters with predominant (e.g. icosahedral) short range order. However, the investigation described above indicates that the defects provide larger \(S(q)\) broadening at \(T>T_{g}\). This investigation provides a simple measure of defect-induced ordering/disordering in the supercooled liquid state since the changes between the normalized FWHM and defect concentration are nearly equal, i.e. \(\Delta\gamma\approx\Delta c\). ## 4 Conclusions We performed precise measurements of the high-frequency shear modulus of bulk Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{9.5}\)P\({}_{21}\) glass in the initial and relaxed (preannealed) states and on this basis calculated temperature dependence of the defect concentration using the Interstitialcy theory. These calculations were compared with temperature dependence of the full width at half maximum (FWHM) \(\Gamma\) of the first \(S(q)\)-peak derived by Neuber _et al_. [2] using synchrotron X-ray diffraction data taken on the same glass. It is found that the normalized FWHM \(\gamma=\Gamma/q_{0}\) (\(q_{0}\) is the scattering vector corresponding to the first \(S(q)\) peak), which constitutes an integral measure of structural disordering, linearly increases with the defect concentration \(c\) above the glass transition temperature \(T_{g}\) in the same way both for initial and relaxed specimens. This means that strong heat absorption and related rapid \(c\)-increase in the supercooled liquid region provide a significant defect-induced disruption of the dominant short-range order, increasing thus the integral structural disordering. The determined fact that \(\Delta\gamma\approx\Delta c\) in this region provides a simple quantitative measure of this disordering. Below \(T_{g}\), the interrelation between \(\gamma\) and \(c\) is quite complicated and entirely different for initial and relaxed samples. In the former case, strong defect-induced ordering upon approaching \(T_{g}\) is observed while relaxed samples do not reveal any clear ordering/disordering. Possible reasons for these differences can be due to the variations in the defect concentrations because of different degree of relaxation of specimens used for the measurements of \(c\) and \(\gamma\) as well as the presence of additional sources of \(S(q)\) broadening, which are not related to the defect subsystem. 
We also performed a molecular dynamics simulation of the relationship between \(\gamma\) and \(c\). Since the interatomic interaction in the Pt-based glass under investigation is unknown, we performed modeling of high-entropy non-crystalline Fe\({}_{20}\)Ni\({}_{20}\)Cr\({}_{20}\)Co\({}_{20}\)Cu\({}_{20}\). The defect concentration was again calculated using the Interstitialcy theory. It was found that \(\gamma\) linearly increases with \(c\) in the same way as in the case of the Pt-based glass, and the derivative \(d\gamma/dc\) is also close to unity. Overall, the simulation agrees with the results obtained by the analysis of experimental data on the Pt-based glass. ## Acknowledgements The work was supported by the Russian Science Foundation under the project 20-62-46003. The kind help of Prof. A.S. Aronin (Institute for Solid State Physics RAS) with X-ray measurements is greatly acknowledged.
2310.14951
Overview of Caching Mechanisms to Improve Hadoop Performance
In today's distributed computing environments, large amounts of data are generated from different resources with a high velocity, rendering the data difficult to capture, manage, and process within existing relational databases. Hadoop is a tool to store and process large datasets in a parallel manner across a cluster of machines in a distributed environment. Hadoop brings many benefits like flexibility, scalability, and high fault tolerance; however, it faces some challenges in terms of data access time, I/O operation, and duplicate computations resulting in extra overhead, resource wastage, and poor performance. Many researchers have utilized caching mechanisms to tackle these challenges. For example, they have presented approaches to improve data access time, enhance data locality rate, remove repetitive calculations, reduce the number of I/O operations, decrease the job execution time, and increase resource efficiency. In the current study, we provide a comprehensive overview of caching strategies to improve Hadoop performance. Additionally, a novel classification is introduced based on cache utilization. Using this classification, we analyze the impact on Hadoop performance and discuss the advantages and disadvantages of each group. Finally, a novel hybrid approach called Hybrid Intelligent Cache (HIC) that combines the benefits of two methods from different groups, H-SVM-LRU and CLQLMRS, is presented. Experimental results show that our hybrid method achieves an average improvement of 31.2% in job execution time.
Rana Ghazali, Douglas G. Down
2023-10-23T13:52:22Z
http://arxiv.org/abs/2310.14951v1
**Overview of Caching Mechanisms to Improve Hadoop Performance** ## Abstract In today's distributed computing environments, large amounts of data are generated from different resources with a high velocity, rendering the data difficult to capture, manage, and process within existing relational databases. Hadoop is a tool to store and process large datasets in a parallel manner across a cluster of machines in a distributed environment. Hadoop brings many benefits like flexibility, scalability, and high fault tolerance; however, it faces some challenges in terms of data access time, I/O operation, and duplicate computations resulting in extra overhead, resource wastage, and poor performance. Many researchers have utilized caching mechanisms to tackle these challenges. For example, they have presented approaches to improve data access time, enhance data locality rate, remove repetitive calculations, reduce the number of I/O operations, decrease the job execution time, and increase resource efficiency. In the current study, we provide a comprehensive overview of caching strategies to improve Hadoop performance. Additionally, a novel classification is introduced based on cache utilization. Using this classification, we analyze the impact on Hadoop performance and discuss the advantages and disadvantages of each group. Finally, a novel hybrid approach called Hybrid Intelligent Cache (HIC) that combines the benefits of two methods from different groups, H-SVM-LRU and CLQLMRS, is presented. Experimental results show that our hybrid method achieves an average improvement of 31.2% in job execution time. _Keywords: Hadoop performance, Caching mechanism, MapReduce, Hybrid Intelligent cache_ ## 1-Introduction Hadoop [1] is an open-source framework for the parallel processing of large datasets in a distributed environment. Hadoop consists of two major components: HDFS (Hadoop Distributed File System) [2] as a storage media and MapReduce [3] as a parallel processing model. HDFS has a master/slave architecture, dividing input data into data blocks with identical sizes that are distributed among data nodes. Moreover, it provides multiple copies of each data block (referred to as the replication factor) and keeps them in different data nodes to provide high fault tolerance. MapReduce is a parallel programming model that includes two types of tasks: Map and Reduce. In the first phase, the Map task converts input data into \(<\)key, value\(>\) pairs (referred to as intermediate data). These intermediate data are then sorted and shuffled based on identical keys. In the second phase, these data are passed to the Reduce task as input data and values with the same keys are merged to generate final results. Hadoop faces some challenges that have a significant impact on reducing its efficiency: 1. One of the bottlenecks of Hadoop is related to HDFS because it is based on a hard disk drive (HDD) system, which has high access time for I/O operations, resulting in a negative impact on the overall job execution time. 2. The shuffling phase in MapReduce is a time-consuming operation that can account for more than 33% of the job execution time. 3. A large amount of intermediate data is discarded after finishing the process. Due to the lack of any mechanism to identify duplicate computations in MapReduce, it is necessary to reload and recompute intermediate data whenever applications need to reuse them, leading to resource wastage and poor performance. 4. 
In an iterative program, data may be unchanged from one iteration to the next. MapReduce does not have any mechanism to reuse computational data so data must be re-loaded and re-processed at each iteration. In addition, recognizing termination conditions or fixed points (when the application's output does not change for consecutive iterations) requires an extra MapReduce job on each iteration. These two issues lead to reducing the effective use of resources such as network bandwidth and CPUs. 5. The job scheduler may assign tasks to a node that does not contain the required input data. In this case, data must be transferred from a remote data node, resulting in increased network traffic and delays in execution. 6. When failure occurs, only the requested data block is recovered and is not saved for future use. Therefore, if this data block is needed again, the recovery operation must be repeated. In recent years, many researchers have proposed caching mechanisms as an approach to tackle these challenges. The caching mechanism could prefetch on-demand data into cache memory, which has a more rapid access time than HDFS. As a result, an effective caching strategy can have a significant effect on reducing execution time and improving Hadoop efficiency. A caching mechanism consists of two phases: the placement phase and the delivery phase. The placement phase determines how to store and place data into the cache memory, based on their popularity. The second phase is the delivery phase, which retrieves data from cache memory based on the actual demands of the tasks. In this phase, the main limitations are the rate to serve the requested content and network congestion. A caching mechanism must cope with restricted cache space. For instance, a replacement algorithm is needed to determine which existing data should be evicted when inserting new content if the cache is at full capacity. In this paper, we provide a comprehensive overview of proposed methods that utilize a caching mechanism for improving Hadoop performance. We classify them based on the performance issue that they address, and we analyze their effect on Hadoop performance and discuss their advantages and disadvantages. Also, we present a novel hybrid approach called Hybrid Intelligent Cache (HIC) that combines the benefits of two methods from different classes, H-SVM-LRU and CLQLMRS. The rest of this article is organized as follows. Section 2 discusses the impact of using a caching mechanism on Hadoop performance. We classify existing caching solutions based on their approach to solving Hadoop's performance issues in Section 3. We compare them and investigate their effect on Hadoop performance in Section 4. The novel hybrid method which combines H-SVM-LRU with CLQLMRS is presented in Section 5 and finally, Section 6 contains concluding remarks and suggestions for future work. **2- The benefits of a caching mechanism to address Hadoop performance challenges** The Hadoop framework has been found to suffer from a number of problems that result in poor performance. To address these issues, many researchers have suggested the use of caching mechanisms. This strategy offers several advantages when applied to Hadoop, including: 1. By caching data, access time is significantly reduced compared to accessing data from a disk. This leads to a reduction in the number of I/O operations, a decrease in prefetching delays, and an overall improvement in execution time. In this way, caching can help to solve the problem of I/O operation time. 2. 
Storing intermediate data in the cache can prevent resource wastage and eliminate the need for recomputing in the MapReduce programming model, resulting in decreased execution time and resource utilization. 3. By reducing the amount of data transmission through the network, caching data has a positive impact on reducing execution time. 4. When assigning tasks to nodes, considering cache locality in scheduling decisions can improve data locality, resulting in improved system performance. Given the benefits of using caching mechanisms to improve Hadoop performance, there has been considerable research in this area. We present a novel classification based on the specific Hadoop problems that existing proposed caching mechanisms have been designed to solve. One benefit of this classification is that it identifies where there is the potential to combine caching mechanisms from different groups. We give a concrete example of this in Section 5. **3- Novel classification of caching mechanisms in Hadoop** As mentioned in the previous section, Hadoop can encounter challenges for which a caching mechanism may provide a solution. Caching mechanisms have been the subject of a significant body of research. These can be divided into two primary categories contingent upon the principal research objectives. Within the initial category, the focus is on the strategic deployment of cache resources and the identification of data types amenable to caching, thereby addressing foundational Hadoop performance issues. This particular classification of caching mechanisms shall be referred to as the "basic caching mechanism" within the context of this article. The second category encompasses efforts geared towards the enhancement of various facets inherent to caching mechanisms. For example, the refinement of cache replacement policies to optimize the utilization of limited cache capacity while circumventing issues of cache contamination may yield improvements for cache hit ratios. Furthermore, improving prefetch mechanisms by orchestrating the prefetching of a suitable volume of data at opportune junctures has the potential to positively influence data retrieval time. Consequently, the nomenclature "enhanced caching mechanism" will be used for this latter grouping. In this study, a novel classification of basic caching strategies is presented based on the performance issue that they are designed to address. These caching mechanisms are categorized into four groups, namely: 1. Caching mechanisms for improving data access time 2. Caching mechanisms for avoiding duplicate computations 3. Caching mechanisms for decreasing data transmission time 4. Caching mechanisms for other purposes. Furthermore, enhanced caching mechanisms can be divided into two groups based on their approach to improving performance: 1. Enhanced caching via improving the cache replacement strategy 2. Enhanced caching via optimizing the prefetch mechanism. This paper will discuss existing caching strategies in each group, along with their strengths and weaknesses. ### Basic caching mechanisms This section focuses on the implementation of caching mechanisms to optimize the performance of Hadoop. As previously mentioned, leveraging caching strategies in Hadoop presents several advantages that can be categorized into four distinct groups. In the subsequent sections, we will delve into a range of caching approaches, within their respective groups. 
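Before turning to the individual groups, the following minimal Java sketch makes the placement/delivery/replacement pipeline described above concrete: a capacity-bounded cache with LRU eviction, the default replacement policy of most systems surveyed below. It is a generic illustration only, not the implementation of any particular system.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal capacity-bounded cache with LRU eviction (illustrative only). */
public class LruBlockCache<K, V> {
    private final int capacity;
    private final Map<K, V> map;

    public LruBlockCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true: iteration order follows recency of access,
        // so the eldest entry is the least recently used one.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LruBlockCache.this.capacity; // replacement phase
            }
        };
    }

    /** Delivery phase: serve from the cache if present, otherwise report a miss (null). */
    public V get(K key) {
        return map.get(key);
    }

    /** Placement phase: insert (or refresh) a block; may trigger LRU eviction. */
    public void put(K key, V value) {
        map.put(key, value);
    }

    public static void main(String[] args) {
        LruBlockCache<String, String> cache = new LruBlockCache<>(2);
        cache.put("block-1", "data-1");
        cache.put("block-2", "data-2");
        cache.get("block-1");                     // touch block-1 so block-2 becomes LRU
        cache.put("block-3", "data-3");           // evicts block-2
        System.out.println(cache.get("block-2")); // null -> cache miss
        System.out.println(cache.get("block-1")); // data-1 -> cache hit
    }
}
```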
#### 3.1.1 Caching mechanisms for improving data access time In Hadoop, data is stored in the Hadoop Distributed File System (HDFS), which is specifically designed to handle large datasets across multiple nodes in a cluster. However, since HDFS is based on a disk mechanism, accessing data from a disk can be a time-consuming process. As a result, caching frequently accessed data in memory can speed up data access by reducing the number of referrals to HDFS to read data from disk. This can significantly reduce data access time and improve overall system performance. In this section, we explore some caching strategies that are commonly used to achieve this purpose. One approach proposed by Zhang et al. is the _HDFS-based Distributed Cache System (HDCache)_[4], which is a data access model designed for write-once-read-many files. This model comprises a client library (_libCache_) and multiple cache services. The _libCache_ library contains shared memory access and the communication and control module, which interacts with the _HDCache_ service on the same host, communicates with _ZooKeeper_ servers remotely, and calculates hash values of desired files to locate a specific cached file. The multiple cache services provide three access layers, including an in-memory cache, a snapshot of the local disk, and the actual disk view. This method can improve real-time file access performance in a cloud computing environment. However, it does not consider complex and dynamic data access models of real-time services. Another caching middleware is _Hycache_[5], which serves as an intermediary between the distributed file system and local storage devices to enhance SSD performance at a low cost. _Hycache_ consists of three components: the Request Handler, File Dispatcher, and Data Manipulator. The Request Handler receives requests from HDFS and passes them to the File Dispatcher, which uses cache replacement algorithms like LRU and LFU to decide which files in the SSD should be swapped to the HDD. Data are manipulated between two access points via a Data Manipulator. The merits of this method are low latency in data access, high throughput, and multithread support. However, it is not scalable and only considers local data movement. Hycache+ [6] presents a scalable caching middleware on the computing node side to improve I/O performance for the parallel file system. Additionally, the 2-Layer Scheduling (2LS) approach was proposed to enhance cache locality. In the first layer, the job scheduler assigns a job to available nodes to minimize file size transmission through nodes. In order to maximize the total cached file size, the second layer locates data across local storage by considering file size and its access frequency. Hycache+ uses a heuristic caching approach instead of LRU to optimize caching effects. Minimizing network cost and scalability are advantages of this strategy. The _Two Level greedy caching strategy_[7] is a means to address the discrepancy between disk access time and bandwidth in large cluster-based systems. The primary objective is to develop a proactive fetching and caching mechanism by integrating the Memcached distributed caching system with Hadoop. The Two-Level greedy caching approach combines two distinct simple greedy caching policies, namely receiver-only greedy caching and sender-only greedy caching. This method aims to enhance data access time by augmenting the cache hit ratio. 
However, it is important to note that this technique also introduces additional overhead in terms of network traffic. #### 3.1.2 Caching mechanisms for avoiding duplicate computations In certain MapReduce applications, particularly those that involve iterative processes, it may be necessary to reuse intermediate data generated during the Map phase. Failure to save these data may result in redundant computation, which can be a significant waste of resources. By caching such data, duplicate calculations can be eliminated and the data can be easily retrieved from the cache, thereby saving time. Consequently, caching plays a crucial role in Hadoop as a means of avoiding the recomputation of intermediate data. Storing such data in a cache enables quick and easy access, thereby improving performance and reducing processing time. In the following, we discuss various caching strategies that can be employed to address this issue. _Haloop_[8] was specifically developed to address the challenges posed by iterative applications in the Hadoop framework. To achieve this goal, _Haloop_ employs caching of invariant data that can be reused in subsequent iterations, with the cache being indexed to expedite processing. Three types of caches are utilized in this approach. First, the Mapper input cache is used to avoid non-local data reads during non-initial iterations in the Map phase. Second, the Reducer input cache is employed to reduce shuffling time. Third, the Reducer output cache is utilized to identify a fixed point where the application's output is unchanged for two consecutive iterations. In addition, the Loop Aware Task scheduler was proposed to consider inter-iteration locality and allocate Map and Reduce tasks to worker nodes that access the same data but in different iterations. To achieve this, the NameNode must provide a mapping between the data block and the worker node that processed them in the previous iteration. While this strategy can significantly reduce run time and shuffling time, it requires a fixed number of Reduce tasks across iterations. _Incoop_[9] was designed specifically for incremental applications to identify changes in input data and automatically update the corresponding output by reusing previous results. This approach employs three techniques to achieve its objectives. First, the Incremental HDFS technique provides stable partitions of input data by using content-based chunking instead of fixed-size chunking to maximize the overlap between data chunks. Second, the Contraction phase controls the granularity of tasks in terms of their size and dependencies by combining Reduce tasks hierarchically. Third, a memorization-aware scheduler was developed to minimize data transmission across networks by assigning tasks to machines that store the results of sequential jobs for reuse. This policy considers the tradeoff between maximizing the locality of results and minimizing straggler tasks by employing a work-stealing algorithm that ensures nodes are not idle while runnable tasks are waiting. _Incoop_ is suitable for compute-intensive jobs as it avoids recomputation by reusing results, and it can reduce execution time through location-aware scheduling for memorization. However, this approach can generate computational overhead in the incremental HDFS for small dataset sizes. _Data-Aware Cache (Dache)_[10] was proposed as a mechanism to avoid discarding a large amount of intermediate data and recomputing them, thereby improving CPU utilization and accelerating execution time. 
To achieve this goal, results of tasks are cached, and each task queries the cache before its execution. Additionally, Dache includes two types of cached items stored in a Map cache and a Reduce cache. This method employs a novel cache description scheme that represents each cached item by a tuple comprising the input file and an operation applied to it. Furthermore, a request and reply protocol is utilized to access a cached item by matching the input file and its operation to reuse them. While this strategy provides benefits such as reducing job completion time and improving CPU utilization, there is a limitation in the data partition scheme such that it should use the same data split in both the data and cached item. This technique is well-suited for incremental processing that requires the application of duplicate calculations. _Redoop_[11] was designed to optimize recurrent queries in the Hadoop infrastructure through the use of window-aware techniques. This includes adaptive window-aware data partitioning, cache-aware task scheduling, and an inter-window caching mechanism. The dynamic data packer splits input data partitions into smaller panes based on statistical information gathered by the execution profiler, while the inter-window caching mechanism avoids redundant computation by caching intermediate data. Additionally, cache-aware task scheduling performs load balancing by utilizing cache locality in its decisions. _Redoop's_ strengths include reducing I/O costs and providing workload balancing, but its weakness lies in the generation of overhead due to the gathering of statistical information. _CURT MapReduce_ was proposed in [12] with the ability to cache and utilize task results to avoid recomputation overhead. The strategy involves the use of the _Intelligent Square Divisor (IDS)_ algorithm to split input and intermediate data into a specific number of pieces, a Cache Data Seeker to search for existing input data and corresponding results from previous tasks, and a Cache Data Creator to format cached data into input-to-output tuples. _CURT MapReduce_ is appropriate for iterative applications as it reduces resource wastage and computation overhead, leading to a positive impact on performance. However, it may not be efficient for applications with a low volume of recomputation. #### 3.1.3 Caching mechanisms for decreasing data transmission The utilization of caching mechanisms is a viable approach to improving the efficiency of data transmission across networks. This is achieved by taking into account the cache locality in job scheduling policies. Consequently, tasks can be allocated to nodes that have cached the requisite data, leading to a reduction in data access time, and an increase in task locality rates. By doing so, this approach effectively prevents the need for extra data transmission through the network, contributing to overall improved Hadoop performance. Zhang et al. [13] employed a weighted bipartite graph paradigm in which vertices correspond to Map tasks and resources, and the relationships between these elements are established through edge weights. Within this framework, Map tasks undergo prioritization and are subsequently organized into a selected matrix predicated on the spatial arrangement of task input data. The selection of suitable worker nodes for task processing is facilitated through the implementation of maximum matching algorithms. 
While this approach improves data and cache locality for map tasks, it does not incorporate considerations pertaining to the overall workload of the cluster. _LARD (Locality Aware Request Distribution)_[14] leverages disk buffer caches and predicts the storage location of cached data by using information about where previous requests were processed. The effectiveness of this approach decreases for large file sizes and frequent alterations, where more prediction errors result in increased disk access durations and suboptimal system performance. Embedded within the operating system's buffer cache, _CATS (Cache Aware Task Scheduling)_[15] is characterized by two primary components: a buffer cache probe that gathers information regarding cached data across each worker node, and a task scheduler that considers cache locality when making scheduling decisions. A drawback of this approach is that data locality is not considered. This drawback is further exacerbated by the overhead resulting from disk accesses that ensue when cache-local tasks cannot be instantiated. The _Adaptive Cache Local Scheduling Algorithm (ACL)_[16] is based on a cache affinity-aware replacement algorithm intended for data block eviction from the in-memory cache. To determine cache affinity, a value, denoted as C, is computed, giving the number of times a task is overlooked by the job scheduler in order to satisfy cache locality. The value of C is proportional to the percentage of cached input data for the job. To mitigate instances of starvation, the algorithm dictates that if the scheduler overlooks a task D times, said task should be dispatched to a node containing the requisite input data and possessing an available slot. Nevertheless, this strategy does not factor in the performance ramifications resulting from concurrently executing applications within a given workload. Moreover, the _ACL_ algorithm has overhead for scheduling and deployment on cache-local and data-local nodes. Furthermore, the efficacy of this approach requires the tuning of parameters C and D. In an effort to enhance task locality in terms of both cache locality and data locality, Ghazali et al. proposed _CLQLMRS (Cache Locality with Q-Learning in MapReduce Scheduling)_[17]. This scheduling method employs a form of reinforcement learning known as Q-Learning to train a scheduling policy without requiring prior environmental information. The objective of this approach is to improve execution time by reducing the amount of data transmission. _CLQLMRS_ is particularly suitable for I/O-Bound jobs and data-oriented applications. #### 3.1.4 Caching mechanisms for other purposes Despite its original intended purpose, caching mechanisms in Hadoop have been utilized by researchers for other beneficial purposes, such as enhancing scalability, reducing shuffling time, and improving data recovery. In the following section, we describe these approaches in greater detail. The _Separation_ method [18] was developed to address the scalability issues of Hadoop clusters and overcome the NameNode's memory limitations when storing filesystem metadata. To achieve this, the NameNode utilizes a cache to retain frequently used data for high availability. Furthermore, a maximum memory capacity threshold is set for the storage of metadata in the NameNode. Once the volume of metadata in the NameNode's memory reaches a threshold value, the separation algorithm is activated. 
This algorithm transfers the least recently used metadata from the NameNode to secondary storage, thereby caching only frequently used metadata. The advantages of this strategy are improved availability and scalability, although it requires adaptation of the threshold value, and generates some overhead by introducing count and last-time fields in the metadata. Maddah Ali et al. proposed _Coded MapReduce_[19] to reduce the communication load during the shuffling phase. This approach employs coded multicast to exploit the repetitive mappings of the same data block at different servers. Although this method increases processing time due to the repetitive execution of Map tasks, it effectively balances the computation load with the interserver communication load. Coded MapReduce can reduce communication load by a multiplicative factor that grows linearly with the number of servers. However, this approach imposes additional calculations, which can negatively impact execution time. When a failure occurs in a Hadoop cluster, and some DataNodes become unavailable, the degraded process begins to serve incoming data requests by utilizing their replicas in surviving nodes. However, requested data can only be recovered and not shared, leading to wastage of resources, particularly network bandwidth, and resulting in increased execution time and poor performance. To address this issue, _Cooperative, Aggressive Recovery and Caching (CoARC)_[20] was introduced as a novel data recovery mechanism in the distributed file system. This approach aims to avoid redundant recovery of failed blocks by recovering unavailable data blocks on the same strip in addition to the requested data and then caching them in a separate node for accessibility. In this mechanism, the Least Recently Failed (LRF) algorithm was presented as a new cache replacement algorithm. As most of the DataNodes come back alive after some time, there is no need to write back the data blocks to HDFS. The benefit of this approach is that it recovers all unavailable data blocks in the strip without any additional overhead and network traffic, leading to reduced execution time. ### Enhanced caching mechanisms A considerable amount of research has been conducted to enhance caching mechanisms, either by improving cache replacement or prefetch strategies. Consequently, there are two groups of enhanced caching mechanisms. This section will provide a detailed survey of both groups. #### 3.2.1 Enhanced caching via improving cache replacement strategy The cache replacement algorithm plays a vital role in improving the cache hit ratio and cache space utilization. Various cache replacement techniques consider different features of the cached items to evict them when the cache capacity is reached. This section will investigate different cache replacement strategies employed in Hadoop, including their advantages and disadvantages. Within the context of _PacMan_[21], the concurrent execution of parallel jobs is organized in a wave-like fashion, characterized by the all-or-nothing property. This operational framework incorporates two innovative in-memory cache replacement strategies, namely, _LIFE_ and _LFU-F_. The _LIFE_ strategy is distinguished by its eviction of data blocks associated with files possessing the widest wave-width, a tactic designed to diminish the average job completion time. Conversely, _LFU-F_ evicts data blocks with infrequent access, thereby optimizing cluster efficiency. 
Notably, both mechanisms circumvent cache pollution through the implementation of a window-based aging mechanism and prioritize caching data blocks originating from completed files. These strategies are most suitable for data-intensive parallel jobs. The _WSClock_ algorithm, as introduced in _Enhanced Data-Aware Cache (EDACHE)_[22], is a cache replacement algorithm that relies on a circular list for the management of cached items, particularly intermediate data. The determination of evicted data is based on the examination of a reference bit and the last time of usage. Specifically, the reference bit is initially inspected; if its value is found to be one, indicative of recent usage, the reference bit is reset, the timestamp of last usage is updated, and the clock hand progresses. Conversely, in cases where an item's age surpasses a predefined threshold value, it is selected for eviction. This algorithm may not be well suited for managing large blocks of data due to the extended search times required to locate requested content. _A Modified ARC replacement algorithm_ was introduced in [23]. This algorithm integrates the Least Recently Used (LRU) and Least Frequently Used (LFU) approaches, resulting in the creation of two distinct cache segments: the recent cache and the frequent cache. These segments store data blocks, each equipped with an individual history section containing references to the corresponding data blocks. These references play a pivotal role in determining eviction decisions. When a request for a specific block is made, the algorithm examines the references within both history caches. If the references are found, the corresponding block is allocated within either the recent cache or the frequent cache. Conversely, if the references are absent, the cache consults the history caches and subsequently accommodates the request from one of them. This process enhances caching efficiency, expediting the retrieval of files for preliminary assessments. A reference discovery in the recent history prompts the placement of the block in the recent cache, followed by its transfer to the frequent cache. As such, successful references within either history cache trigger the removal of the reference itself and facilitate the placement of the corresponding block within either the recent or frequent cache. Importantly, the caching of a block encompasses the concurrent caching of associated metadata. When either of the caches reaches full capacity, the eviction of a block is executed from the recent or frequent cache; however, the pertinent reference is retained within the corresponding history. When saturation occurs within either of the history caches, the references simply exit the cache, thereby concluding their presence. Another noteworthy cache replacement algorithm is the _Cache Affinity Aware Cache Replacement Algorithm_[24]. This novel approach leverages cache affinity as a quantifiable metric for prioritizing the caching of input data. In cases where multiple data blocks exhibit identical cache affinity, the algorithm employs the LRU policy to determine the eviction of a particular block. The efficacy of this method hinges upon the availability of accurate information pertaining to the cache affinity of various applications. The block goodness-aware cache replacement strategy [25] employs a dual-metric framework to effectively administer cache space. 
Specifically, this approach incorporates the Cache Affinity (CA) metric, representing an application-oriented attribute that measures the degree of affinity exhibited by applications toward cache resources. Concurrently, the Block Goodness (BG) metric measures the intrinsic popularity of distinct data blocks within the cache context. This strategy initially computes the BG value, a process informed by the cumulative access frequency of data blocks. The eviction selection process ensues, wherein a data block possessing the smallest BG value is chosen for removal from the cache. In scenarios where multiple data blocks have the minimal BG value, the data block endowed with the earliest access timestamp is chosen. The _Adaptive cache algorithm_[26] incorporates a dual cache replacement framework encompassing _Selective LRU-K (SLRU-K)_ and _Exponential-Decay (EXD)_ mechanisms within the Hadoop Distributed File System (HDFS) cache, specifically designed for Big SQL workloads. Initially, the algorithm partitions tables, directing them into the HDFS cache. Within this context, the _SLRU-K_ strategy adopts a weight heuristic to facilitate the selective inclusion of partitions within the cache. This approach requires the maintenance of a list capturing the last access times of the K most recent interactions for each partition. This algorithm necessitates a greater allocation of space to accommodate the storage of access time data, a notable drawback. In contrast, the _EXD_ mechanism exclusively retains information concerning the most recent access time, utilizing this data to compute a score for each partition. This score determines the prioritization between access frequency and recency. The Adaptive _SLRU-K_ and _EXD_ approach dynamically adapt their behavior in response to diverse workload access patterns, adjusting their parameter values to align with the requirements of specific workloads. Beyond the above approaches, several cache replacement strategies leverage the capabilities of machine learning techniques to improve cache hit ratios and optimize cache space utilization. The _AutoCache_ strategy [27] employs a lightweight gradient-boosted tree model, specifically the _XGBoost_ algorithm, to forecast file access patterns within the HDFS cache environment. This predictive model quantifies the likelihood of accessing a particular file through the derivation of a probability score. This probability score guides the cache replacement policy, strategically mitigating cache pollution. Notably, when the available cache space falls to below 10% of its total capacity, the eviction procedure is triggered. This operation persists until the cache's capacity drops below the 85% threshold. A significant advantage of this strategy is its minimal computational overhead, achieved by constraining computations to a predetermined number of files. To mitigate cache pollution, _H-SVM-LRU_[28] utilizes a smart classification _Support Vector Machine (SVM)_ to classify cached items into two classes based on their likelihood of future reuse. Subsequently, the LRU cache replacement strategy is employed, resulting in an improved cache hit ratio. This combined strategy can have a notable effect on job execution time, as more tasks have the opportunity to utilize cached data instead of accessing data from HDFS. Consequently, the reduction in I/O operation time is one of the advantages of this approach. However, it should be noted that the need for training data poses a limitation to the effectiveness of this method. 
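To make the idea behind class-aware replacement schemes such as H-SVM-LRU more tangible, the following Java sketch evicts entries predicted to be non-reusable before falling back to plain LRU. The classifier is a simple stub standing in for the trained SVM, so this is a hedged illustration of the general mechanism rather than the authors' implementation.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.function.Predicate;

/** Class-aware LRU sketch: evict items predicted non-reusable first, then fall back to LRU. */
public class ClassAwareLruCache<K, V> {
    private final int capacity;
    private final Predicate<K> likelyReusable;   // stand-in for the SVM classifier
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true); // access order

    public ClassAwareLruCache(int capacity, Predicate<K> likelyReusable) {
        this.capacity = capacity;
        this.likelyReusable = likelyReusable;
    }

    public V get(K key) { return map.get(key); }

    public void put(K key, V value) {
        if (!map.containsKey(key) && map.size() >= capacity) {
            evictOne();
        }
        map.put(key, value);
    }

    private void evictOne() {
        // First pass: remove the least recently used entry classified as non-reusable.
        for (Iterator<K> it = map.keySet().iterator(); it.hasNext(); ) {
            if (!likelyReusable.test(it.next())) { it.remove(); return; }
        }
        // Fallback: plain LRU eviction (iteration starts at the least recently used entry).
        Iterator<K> it = map.keySet().iterator();
        if (it.hasNext()) { it.next(); it.remove(); }
    }

    public static void main(String[] args) {
        // Toy classifier: pretend blocks whose name ends in "tmp" will not be reused.
        ClassAwareLruCache<String, String> cache =
                new ClassAwareLruCache<>(2, key -> !key.endsWith("tmp"));
        cache.put("input-block", "A");
        cache.put("shuffle-tmp", "B");
        cache.put("result-block", "C");               // evicts "shuffle-tmp" although "input-block" is older
        System.out.println(cache.get("input-block")); // A
        System.out.println(cache.get("shuffle-tmp")); // null
    }
}
```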
#### 3.2.2 Enhanced caching with prefetching The implementation of a prefetching mechanism, designed to retrieve data blocks from the HDFS and store them within the cache, can be effective in the reduction of data access latency. To effectively harness the benefits of prefetching techniques, one must account for specific conditions, including the optimal timing for initiating prefetch operations to minimize conflicts with concurrent activities, as well as ensure timely utilization of prefetched data. Moreover, it is essential to prefetch an appropriate volume of data, aligning with the cache capacity, thereby mitigating cache loss ratios. In this section, a number of prefetching mechanisms are described, each aimed at enhancing the performance of the Hadoop framework: The realm of big data applications is characterized by the manipulation of extensive data block volumes, presenting a challenge in facilitating unhindered access to input data blocks by all computational tasks from the cache. The inherent difficulty lies in the low likelihood of data blocks being accessed from the cache prior to eviction. In response, a solution known as _Just Enough Cache (JeCache)_ was introduced [29]. This solution introduces a just-in-time data block prefetching mechanism, which dynamically assesses data block access patterns and computes the average data processing duration to ascertain the minimal number of data blocks that warrant cache retention. _JeCache_ is composed of two principal components: (1) Prefetch information generation, which leverages job history logs to identify data blocks deserving of initial caching and establishes a prefetch sequence for data blocks during job execution; and (2) Prefetch controller, responsible for monitoring data block access within each worker node and orchestrating the eviction of data blocks from the cache once their processing concludes. Notably, this approach leads to a reduction in cache resource demands with a resulting positive impact on execution times. However, it is important to highlight that _JeCache_ exclusively addresses read-caching scenarios. Aiming to curtail data transmission latency between remote and processing nodes within a heterogeneous cluster, Vinutha et al. [30] proposed a _proactive prefetch thread_. This thread proactively retrieves requested input data from remote nodes, depositing them in the buffer of the processing node, which serves as a transient storage space. This strategy can reduce job execution times, by overlapping data transmission with processing activities and improving data locality rates during task launches. Nevertheless, it remains noteworthy that despite the implementation of this prefetching strategy, the initial data transmission still necessitates a certain waiting period. In an alternative approach presented in [31], the _streaming technique_ was introduced to facilitate simultaneous data transfer and processing. By harnessing this technique, the reduced data size inherent to streaming alleviates transmission waiting times. Kalia et al. [32] proposed _speculative prefetching_, a strategy that factors in node processing capacity when loading input data onto processing nodes. This method leverages _K-Nearest Neighbors (KNN)_ clustering, utilizing the Euclidean distance metric to group intermediate data. The goal is to bolster data locality rates for Reduce tasks, consequently amplifying overall performance. 
It is important to underscore, however, that this approach does not account for additional factors such as workload capabilities and DataNode throughput. In [33], a _two-tiered correlation-based file prefetching mechanism_ and dynamic replica selection strategies were introduced. These strategies collectively aim to minimize data access latency and alleviate the burden on overloaded DataNodes through load balancing measures. The approach encompasses four distinct placement patterns tailored to the storage of fetched data. _Smart prefetch_[34] is an intelligent prefetching mechanism that comprises three steps. In the initial phase, the appropriate prefetch time is determined based on the progress rate of tasks, such as Map tasks and Reduce tasks, taking into account the processing capacity of the nodes. Subsequently, in the second phase, the volume of prefetched data is determined using the _K Nearest Neighbors (KNN)_ clustering algorithm, which relies on the Euclidean distance between data blocks. Finally, a data locality metric is employed to determine the placement of the prefetched data. Experimental results indicate that this mechanism has a significant impact on performance by enhancing data locality and facilitating increased access to data from the cache. ## 4 Comparison of Caching strategies In this section, we compare the introduced Hadoop caching strategies based on different aspects, including the techniques applied, cache specifications, and their impact on the performance of the Hadoop framework. ### Comparison of Caching Characteristics In this subsection, we address several key questions to analyze the characteristics of caching mechanisms utilized in Hadoop. 1. Which Hadoop cache level is used? We explore the specific cache levels employed in Hadoop, such as Distributed Cache, In-Memory Cache, HDFS Cache, and Memcache. Distributed Cache facilitates the distribution of small, read-only files, archives, or other resource types to nodes within a Hadoop cluster. Its primary purpose is to make these resources available to tasks running on the nodes. The Distributed Cache copies the required files to the local disk of each worker node before executing the corresponding task. The In-Memory Cache, on the other hand, involves storing frequently accessed or computed data in the main memory (RAM), enabling faster data access and processing. The HDFS Cache is designed to enhance the performance of data access in HDFS. It allows caching of frequently accessed files or portions of files in the memory of DataNodes, which are individual nodes responsible for data storage and processing within a Hadoop cluster. Caching data in the HDFS Cache enables subsequent read requests for that data to be directly served from the cache, reducing disk I/O and improving overall performance. Memcache, an open-source distributed in-memory caching system, is not specific to Hadoop but can also be employed as a caching layer in Hadoop deployments. It stores data in the memory of multiple servers, enabling fast data access and retrieval. Memcache can be utilized in Hadoop applications to cache frequently accessed data or intermediate results, thus enhancing processing speed and efficiency. 2. What are the cached Items? Different types of items can be cached in Hadoop, including files or data blocks. These items can be further categorized based on their data types, such as input data, intermediate data generated during the execution of MapReduce jobs, and the final results of those jobs. 3. 
Where are the cached items placed? Cached items can be located through client-side caching or server-side caching. Client-side caching involves caching items on the local disk of the client application, which is suitable for smaller data sets. On the other hand, server-side caching involves caching items on the DataNodes within a Hadoop cluster, which is beneficial for larger data sets that cannot fit into the memory of client machines. 4. Which access pattern is used in the cache? The access pattern in the cache can be classified as read-through caching, write-through caching, or write-behind caching. Read-through caching involves retrieving data from the cache if it is present; otherwise, the cache fetches the data from the data source. Write-through caching involves first writing data to the cache and subsequently updating the data source. In contrast, write-behind caching writes data to the cache and asynchronously updates the data source. This mechanism is particularly useful when write operations occur frequently. 5. Which cache replacement algorithm is applied? When the cache reaches capacity, a cache replacement algorithm is employed to evict a cached item and free up space. Commonly used algorithms include Least Recently Used (LRU), Least Frequently Used (LFU), and First In First Out (FIFO) caching, each with its eviction strategy. 6. How to manage the cache? Cache management is a critical aspect when it comes to optimizing performance in Hadoop environments. In this regard, three prevalent strategies are commonly employed: shared cache management, distributed cache management, and centralized cache management. Each strategy offers distinct advantages and is suitable for specific scenarios within a Hadoop cluster. Shared cache management involves the establishment of a centralized cache that is made accessible to multiple nodes or processes in the Hadoop cluster. Shared cache management proves particularly advantageous in scenarios where multiple jobs or tasks can benefit from a common cache, promoting efficient resource utilization and improved overall performance. On the other hand, distributed cache management focuses on maintaining separate caches on individual nodes within the Hadoop cluster. Each node possesses its dedicated cache, which is managed independently. This strategy is particularly suitable when data access patterns are localized, and each node necessitates a dedicated cache to store frequently accessed data. In contrast, centralized cache management revolves around the implementation of a single cache that is centrally managed within the Hadoop cluster. Typically, the cache resides on a dedicated server or a set of servers, facilitating access by all nodes in the cluster. This approach streamlines cache management operations and ensures consistent caching performance across the entire cluster. Centralized cache management offers the benefit of simplified administration and can contribute to improved performance by providing a unified caching mechanism for all nodes within the cluster. Table 1 summarizes the key attributes associated with each caching mechanism as applied in Hadoop environments. 
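As a generic illustration of the access patterns mentioned in item 4 above (and not of any specific system listed in Table 1), the following Java sketch wraps a backing store with read-through and write-through paths; a write-behind variant would simply defer the store update to an asynchronous queue.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Illustrative read-through / write-through cache wrapper around a backing store. */
public class ThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Map<K, V> store;        // stands in for HDFS or another data source
    private final Function<K, V> loader;  // how to fetch a missing value from the store

    public ThroughCache(Map<K, V> store) {
        this.store = store;
        this.loader = store::get;
    }

    /** Read-through: serve from the cache; on a miss, load from the store and cache the value. */
    public V read(K key) {
        return cache.computeIfAbsent(key, loader);
    }

    /** Write-through: update the cache and the backing store synchronously. */
    public void write(K key, V value) {
        cache.put(key, value);
        store.put(key, value);
    }

    public static void main(String[] args) {
        Map<String, String> backingStore = new HashMap<>();
        backingStore.put("block-7", "bytes-of-block-7");

        ThroughCache<String, String> c = new ThroughCache<>(backingStore);
        System.out.println(c.read("block-7"));                    // miss -> loaded from store, then cached
        c.write("block-8", "bytes-of-block-8");                   // goes to both cache and store
        System.out.println(backingStore.containsKey("block-8"));  // true
    }
}
```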
Table 1: Caching mechanism characteristics

| Caching strategy | Hadoop cache level | Cached items | Placement | Access pattern | Cache replacement | Cache management |
| --- | --- | --- | --- | --- | --- | --- |
| HDCache [4] | Distributed cache | Files | Server-side | Write-once-read-many | LRU | Shared memory management |
| HyCache [5] | Distributed cache | Files | Server-side | Read/write | LRU/LFU | Distributed cache management |
| HyCache+ [6] | Distributed cache | Files | Server-side | Read/write | LRU | Distributed cache management |
| Haloop [8] | Combination of HDFS cache and in-memory cache | Invariant data in the first iteration | Server-side (inter-iteration locality) | Write-behind caching | LRU | Centralized cache management |
| Incoop [9] | Memcache | Fresh results from the previous run | Server-side | Write-through | LRU | Memcached management |
| Dache [10] | In-memory cache | Tuple {Origin, Operation}, where Origin is the name of a file in the DFS and Operation is a linear list of performed operations | Server-side | Read-through and write-behind caching | LRU | Centralized cache management |
| Redoop [11] | In-memory cache at the client side | Input data blocks and intermediate data blocks | Client-side | Read-through caching | LRU | Local cache management |
| CURT MapReduce [12] | Distributed cache | Results of Map tasks and Reduce tasks | Server-side | Read-through and write-through caching | LRU | Distributed cache management |
| Improved CL and DL for map tasks [13] | Distributed cache | Input data blocks for Map tasks | Server-side | Read/write | LRU | Centralized cache management |
| LARD [14] | Buffer cache | Input data blocks and intermediate data blocks | Server-side | Read/write | LRU | Centralized cache management |
| CATS [15] | Buffer cache | Input data blocks and intermediate data blocks | Server-side | Read/write | LRU | Distributed cache management |
| ACL [16] | Distributed cache | Input data blocks and intermediate data blocks | Server-side | Read/write | LRU | Centralized cache management |
| CLQLMRS [17] | Distributed cache | Input data blocks and intermediate data blocks | Server-side | Read/write | LRU | Centralized cache management |
| Separation [18] | HDFS cache | File system metadata | Server-side | Read/write | LRU | Centralized cache management |
| Coded MapReduce [19] | Distributed cache | Intermediate data blocks | Server-side | Read-modify-write | LRU | Distributed cache management |

### Comparison of caching strategies Table 2 provides a comparison of caching strategies by taking into account their techniques, advantages, disadvantages, and limitations.
Moreover, it suggests appropriate scenarios to utilize these strategies.

Table 2: Comparison of caching strategies

| Caching strategy | Technique | Advantages | Disadvantages | Limitation | Use case |
| --- | --- | --- | --- | --- | --- |
| HDCache [4] | Distributed hash table | Improves real-time file access performance in a cloud computing environment | Does not consider data access models of real-time services | Used for intranet outside firewall | Cloud computing environment |

### The impact of caching strategies on Hadoop performance In this section, we investigate the effects of the presented caching strategies on Hadoop performance by considering various performance metrics: data access time, job execution time, resource utilization, scalability, overhead, and data locality.

Table 3: Impact of caching strategies on Hadoop performance

| Caching strategy | Data access time | Job execution time | Resource utilization | Scalability | Overhead | Data locality |
| --- | --- | --- | --- | --- | --- | --- |
| HDCache [4] | Reduces disk access time | Reduces execution time by improving data access time | N/A | No | Omits network traffic overheads | N/A |
| HyCache [5] | Accelerates HDFS | Improves execution time | N/A | No | High | N/A |
| HyCache+ [6] | Accelerates HDFS | Speed up by 29X in their tests | N/A | Yes | High | Achieves data locality using the novel 2LS |
| Haloop [8] | Improves invariant data access without recomputation | Reduces run time | Avoids recomputation of invariant data | No | Overhead of shuffling invariant data is completely avoided | Inter-iteration locality |

### Discussion and research opportunities This section presents a discussion and comparison of various caching approaches employed in the context of Hadoop. Statistical information, as illustrated in Figure 1, is provided to compare the papers reviewed in terms of their contributions to addressing Hadoop-related issues. As previously mentioned in Section 3, the caching methods are categorized into two groups: basic and enhanced. It is evident that researchers have conducted more studies on enhanced caching techniques in order to harness the additional benefits offered by caching mechanisms. Notably, the topic garnering the highest popularity among these studies is the improvement of cache replacement algorithms, which aims to increase the cache hit ratio. This particular area of research accounts for approximately 26% of all studies. The avoidance of recomputation and the reduction of data transmission are next in popularity, each constituting nearly 16% of all studies. Furthermore, Figure 2 presents the distribution of performance parameters used for evaluating caching mechanisms.
The analysis reveals that execution time is the most significant performance metric, accounting for 47% of the evaluations. Data locality is next with a share of 15%. The prominence of these two metrics can be attributed to their substantial impact on overall performance. Conversely, overhead has the lowest share among these parameters, underscoring the need for greater attention to this aspect in future research endeavors. Figure 1: Percentage of the papers reviewed in terms of the problem-solving approach Figure 2: Percentage of performance metrics for evaluating caching algorithms Based on the findings of this survey, it is evident that the caching mechanism in Hadoop remains a significant area of research interest, necessitating further investigation. As a prospective avenue for exploration, novel caching strategies could be devised to optimize multiple metrics simultaneously (or manage trade-offs between them), thereby addressing and resolving various challenges encountered in Hadoop. For instance, caching mechanisms can be employed to tackle multiple difficulties in Hadoop simultaneously, or a hybrid approach combining two methods can be developed to enhance performance. These avenues present promising opportunities for future research. In the subsequent section, we propose a hybrid method that combines CLQLMRS and H-SVM-LRU, taking into consideration job execution time as a performance metric for evaluation. ## 5 Hybrid approach We investigate basic and enhanced caching mechanisms and propose a novel hybrid approach called Hybrid Intelligent Cache (HIC) that combines the benefits of both methods. The HIC framework incorporates the CLQLMRS MapReduce job scheduler, which employs reinforcement learning to train scheduling policies considering data and cache locality for task assignment to worker nodes. The CLQLMRS scheduler improves task locality rates, resulting in reduced data transmission time and enhanced Hadoop performance. Additionally, the HIC framework incorporates the H-SVM-LRU intelligent cache replacement strategy, which optimizes cache space utilization by preventing cache pollution. This strategy classifies cached items into reusable and non-reusable groups and determines eviction based on item class and the Least Recently Used (LRU) policy. The proposed hybrid approach is evaluated through experiments conducted on a Hadoop cluster to assess its impact on system performance metrics. ### Experimental setup For carrying out the experiment, we use a cluster consisting of a single NameNode and nine DataNodes located in two racks such that odd-numbered nodes are in rack 1 and even-numbered nodes are in rack 2. * _Hardware configuration_: The nodes are connected via a 10 Gigabit Ethernet switch. The experimental environment is heterogeneous with different memory sizes at the nodes: the NameNode has 16 GB RAM; DataNode 1, DataNode 4, and DataNode 7 have 4 GB RAM; DataNode 2, DataNode 5, and DataNode 8 have 6 GB RAM; DataNode 3, DataNode 6, and DataNode 9 have 8 GB RAM. Each node has an Intel Core i7-6700 CPU and a 1 TB hard disk. * _Software configuration_: We use the Ubuntu 14.04 operating system, JDK 1.8, Hadoop version 2.7 (which employs in-memory caching), and Intel HiBench [35][36] version 7.1 as the Hadoop benchmark. * _Hadoop configuration parameters:_ The block size of files in HDFS is 128 MB, the number of cache replicas is set to one, and data replication is set to 3.
The memory sizes for Map tasks, Reduce tasks, and the node manager are 1 GB, 2 GB, and 8 GB, respectively. The maximum size of the cache is set to 1.5 GB, and we assume that each DataNode in the cluster has the same cache size. The remaining Hadoop configuration parameters are set to their default values. * _MapReduce applications_: We use Intel HiBench as the Hadoop benchmark suite and carry out our experiments using two groups of benchmarks: 1) Micro benchmarks, which contain the WordCount, Sort, and TeraSort applications; and 2) Hive benchmarks for big data query and analysis, namely Join and Aggregation. WordCount is a CPU-intensive application that counts occurrences of each word in a text file. Sort is a typical I/O-bound application with moderate CPU usage that sorts input data. TeraSort is an I/O-oriented program that needs moderate disk I/O during the Map and Shuffle phases and heavy disk I/O in the Reduce phase. Both Aggregation and Join are supported by Hive and used for query operations. Join is a multiple-stage application where the results of each step are used as input for the next step. * _Input data:_ For the experiments, we have used the default data sizes from the HiBench suite: Sort and WordCount use 60 GB of input data, while TeraSort uses 1 TB. The input data for WordCount, Sort, and TeraSort are generated using the RandomTextWriter, RandomWriter, and TeraGen programs, respectively, all contained in the Hadoop distribution. * _Metric:_ We consider job execution time as the metric to evaluate our proposed method. It plays a vital role in Hadoop performance and is closely related to data access time: if data can be accessed from the cache instead of the disk, data access time decreases significantly, which in turn reduces job execution time. To calculate the average job execution time, we run each application five times. * _Scenarios:_ In this experiment, we take into account four scenarios. First, original Hadoop does not utilize HDFS in-memory caching and uses the default MapReduce job scheduler (FIFO); we use this as a baseline. Second, we utilize H-SVM-LRU instead of LRU as the cache replacement algorithm. Next, we apply CLQLMRS as the job scheduler to train a scheduling policy that considers both data locality and cache locality. Finally, we combine H-SVM-LRU with CLQLMRS as the hybrid method. ### Evaluation Figure 3: Average job execution time for various applications The average job execution time for the considered applications is depicted in Figure 3. The results demonstrate that the H-SVM-LRU approach exhibits a notable improvement in job execution time, achieving a 16% reduction compared to the original Hadoop configuration. This improvement can be attributed to H-SVM-LRU's ability to effectively utilize the limited cache space and mitigate cache pollution, resulting in an enhanced cache-hit ratio. Consequently, a larger number of tasks can leverage cached input data, leading to reduced data access time and improved overall performance. Furthermore, the CLQLMRS technique, which considers both cache locality and data locality in its scheduling policy, displays significant enhancements in job execution time, with an approximate improvement of 26.54% compared to the FIFO approach. By prioritizing tasks that benefit from cache and data locality, CLQLMRS optimizes task allocation, resulting in more efficient execution and reduced latency.
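To make the eviction behaviour behind these numbers concrete, the sketch below illustrates the class-aware LRU idea described in Section 5: a classifier (an SVM in H-SVM-LRU) is assumed to have already labelled each cached block as reusable or non-reusable, and non-reusable blocks are evicted first, falling back to plain LRU order otherwise. The class name, interface, and labelling step are illustrative assumptions, not the authors' implementation.

```python
from collections import OrderedDict


class ClassAwareLRUCache:
    """Toy cache that evicts non-reusable items before reusable ones,
    falling back to plain LRU order within each class."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()   # key -> (value, is_reusable), oldest first

    def get(self, key):
        if key not in self.items:
            return None              # miss: the caller would read from disk/HDFS
        self.items.move_to_end(key)  # refresh recency on a hit
        return self.items[key][0]

    def put(self, key, value, is_reusable):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = (value, is_reusable)
        while len(self.items) > self.capacity:
            self._evict()

    def _evict(self):
        # Prefer the least recently used *non-reusable* item; evict a
        # reusable item only when no non-reusable one remains.
        for key, (_, reusable) in self.items.items():  # iterates oldest first
            if not reusable:
                del self.items[key]
                return
        self.items.popitem(last=False)                 # plain LRU fallback
```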
Building upon the advantages of both CLQLMRS and H-SVM-LRU, our proposed hybrid method combines these approaches to further reduce job execution time. By leveraging the benefits offered by both techniques, our hybrid method achieves an average improvement of 31.2% in job execution time across the evaluated applications. This demonstrates the potential of integrating cache locality, data locality, and advanced cache replacement strategies in optimizing job execution and resource allocation for improved performance. ## 6 Conclusion This paper examines the utilization of caching mechanisms as a potential solution to address various challenges within the Hadoop framework. The study encompasses an analysis of two primary categories of caching mechanisms: basic caching and enhanced caching. Subsequently, a novel classification scheme is introduced to categorize basic caching strategies based on the specific problem they seek to alleviate, such as improving data access time, eliminating redundant computations, reducing data transmission, and other related objectives. Enhanced caching is further subdivided into two groups: improved cache replacement algorithms and enhanced prefetch mechanisms. Each method is described, and a comparative analysis is conducted to evaluate their respective advantages, disadvantages, and impact on the overall performance of Hadoop. Moreover, the paper presents a novel hybrid intelligent caching approach that leverages the combined benefits of the H-SVM-LRU cache replacement algorithm and the CLQLMRS scheduling policy. Experimental findings demonstrate that this novel approach yields a notable improvement of 31.2% in execution time. This enhancement is attributed to the heightened cache hit ratio facilitated by the H-SVM-LRU cache replacement algorithm, as well as the increased likelihood of tasks utilizing local data due to the CLQLMRS scheduling policy. In light of these achievements, we are interested in exploring additional machine learning techniques to further investigate the potential of intelligent caching. Our goal is to expand the application of machine learning methodologies to advance caching mechanisms within Hadoop by addressing more intricate challenges.
2301.02872
Machine Learning to Estimate Gross Loss of Jewelry for Wax Patterns
In mass manufacturing of jewellery, the gross loss is estimated before manufacturing to calculate the wax weight of the pattern that would be investment casted to make multiple identical pieces of jewellery. Machine learning is a technology that is a part of AI which helps create a model with decision-making capabilities based on a large set of user-defined data. In this paper, the authors found a way to use Machine Learning in the jewellery industry to estimate this crucial Gross Loss. Choosing a small data set of manufactured rings and via regression analysis, it was found out that there is a potential of reducing the error in estimation from +-2-3 to +-0.5 using ML Algorithms from historic data and attributes collected from the CAD file during the design phase itself. To evaluate the approach's viability, additional study must be undertaken with a larger data set.
Mihir Jain, Kashish Jain, Sandip Mane
2023-01-07T15:09:51Z
http://arxiv.org/abs/2301.02872v1
# Machine Learning to Estimate Gross Loss of Jewelry for Wax Patterns ###### Abstract In mass manufacturing of jewellery, the gross loss is estimated before manufacturing to calculate the wax weight of the pattern that would be investment cast to make multiple identical pieces of jewellery. Machine learning is a technology within AI that helps create a model with decision-making capabilities based on a large set of user-defined data. In this paper, the authors present a way to use machine learning in the jewellery industry to estimate this crucial gross loss. Using a small data set of manufactured rings and regression analysis, it was found that the estimation error can potentially be reduced from \(\pm\)2-3 to \(\pm\)0.5 using ML algorithms trained on historic data and attributes collected from the CAD file during the design phase itself. To evaluate the approach's viability, additional study must be undertaken with a larger data set. Keywords: CAD, Gross Loss, Jewelry, Machine Learning, Wax Model. ## 1 Introduction Loss is an inevitable component of manufacturing. In the manufacturing of jewellery from precious metals, accounting for and calculating the losses is very crucial. The gross loss of a piece of jewellery is the total metal loss during its manufacturing. Metal is lost during casting, filing, polishing, setting, and at almost every other stage. Even though most of this lost metal is recovered and refined in the refinery, with an average recovery of 92%, these losses are too significant not to be accounted for. The loss on each piece of jewellery varies based on various factors. Estimating this gross loss beforehand is crucial for the manufacturing of that jewellery. The estimated gross loss is used while pulling wax patterns during the process of injection moulding [1]. Jewelry made from a heavier wax piece will have surplus metal that must be filed down and recovered later, which is a waste of time and materials because only some of the metal will be recovered. Therefore, estimating the total loss provides a general estimate of the wax weight and can be used as a guide for how each procedure should be carried out. In a production process, the step-wise loss at each step of manufacturing is collected by weighing the jewelry after each step. Hence, after the jewelry has been manufactured, the final gross loss borne by the company can be assessed. The total recovery achieved is also considered and added to the database. From this collected gross loss data, a broad database is compiled by an in-house engineer. Calculations based on current trends are then made, where a few other variables are also taken into consideration: the weight of the final product, the metal type (White Gold, Yellow Gold, Pink Gold, Silver, Platinum and Palladium), the caratage of the metal (8k, 9k, 10k, 12k, 14k, 18k, 20k, etc.), the customer for whom the jewelry is being manufactured, the setting of the diamond (whether the piece is handset or wax set), and of course the type of jewelry (whether it is a ring, a pendant, an earring, a bracelet, or a bangle). Currently, the estimation comes with a variance of \(\pm\)4-5%. Hence there is scope, using the powerful tools of machine learning [2, 3, 4, 5], to use these variables to estimate the gross loss of jewelry.
These variables can more often than not be fetched directly from the CAD files, which are made well before the actual manufacturing process even begins. The aim of the paper is to estimate the gross loss of jewelry at the CAD level with greater and repeatable accuracy using machine learning algorithms. This paper will systematically narrow down the variables responsible for the gross loss of jewelry during its manufacturing, create a machine learning model that predicts the final gross loss based on the data collected from the CAD file generated before manufacturing, and ensure greater accuracy of the model as compared to the traditional methods of estimating loss. ## 2 Methodology As the project is a proof of concept, it only takes into account 26 rings as a sample size. This project only uses information from the last several months of production for all ring kinds for which CAD files were available (developed in Rhino 3D [6]) and for which the company knew the associated gross losses. It is important to highlight that only information that could be shown publicly has been included in this report. There were notably three stages to the project's execution. ### Creating the Dataset The first phase comprised selecting all possible attributes of the rings from the CAD file and listing them with their corresponding values in an Excel sheet. This data was paired with its corresponding historic gross loss. ### Preparation of Data The compiled data was obtained from the CAD files. This data had irrelevant parameters that are currently unknown but will be filtered out through processing. The reason all possible data was collected was to avoid any human-generated discrepancies in the very first stage of the project. Even though 26 is a small number for a machine learning algorithm, it would still suffice to give us the proof of concept required to carry on with the project. In an ideal situation, however, the number of rows should be 4x the number of columns. So, it can safely be assumed that the results obtained will only improve as the amount of ring data increases. Before testing algorithms, the data was checked for any missing values, and such values were assigned a weighted average value. Feature scaling was done using Standard Scaling, which standardizes features by removing the mean and scaling to unit variance [7]. The standard score of a sample x is calculated as: \[z=(x-u)/s \tag{1}\] where u is the mean of the training samples, and s is the standard deviation of the training samples. \begin{table} \begin{tabular}{|l|l|l|} \hline **\#** & **Attribute** & **Datatype** \\ \hline 1 & Volume & mm\({}^{3}\) \\ 2 & Surface Area & mm\({}^{2}\) \\ 3 & Metal & Karat-metal \\ 4 & Weight/ Piece (Estimated) & gm \\ 5 & Total Lot Quantity & integer \\ 6 & Total Weight of Lot & gm \\ 7 & Inner Diameter & mm \\ 8 & Outer Diameter & mm \\ 9 & Minimum Shank Thickness & mm \\ 10 & Maximum Shank Thickness & mm \\ 11 & Minimum Shank Width & mm \\ 12 & Maximum Shank Width & mm \\ 13 & Total Height & mm \\ 14 & Top Height & mm \\ 15 & Number of Components & integer \\ 16 & Number of Rings & integer \\ 17 & Tone & 1/2/3 \\ 18 & True Miracle & binary \\ 19 & No.
of True Miracle & integer \\ 20 & Diamond – Handset and Wax Set & integer \\ 21 & Filigree & binary \\ 22 & J Back & binary \\ 23 & Gallery & binary \\ 24 & Fake Beads & integer \\ 25 & Plating & binary \\ \hline \end{tabular} \end{table} Table 1: Parameters of the Dataset ### Models The above-mentioned data was split into training and test sets, and the mean value graph of each feature was derived. The split was 80-20, where 80% formed the training dataset and 20% the test dataset. After the data set was split, feature scaling was done to make comparison easier. This data was then processed through various algorithms. 1. _Linear Regression Algorithm_ 1. Linear Regression [8] is a supervised-learning-based machine learning technique. One of its functions is to carry out a regression analysis. Through the use of independent variables, the regression model can predict a desired outcome. Its primary function is to investigate causal links between factors and to make predictions. 2. Predicting the value (y) of a dependent variable from known values (x) of independent variables is the job of linear regression. Accordingly, this method of regression establishes a linear connection between the input variable (x) and the output variable (y). The name Linear Regression perfectly describes this method [9]. The hypothesis function for Linear Regression is: \[y=\theta_{1}+\theta_{2}.x\] (2) 3. When training the model, it fits the best line to predict the value of y for a given value of x. The model obtains the best regression fit line by finding the best \(\theta_{1}\) and \(\theta_{2}\) values, where \(\theta_{1}\) is the intercept and \(\theta_{2}\) is the coefficient of x. Figure 1: Architecture of Linear Regression Algorithm 4. The best fit line is obtained by locating the optimal values of \(\theta_{1}\) and \(\theta_{2}\). When our model is used for prediction, it will give us y as a function of x. 2. _Random Forest Regression_ 1. By combining the results of several different machine learning models, an ensemble learning method can produce a more precise forecast than any one of them could on its own. 2. Random Forest relies on the "wisdom of the crowds" principle, which states that a large number of independent models working in concert can achieve better results than any of their parts working alone. 3. This is owing to the fact that the trees buffer one another from their individual errors. Since a random forest is completely random, there is no communication between the trees that make up the forest. Random forest is an estimator technique that takes the outputs of multiple decision trees, compiles them, and then generates the ideal answer for the given situation. [11] 3. _Decision Tree Regression_ 1. Decision tree [12] builds regression or classification models in the form of a tree structure. It breaks down a dataset into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. The result is a tree with decision nodes and leaf nodes. 2. There are three distinct sorts of nodes in this regression tree. The Root Node is the primary node, representing the whole sample and potentially being subdivided into further nodes. Features of a dataset are represented by Interior Nodes, while decision rules are shown by Branches. In the end, the result is shown by the Leaf Nodes. This algorithm is well suited to problems that require a decision. [13] Figure 2: Architecture of Random Forest Algorithm 3.
A single data point is processed all the way down to the leaf node by asking and answering True/False queries. Ultimately, the dependent variable value in each leaf node is averaged to arrive at a final forecast. The tree is able to provide an accurate prediction after going through several rounds. 4. The benefits of using decision trees include their simplicity, the fact that they require less data cleaning, the fact that non-linearity has no effect on the performance of the model, and the fact that the number of hyper-parameters to be set is practically zero. 4. _K-Nearest Neighbors Regression_ Figure 4: Example of KNN Algorithm Figure 3: Architecture of Decision Tree Algorithm 1. K-Nearest Neighbors [14] is an easy-to-implement method that remembers past examples and makes a numerical prediction based on a similarity measure (e.g., distance functions). KNN is a non-parametric technique that has been utilised for statistical estimates and pattern recognition since the early 1970s. 2. We need to use cross-validation to choose K. Unlike classification, we cannot use accuracy as a metric, since our predictions will almost never exactly match the true response variable values. Therefore, in the context of KNN regression, we use the root mean square prediction error (RMSPE) instead. The mathematical formula for calculating RMSPE is: \[RMSPE=\sqrt{\frac{\sum_{i=1}^{n}\left(y_{i}-\hat{y_{i}}\right)^{2}}{n}}\] (3) where n is the number of observations, y\({}_{\text{i}}\) is the observed value for the i\({}^{\text{th}}\) observation, and \(\hat{y}_{\text{i}}\) is the predicted value for the i\({}^{\text{th}}\) observation. 3. To put it another way, for each observation in our test (or validation) set, we calculate the squared difference between the predicted and true response values, average over observations, and then take the square root. Since differences can be either positive or negative--that is, we can over- or under-estimate the genuine response value--we utilise the squared difference rather than merely the difference. [15] ## 3 Results As described above, we have put our dataset through a range of ML algorithms. The Mean Absolute Error (MAE) represents the error in our predictions. The formula is \[MAE=\frac{\sum_{i=1}^{n}\left|y_{i}-x_{i}\right|}{n} \tag{4}\] where y\({}_{\text{i}}\) is the prediction, x\({}_{\text{i}}\) is the true value, and n is the total number of data points. It was observed that all 4 algorithms performed well considering how small the data set was. All algorithms gave promising results, with Linear Regression yielding the lowest MAE (Table 2). \begin{table} \begin{tabular}{|l|l|} \hline Method & Mean Absolute Error \\ \hline Linear Regression & 0.56 \\ Random Forest Regressor & 1.72 \\ Decision Tree Regressor & 1.49 \\ K-Nearest Neighbour Regressor & 2.02 \\ \hline \end{tabular} \end{table} Table 2: Mean Absolute Error of Different Methods Though as the data set grows, it would be wise to consider all the remaining models as well; the scores will only improve as the data set increases. ## 4 Conclusion The results show that the gross loss can be predicted to an error margin of \(\pm 0.5\). The results provided sufficient proof of concept to take to the company to act upon. Each of the 4 algorithms has potential, with Linear Regression being the most promising so far. Further testing needs to be done by increasing the size of the data set and expanding to different categories of jewelry.
The implementation of this innovation in the field of jewelry manufacturing would be a big undertaking and a time-consuming, labour-intensive process, but one which would bear fruitful results.
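As a complement to the description above, the following is a minimal, hypothetical scikit-learn sketch of the evaluation pipeline outlined in the Methodology and Models sections (80-20 split, standard scaling, four regressors compared by MAE). The file name and column names are illustrative placeholders, since the CAD-derived dataset itself is not public.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical CSV holding the Table 1 attributes plus the historic gross loss.
# Categorical attributes (e.g. Metal) would need to be numerically encoded first.
data = pd.read_csv("ring_attributes.csv")
X = data.drop(columns=["gross_loss"])
y = data["gross_loss"]

# 80-20 split followed by standard scaling, as described in the Models section.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

models = {
    "Linear Regression": LinearRegression(),
    "Random Forest Regressor": RandomForestRegressor(random_state=0),
    "Decision Tree Regressor": DecisionTreeRegressor(random_state=0),
    "K-Nearest Neighbour Regressor": KNeighborsRegressor(n_neighbors=3),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f}")
```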
2306.09980
Creating Multi-Level Skill Hierarchies in Reinforcement Learning
What is a useful skill hierarchy for an autonomous agent? We propose an answer based on a graphical representation of how the interaction between an agent and its environment may unfold. Our approach uses modularity maximisation as a central organising principle to expose the structure of the interaction graph at multiple levels of abstraction. The result is a collection of skills that operate at varying time scales, organised into a hierarchy, where skills that operate over longer time scales are composed of skills that operate over shorter time scales. The entire skill hierarchy is generated automatically, with no human intervention, including the skills themselves (their behaviour, when they can be called, and when they terminate) as well as the hierarchical dependency structure between them. In a wide range of environments, this approach generates skill hierarchies that are intuitively appealing and that considerably improve the learning performance of the agent.
Joshua B. Evans, Özgür Şimşek
2023-06-16T17:23:49Z
http://arxiv.org/abs/2306.09980v2
# Creating Multi-Level Skill Hierarchies in Reinforcement Learning ###### Abstract What is a useful skill hierarchy for an autonomous agent? We propose an answer based on the graphical structure of an agent's interaction with its environment. Our approach uses hierarchical graph partitioning to expose the structure of the graph at varying timescales, producing a skill hierarchy with multiple levels of abstraction. At each level of the hierarchy, skills move the agent between regions of the state space that are well connected within themselves but weakly connected to each other. We illustrate the utility of the proposed skill hierarchy in a wide variety of domains in the context of reinforcement learning. ## 1 Introduction How can an agent autonomously develop an action hierarchy as it interacts with its environment? This is a fundamental open question in artificial intelligence. Before answering this algorithmic question, it is useful to first ask a conceptual question: What is a useful action hierarchy? Here we focus on this conceptual question to provide a useful foundation for future algorithmic development. We propose a characterisation of a useful action hierarchy based on a graphical representation of an agent's interaction with its environment. We first partition this graph at multiple scales, exposing the structure of the environment at various levels of granularity, then define actions that efficiently move an agent between neighbouring clusters in the partitions. The outcome is an action hierarchy that enables the agent to efficiently interact with its environment at multiple time scales. In a diverse set of environments, the proposed characterisation translates into action hierarchies that are intuitively appealing, improve learning performance, and has desirable scaling properties. Our approach has two key characteristics. First, we partition the interaction graph by maximising _modularity_, a measure of partition quality that generates clusters that are strongly connected within themselves but weakly connected to each other. Secondly, our approach produces a multi-level hierarchy, where lower-level actions are naturally composed into higher-level actions. Multi-level action hierarchies naturally support the ability to act, plan, explore, and learn over varying timescales, offering many concrete benefits over unstructured collections of actions that are not organised into a hierarchy. ## 2 Background We use the reinforcement learning framework, modelling an agent's interaction with its environment as a finite Markov Decision Process (MDP). An MDP is a six-tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\mathcal{D},\gamma)\), where \(\mathcal{S}\) is a finite set of states, \(\mathcal{A}\) is a finite set of actions, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]\) is a transition function, \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}\) is a reward function, \(\mathcal{D}:\mathcal{S}\to[0,1]\) is an initial state distribution, and \(\gamma\in[0,1]\) is a discount factor. Let \(\mathcal{A}(s)\) denote the set of actions available in state \(s\in\mathcal{S}\). At decision stage \(t\), \(t\geq 0\), the agent observes state \(s_{t}\in\mathcal{S}\) and executes action \(a_{t}\in\mathcal{A}(s_{t})\). Consequently, at decision stage \(t+1\), the agent receives a numerical reward, \(r_{t+1}\in\mathcal{R}\), and observes the next state, \(s_{t+1}\in\mathcal{S}\). 
The _return_ at decision stage \(t\) is the discounted sum of future rewards, \(G_{t}=\sum_{k=0}^{\infty}\gamma^{k}r_{t+k+1}\). A policy \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) is a mapping from state-action pairs to probabilities. The agent's objective is to learn a policy that maximises the expected return. The _state-transition graph_ of an MDP is a weighted, directed graph whose nodes represent the states of the MDP and whose edges represent possible transitions between these states via primitive actions. An edge \((u,v)\) exists on the graph if it is possible to transition from state \(u\in\mathcal{S}\) to state \(v\in\mathcal{S}\) by taking some action \(a\in\mathcal{A}(u)\). Unless stated otherwise, we use uniform edge weights of \(1\). The actions of an MDP take exactly one decision stage to execute; we refer to them as _primitive actions_. Using primitive actions, it is possible to define _abstract actions_, or _skills_, whose execution can take a variable number of decision stages. Primitive and abstract actions can be combined to form complex action hierarchies. We represent skills using the options framework [1]. An option \(o\) is a three-tuple \((\mathcal{I}_{o},\pi_{o},\beta_{o})\), where \(\mathcal{I}_{o}\subset\mathcal{S}\) is the initiation set, specifying the set of states in which the option can start execution, \(\pi_{o}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) is the option policy, and \(\beta_{o}:\mathcal{S}\rightarrow[0,1]\) is the termination condition, specifying the probability of option termination in a given state. An option policy is ultimately defined over primitive actions--because these are the fundamental units of interaction between the agent and its environment--but this can be done indirectly by allowing options to call other options, making it possible for agents to operate with hierarchies of primitive and abstract actions. ## 3 Related Work Although many existing approaches to skill discovery use the state-transition graph, all such approaches produce skill hierarchies with only a single level of skills above primitive actions. Multi-level skill hierarchies are essential when solving complex tasks, which often require agents to act, plan, explore, and learn over varying timescales. They offer many concrete benefits to reinforcement learning agents. First, when using a multi-level hierarchy, an agent can learn about not only the skill it is currently executing but also about any lower-level skills it calls upon. Secondly, arranging skills into multi-level hierarchies allows them to be updated in a modular fashion. For instance, any improvements to the internal policy of a lower-level skill would be immediately reflected in all higher-level skills that call it. Thirdly, the ability to form multi-level skill hierarchies supports a continual learning process, where an agent combines existing skills to produce new skills in an open-ended manner. While existing graph-based methods do not learn multi-level hierarchies, policy-gradient methods have made some progress towards this goal. Bacon et al. [2] extended policy-gradient theorems [3] to allow the learning of option policies and termination conditions in a two-level hierarchy. Riemer et al. [4] further generalised these theorems to support multi-level hierarchies. Fox et al. [5] propose an imitation learning method that finds the multi-level skill hierarchy most likely to generate a given set of example trajectories. Levy et al. 
[6] propose a method for learning multi-level hierarchies of goal-directed policies, with each level of the hierarchy producing a subgoal for the lower-levels to navigate towards. However, these methods are not without their limitations. Unlike the approach proposed here, they all require the number of hierarchy levels to be pre-defined instead of finding a suitable number automatically. They also make simplifying assumptions, such as that all skills are available in all states, and target different types of problems than we do, such as imitation-learning or goal-directed problems with high-dimensional or continuous state-spaces. Our approach is most directly related to skill discovery methods that use graph partitioning [7; 8; 9; 10; 11; 12]. Three of these methods use the concept of modularity, which is also central to our approach. One such approach is to generate a series of possible partitions by successively removing the edge with the highest edge betweenness from a graph, then selecting the partition with the highest modularity [8]. A second approach is to generate a partition using the label propagation algorithm and then to merge neighbouring clusters until no gain in modularity is possible [12]. In these two approaches, the final partition will maximise modularity with respect to only the initial clusters identified by other methods; in other words, the final partition will generally not be the one that maximises modularity overall. The label propagation method runs in near-linear time, whereas the edge betweenness method has a time complexity of \(O(m^{2}n)\) on a graph with \(m\) edges and \(n\) nodes. A third approach is by Xu et al. [11], who use the Louvain algorithm to find a partition that maximises modularity but, unlike our approach, define skills only for moving between clusters in the highest-level partition, discarding all lower-level partitions. Using only the highest-level partition produces skills for navigating the state-space at only a high level, whereas our approach produces skills for navigating the state-space at varying timescales by using the full cluster hierarchy. Another approach to skill discovery is to identify useful subgoals and define skills for navigating to them. Suggestions have often been inspired by the concept of "bottleneck" states. They include states that are on the border of strongly-connected regions of the state-space [13], states that allow transitions between different regions of the state-space [14], and states that lie on the shortest path between many other pairs of states [15]. To identify such states, several approaches use graph centrality measures [15; 16; 17; 18; 19]. Others use graph partitioning algorithms to identify meaningful regions of the state-space, then identify subgoals as the nodes bordering them [20; 21; 22; 9; 13; 23; 24; 25]. The bottleneck concept has also inspired non-graphical approaches to skill discovery, such as seeking to identify states that are visited frequently on successful trajectories but infrequently on unsuccessful ones [26]. Alternatively, it has been proposed that "landmark" states found at the centre of strongly-connected regions of the state-space can be used as subgoals [27]. Several approaches have used the graph Laplacian [28; 29] to identify skills that are specifically useful for efficiently exploring the state space. 
These methods produce skills that aim to minimise the expected number of actions required to navigate between any two states, allowing efficient navigation between distant areas of the state space. It is unclear how to arrange such skills to form multi-level skill hierarchies. In contrast, we propose a principled approach to characterising skills that are both useful and naturally form a multi-level hierarchy. ## 4 Proposed Approach We identify partitions of the state-transition graph that maximise modularity [30; 31]. A _partition_ of a graph is a division of its nodes into mutually exclusive groups, called _clusters_. Given a set of clusters \(C=\{c_{1},c_{2},\ldots,c_{k}\}\) forming a partition of a graph, the _modularity_ of the partition is \[\sum_{i=1}^{k}e_{ii}-\rho a_{i}^{2}\,\] where \(e_{ii}\) denotes the proportion of total edge weight in the graph that connects two nodes in cluster \(c_{i}\), and \(a_{i}\) denotes the proportion of total edge weight in the graph with at least one end connected to a node in cluster \(c_{i}\). A resolution parameter \(\rho\) controls the relative importance of \(e_{ii}\) and \(a_{i}\). Intra-cluster edges contribute to both \(e_{ii}\) and \(a_{i}\) while inter-cluster edges contribute only to \(a_{i}\). So, as \(\rho\) increases, large clusters with many inter-cluster edges are penalised to a higher degree, leading to partitions with smaller clusters. A partition that maximises modularity will have dense connections within its clusters but sparse connections between them. In the context of reinforcement learning, these clusters correspond to regions of the state space that are easy to navigate within but difficult to navigate between. The proposed skill hierarchy gives an agent the ability to efficiently move between such regions. Finding a partition that maximises modularity for a given graph is NP-complete [32]. Therefore, when working with large graphs, approximation algorithms are needed. The most widely used approximation algorithm is the _Louvain algorithm_[33], an agglomerative hierarchical graph clustering algorithm. While no formal analysis exists, the runtime of the Louvain algorithm has been observed empirically to be linear in the number of graph edges [34]. The Louvain algorithm starts by placing each node of the graph in its own cluster. Nodes are then iteratively moved locally, from their current cluster to a neighbouring cluster, until no gain in modularity is possible. This results in a revised partition corresponding to a local maximum of modularity with respect to local node movement. This revised partition is used to define an _aggregate graph_ as follows: each cluster in the partition is represented as a single node in the aggregate graph, and a directed edge is added to the aggregate graph if there is at least one edge that connects neighbouring clusters in that direction. This process is then repeated on the aggregate graph, and then on the next aggregate graph, and so on, until an iteration is reached with no modularity gain. For more details please refer to Blondel et al. [33], Arenas et al. [35], and the pseudocode in Appendix B. The algorithm's output is a series of partitions of the input graph. This series of partitions has a useful structure: multiple clusters found in one partition are merged into a single cluster in the next partition. In other words, the output is a _hierarchy_ of clusters, with earlier partitions containing many smaller clusters which are merged into fewer larger clusters in later partitions. 
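For readers who want to reproduce the hierarchy construction described above, the following is a minimal sketch using NetworkX's Louvain routines (the authors' own pseudocode is in Appendix B). The toy grid graph stands in for an environment's state-transition graph, and NetworkX's resolution parameter plays the role of \(\rho\), although its exact normalisation may differ from the definition given here.

```python
import networkx as nx
from networkx.algorithms.community import louvain_partitions

# Toy stand-in for the (undirected view of an) environment's state-transition graph.
G = nx.grid_2d_graph(10, 10)

# Each yielded partition is one level of the cluster hierarchy: many small
# clusters first, progressively merged into fewer, larger clusters later.
levels = list(louvain_partitions(G, resolution=0.05, seed=0))

for depth, partition in enumerate(levels, start=1):
    print(f"level {depth}: {len(partition)} clusters")

# A level-i skill would move the agent from one cluster of levels[i-1] to a
# neighbouring cluster, with its policy defined over the level-(i-1) skills.
```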
This hierarchical structure forms the basis of the multi-level skill hierarchy we propose for reinforcement learning. Let \(h\) denote the number of partitions returned by the algorithm. We use each of the \(h\) partitions to define a single layer of skills, resulting in a hierarchy with \(h\) levels above primitive actions. Each level of the hierarchy contains one or more skills for efficiently navigating between neighbouring clusters. Specifically, we define an option for navigating from a cluster \(c_{i}\) to a neighbouring cluster \(c_{j}\) as follows: the initiation set consists of all states in \(c_{i}\); the option policy efficiently takes the agent from a given state in \(c_{i}\) to a state in \(c_{j}\); the option terminates with probability \(1\) in states in \(c_{j}\), with probability \(0\) otherwise. Taking advantage of the natural hierarchical structure of the partitions produced by the Louvain algorithm, we compose the skills at one level of the hierarchy to define the skills at the next level. That is, at each level of the hierarchy, option policies will be defined over only the actions (options or primitive actions) in the level below, with options at only the first level of the hierarchy using policies defined over primitive actions. We call the resulting set of skills the _Louvain skill hierarchy_. ## 5 Empirical Analysis We analyse the Louvain skill hierarchy in six environments: Rooms, Grid, Maze [27], Office, Taxi [36], and Towers of Hanoi. The environments are depicted in Figure 1 and fully described in Appendix D. In all environments, the agent receives a reward of \(-0.001\) for each action and an additional \(+1.0\) for reaching a goal state. In Rooms, Grid, Maze, and Office, there are 4 primitive actions: north, south, east, and west. In Taxi, there are two additional primitive actions: pick-up-passenger and put-down-passenger. Taxi also features irreversible actions. For instance, after picking up the passenger, the agent cannot return to a state where the passenger has not yet been picked up. All experimental details are in Appendix F. Our analysis is directed by the following questions: What is the Louvain skill hierarchy produced in each environment? How does this skill hierarchy impact the learning performance of the agent? How are the results impacted as the environment gets larger? We report results using resolution parameter \(\rho=0.05\) (unless stated otherwise) but also examine the impact of \(\rho\) on the results. **Louvain Skill Hierarchy.** Figure 2 shows the Louvain cluster hierarchy in Rooms, Office, Taxi, and Towers of Hanoi obtained by applying the Louvain algorithm to the state-transition graph of each environment. Appendix E shows the Louvain cluster hierarchies in Grid and Maze. In Rooms, the hierarchy has four levels. At the third level, each room is placed in its own cluster. Moving up the hierarchy, at the fourth level, two of these rooms are joined together into a single cluster. Moving down the hierarchy, each room is divided further into smaller clusters at level 2, and then into even smaller clusters at level 1. It is easy to see how the corresponding skill hierarchy would enable efficient navigation between and within rooms. In Office, at the top level, we see six large clusters connected to each other by corridors. As we move lower down the hierarchy, these clusters are divided into increasingly smaller regions. At level 3, we see many rooms that form their own cluster. At level 2, most rooms are divided into multiple clusters. 
Once again, it is relatively easy to see how the corresponding skill hierarchy would enable efficient navigation of the space at multiple levels of granularity. Figure 1: The environments. In Taxi, the state-transition graph has four disconnected components, each corresponding to one particular passenger destination: R, G, B, or Y. In Figure 2, we show only one of these components, the one where the passenger destination is B. The Louvain hierarchy has four levels. At the top level, we see three clusters where the passenger is waiting at R, G, or Y, and a fourth cluster where the passenger is either in-taxi or delivered to the destination. Navigation between these clusters is unidirectional, with only three possibilities, and the three corresponding skills navigate the taxi to the passenger location _and_ pick up the passenger. Moving one level down the hierarchy, the clusters correspond to skills that move the taxi between the left and the right side of the grid, which are connected by a bottleneck state in the middle of the grid. In Towers of Hanoi, the hierarchy has three levels. Moving between the top-level, middle-level, and bottom-level clusters corresponds to moving, respectively, the largest, the second-largest, and the third-largest disc between the different poles. In each domain, (1) the Louvain skill hierarchy closely matches human intuition, and (2) it is clear how skills at one level of the hierarchy can be composed to produce the skills at the next level. Figure 2: The cluster hierarchies produced by the Louvain algorithm when applied to the state-transition graphs representing Rooms, Office, Taxi, and Towers of Hanoi. For Taxi and Towers of Hanoi, the graph layout was determined by using a force-directed algorithm that models nodes as charged particles that repel each other and edges as springs that attract connected nodes. **Learning Performance.** We compare learning performance with the Louvain skill hierarchy to the Edge Betweenness [8] and Label Propagation[12] methods, and to the method proposed by Xu et al. [11]. These methods are the most directly related to the proposed approach because they make use of modularity maximisation to define their two-level skill hierarchies. In addition, we compare to options that navigate to local maxima of Node Betweenness [15], a state-of-the-art subgoal-based approach that captures the bottleneck concept that characterises many existing subgoal-based methods. We also compare to Eigenoptions[28] derived from the graph Laplacian, a state-of-the-art Laplacian-based approach that encourages efficient exploration of the state space. Finally, we also include a Primitive agent that uses only primitive actions. Primitive actions are available to all agents. For all methods, we generated options using the complete state-transition graph and learned their policies offline using macro-Q learning [37]. We trained all hierarchical agents using macro-Q learning and intra-option learning [38]. Although these algorithms have not previously been applied to multi-level hierarchies, they both extend naturally to this case. The primitive agent was trained using Q-Learning [39]. The shaded regions on the learning curves represent the standard error over \(40\) random seeds. When creating the Louvain skill hierarchy, we discarded any lower level of the cluster hierarchy that had a mean number of nodes per cluster of less than a threshold value, \(c\). 
We used \(c=4\) in all experiments; our reasoning is that skills that navigate between such small clusters execute for only a very small number of decision stages (often only one or two) and are not meaningfully more abstract than primitive actions. Figure 4: Learning curves comparing various different Louvain agents. An epoch corresponds to 100 decision stages in Rooms, 300 in Taxi, and 750 in Maze. Figure 3: Learning performance. An epoch corresponds to 100 decision stages in Rooms and Towers of Hanoi, 300 in Taxi, 750 in Maze and Grid, and 1000 in Office. We show learning curves in Figure 3. The Louvain agent has a clear and substantial advantage over other approaches in all domains except for Towers of Hanoi, where its performance was much closer to that of the other hierarchical agents, with none of them performing much better than the primitive agent. This is consistent with existing results reported in this domain (e.g., by Jinnai et al. [29]). **Hierarchical versus flat arrangement of skills.** An alternative to the Louvain skill hierarchy is a flat arrangement of the same skills, where the skills have the same behaviour but they call primitive actions directly rather than indirectly through other (lower-level) skills. We expect the multi-level hierarchy to lead to faster learning than the flat hierarchy due to the additional macro-Q and intra-option learning updates enabled by the hierarchical relationship between the skills. Figure 4 shows that this is indeed the case. In the figure, Louvain shows an agent that uses the Louvain skill hierarchy while Louvain flat shows an agent that uses the Louvain skills but where the skill policies call primitive actions directly rather than through other skills. In addition, the figure shows a number of agents that use only a single level of the Louvain hierarchy, with option policies defined over primitive actions; these are depicted in the figure by the label Level 1, 2, 3, or 4. Primitive actions were available to all agents. The figure shows that the hierarchical agent learns more quickly than the flat agent. Furthermore, the agents using individual levels of the Louvain hierarchy learn more quickly than the primitive agent but not as quickly as the agent using the full Louvain hierarchy. **Impact of the resolution parameter \(\rho\).** Figure 4(a) shows how changing \(\rho\) impacts the cluster hierarchy produced by the Louvain algorithm in Towers of Hanoi. At \(\rho=10\), the output is a single level containing many small clusters comprised of three nodes. At \(\rho=3.3\), a two-level cluster hierarchy is produced: level 1 is identical to the partition produced at \(\rho=10\), but level 2 contains larger clusters, each formed by merging three of the clusters from level 1. Further decreasing \(\rho\) produces additional levels, each containing progressively fewer, larger clusters. As \(\rho\) is reduced, the clusters identified at a given level of the hierarchy generally remain stable: the lowest-level partition found using a higher value of \(\rho\) will typically be the same as the lowest-level partition found using a lower value. In other words, decreasing \(\rho\) may add levels to the hierarchy, but it generally does not impact the existing levels. A sensitivity analysis on the value of \(\rho\) showed that a wide range of \(\rho\) values led to useful skills being produced, and that performance gradually decreased to no worse than that of a primitive agent at higher values of \(\rho\). 
Please refer to Appendix A for this sensitivity analysis and a more detailed discussion on the choice of \(\rho\). **How do Louvain skills scale to larger domains?** The Louvain algorithm has been successfully applied to graphs with millions of nodes and billions of edges in minutes [33], and has been observed empirically to have a time complexity that is linear in the number of graph edges [33, 34]. Also important is how the Louvain skill hierarchy changes with the size of the state space. We experimented with a multi-floor version of the Office environment, with floors connected by a central elevator, where two primitive actions move the agent up and down between adjacent floors. The size of the state space can be varied by adjusting parameters such as the number of office floors. We generated a series of fifteen offices of increasing size, with the smallest office having a single floor (\(\sim\)\(10^{3}\) states), and the largest office having one thousand floors (\(\sim\)\(10^{6}\) states). Figure 4(b) shows that hierarchy depth increased very gradually with the size of the state space. At \(1000\) states, the hierarchy contained \(5\) levels, with the top-level skills allowing high-level navigation of each office floor, as well as from regions near an elevator to adjacent floors. At \(\sim\)\(4000\) states, a sixth level was added, containing skills for efficiently moving from anywhere on one floor to an adjacent floor. Figure 5: (a) How the Louvain algorithm’s output when applied to Towers of Hanoi changes with the resolution parameter. (b) How the Louvain skill hierarchy’s depth scales with the size of the state space. (c) Learning performance in Office with two floors containing \(2537\) states. Subsequently, as the state space size grew to \(\sim\)\(25000\) and \(\sim\)\(300000\) states, two levels were added that allowed the agent to move, respectively, two and four office floors at a time. Figure 4(c) shows that the Louvain agent learns much more quickly than other approaches in a two-floor Office, even while some alternatives, including the Eigenoptions agent, fail to achieve any learning. ## 6 Discussion and Future Work Our results show that the Louvain skill hierarchy is a useful answer to the conceptual question of what constitutes a useful skill hierarchy. This hierarchy is intuitively appealing, improves agent performance (more so than alternatives in the literature), and shows desirable scaling properties. An important research direction for future work is incremental learning of Louvain skill hierachies as the agent is interacting with its environment--the state-transition graph will not always be available in advance. We explored the feasibility of incremental learning in the Rooms environment and present the results in Figure 6. The agent started with an empty state-transition graph and no skills. Every \(m\) decision stages, it updated its state-transition graph with new nodes and edges, and it revised its skill hierarchy in one of two ways. In the first approach, the agent applied the Louvain algorithm to create a new skill hierarchy from scratch. In the second approach, the agent incorporated the new information into its existing skill hierarchy, using an algorithm similar to the Louvain algorithm. This algorithms starts by assigning each new node to its own cluster; it is then iteratively moved locally, between neighbouring clusters (both new and existing), until no modularity gain is possible. 
This revised partition is used to define an aggregate graph and the entire process is then repeated on the aggregate graph, and the next aggregate graph, and so on, until an iteration is reached with no modularity gain. Aside from existing clusters being merged into new higher-level clusters, the cluster membership of existing nodes stays fixed; only new nodes have their cluster membership updated. The result is a revised set of partitions from which a revised hierarchy of Louvain skills are derived. Pseudocode for these incremental approaches is in Appendix C. Figures 5(b)-5(d) show the evolution of the partitions using the second approach as the agent performed a random walk in Rooms. The partitions were updated after observing \(30\) states, \(60\) states, and all possible transitions. The figure shows that, as more nodes were added, increasingly higher-level skills were produced. After \(30\) states, the top-level skills allow low-level movement within and between two of the rooms. After \(60\) states, another level was added, allowing slightly higher-level navigation of the state-space. After observing all possible transitions, the top level contained skills enabling efficient movement between the four rooms. Figure 5(a) shows the performance of the incremental agents. The agents started with only primitive actions; after decision stages \(100\), \(500\), \(1000\), \(3000\), and \(5000\), the state-transition graph was updated and the skill hierarchy was revised following the two approaches discussed above. The figure compares performance to a Primitive agent and an agent using the full Louvain skill hierarchy, whose performance acts as a Ceiling for the incremental agents. Each incremental agent learned much faster than the primitive agent and only marginally slower than the fully-informed Louvain agent. The two incremental agents had similar performance throughout training but the first approach reached a higher level of asymptotic performance than the second approach. The reason is that partitions produced early in training are based on incomplete information; the first approach discards Figure 6: (a) Performance of Incremental Louvain Options in Rooms. An epoch corresponds to \(100\) decision stages. (b-d) How the state-transition graph and top-level partitions produced by the Incremental 2 method evolved as an agent explored Rooms. The hierarchy contained 2 levels after visiting 30 states, 3 levels after visiting 60 states, and 5 levels after observing all possible transitions. the early imperfections while the second approach carries them forward. But there is a trade-off: the first approach has a higher computational cost than the second one. These results demonstrate the feasibility of learning Louvain skills incrementally. A full incremental method for learning Louvain skills may take many forms, and different approaches may be useful under different circumstances, with each having its own advantages and disadvantages. We leave the full development of such algorithms to future work. Another direction for future work is extending Louvain skills to environments with continuous state spaces such as robotic control tasks. Such domains present a difficulty to all graph-based skill discovery methods due to the inherently discrete nature of the state-transition graph. If the critical step of constructing an appropriate graphical representation of a continuous state space can be achieved, all graph-based methods would benefit. 
Some approaches have already been proposed in the literature; we use one such approach [40, 23, 10] to examine the Louvain hierarchy in a variant of the Pinball domain [41], which involves manoeuvring a ball around a series of obstacles to reach a goal, shown in Figure 6(a). The state is represented by two continuous values: the ball's position in the \(x\) and \(y\) directions. At each decision stage, the agent can choose to apply a small force to the ball in each direction. The amount of force applied is stochastic and causes the ball to roll until friction causes it to come to a rest. Collisions between the ball and the obstacles are elastic. We sampled \(4000\) states, added them to the state-transition graph, then added an edge between each node and its \(k\)-nearest neighbours, according to euclidean distance, assigning each edge \((u,v)\) a weight of \(e^{-\nicefrac{{\|u-v\|^{2}}}{{\sigma}}}\). We then applied the Louvain algorithm to the resulting graph and derived the Louvain skill hierarchy from the partitions produced. The result was the three-level cluster hierarchy shown in Figure 7. The highest-level clusters yield skills for high-level navigation of the state space and take into account features such as the natural bottlenecks caused by the obstacles, allowing the agent to efficiently change its position. Skills derived from the lower-level partitions enable more local navigation. Once again, we see a skill hierarchy that is intuitive, and it can easily be seen how the lower-level skills can be composed to produce the higher-level skills in the hierarchy. Currently, Louvain skills at one level of the hierarchy are composed of skills from only the immediately previous level. While such skills may be optimal with respect to the skills from the previous level, they may not be optimal with respect to primitive actions. Future work could consider higher-level skills composed of skills from all lower levels, including primitive actions. Because we derive Louvain skills solely from the connectivity of the state-transition graph, not considering the reward function, we expect them to be suitable for solving a range of tasks in a given domain. Examining their use for transfer learning is a useful direction for future work. Additionally, future work can examine how to use the reward function when defining Louvain skills to tailor the resulting skills for solving specific tasks. A possible difficulty with building multi-level skill hierarchies is that having a large number of skills can end up hurting performance by increasing the branching factor of the problem. Future work should consider how best to manage large skill hierarchies. One solution that has been explored in the context of two-level hierarchies is "pruning" less useful skills from the hierarchy [24, 42]. Lastly, we point out that the various characterisations of useful skills proposed in the literature, including the one proposed here, are not necessarily competitors. To solve complex tasks, it is likely that an agent will need to use many different types of skills. An important avenue for future work is studying how different ideas on skills discovery can be brought together to enable agents that can autonomously develop complex and varied skill hierarchies. Figure 7: (a) Pinball domain, with the green ball in its initial position, the red goal, and several obstacles. (b–d) The cluster hierarchy produced by the Louvain algorithm. 
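To make the Pinball graph construction concrete, the following is a minimal sketch of the k-nearest-neighbour graph with Gaussian edge weights described above; the values of k and sigma are illustrative placeholders rather than the settings used in the experiments.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors


def knn_state_graph(states, k=10, sigma=0.05):
    """Connect each sampled state to its k nearest neighbours, weighting each
    edge (u, v) by exp(-||u - v||^2 / sigma)."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(states)
    dists, idxs = nbrs.kneighbors(states)           # column 0 is the point itself
    G = nx.Graph()
    G.add_nodes_from(range(len(states)))
    for i, (row_d, row_j) in enumerate(zip(dists, idxs)):
        for d, j in zip(row_d[1:], row_j[1:]):       # skip the self-neighbour
            G.add_edge(i, int(j), weight=np.exp(-d ** 2 / sigma))
    return G


# e.g. 4000 sampled (x, y) ball positions, as in the Pinball experiment above
states = np.random.rand(4000, 2)
G = knn_state_graph(states)
```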
## Acknowledgments and Disclosure of Funding This research was supported by the Engineering and Physical Sciences Research Council [EP/R513155/1] and the University of Bath. This research made use of Hex, the GPU Cloud in the Department of Computer Science at the University of Bath. We would like to thank the members of the Bath Reinforcement Learning Laboratory for their constructive feedback.
2302.05495
Helical-Phase Distinguishability for the Phase Distribution of Laguerre-Gaussian Beams
In the past few years, orbital angular momentum beams have contributed significantly to many applications in the optical-related fields due to their unique phase characteristics. Here, a novel approach to the phase-helix structure of Laguerre-Gaussian beams carrying orbital angular momentum is presented. This proposal is based on numerical simulations that track the rotation of each helix in the phase and shows an impact on the rotational speed of the Laguerre-Gaussian phase distribution by using the characteristic of distinguishability of each helix. If the helices in the phase of a vortex cannot be distinguished, then the angular rotation speed of the phase distribution can be seen as independent of the azimuthal index l, due to its rotational symmetry, and it is called ω_indistinguishable. Conversely, when the helices are distinguished from each other, the rotational speed is affected by this azimuthal index, and it becomes ω_distinguishable. This distinguishable theory can build a pathway to applications in the field of optical communications (as a coding system) and particle manipulation (changing the dynamics of off-beam trapped particles).
Arturo Pazmino, Peter Iza, Manuel S. Alvarez-Alvarado, Erick Lamilla
2023-02-10T20:19:49Z
http://arxiv.org/abs/2302.05495v1
# Helical-Phase Distinguishability for the Phase Distribution of Laguerre-Gaussian Beams ###### Abstract In the past few years, orbital angular momentum beams have contributed significantly to many applications in the optical-related fields due to their unique phase characteristics. Here, a novel approach to the phase-helix structure of Laguerre-Gaussian beams carrying orbital angular momentum is presented. This proposal is based on numerical simulations that track the rotation of each helix in the phase and shows an impact on the rotational speed of the Laguerre-Gaussian phase distribution by using the characteristic of distinguishability of each helix. If the helices in the phase of a vortex cannot be distinguished, then the angular rotation speed of the phase distribution can be seen as independent of the azimuthal index \(l\), due to its rotational symmetry, and it is called \(\omega_{\mathrm{indistinguishable}}\). Conversely, when the helices are distinguished from each other, the rotational speed is affected by this azimuthal index, and it becomes \(\omega_{\mathrm{distinguishable}}\). This distinguishable theory can build a pathway to applications in the field of optical communications (as a coding system) and particle manipulation (changing the dynamics of off-beam trapped particles). ## 1 Introduction Electromagnetic waves can be represented as the synchronized oscillation of electric and magnetic fields propagating at the speed of light in vacuum; these waves have two distinct physical properties: spin angular momentum (SAM) and orbital angular momentum (OAM) [1]. According to Beth [2], an optical beam propagates with a spin angular momentum due to its polarization. However, in 1992 Allen _et al._ explained, for the first time, the propagation of an optical beam with an OAM related to the structure of the wavefront and the spatial distribution of the electromagnetic radiation [3]. Since then, the study of OAM in light has become more attractive due to its important contributions to applications in the fields of particle manipulation using light through optical tweezers [4; 5; 6; 7], codification of information [8; 9; 10], improvement of borders in image processing [11], super-resolution imaging [12], laser processing [13], and applications related to spatial multiplexing in optical communications [14; 15; 16; 17; 18; 19]. Studies based on group velocity changes in Laguerre-Gaussian (LG) beams with OAM were conceptualized by Allen, resulting in the theoretical work presented in 1994, where the azimuthal Doppler shift in LG beams with OAM is proposed. They show that an atom moving in a LG vortex beam exhibits an azimuthal shift in the resonant frequency, adding an azimuthal component of velocity. This prediction played a significant role in particle manipulation and its impact in the group and phase velocity of vortex beams [20]. The literature presents expanded studies of optical beams carrying OAM; for instance, Luo _et al._ theoretically demonstrated that the rotational Doppler effect in some materials is unreversed, due to the combined contributions of negative phase velocity and inverse screw of the wavefront [21]. Lavery _et al._ reported an experimental observation of spinning objects using the orbital angular momentum of light scattering, where the degree of OAM enhancement of rotation direction is a function of the experimental conditions [22]. 
The concept of angular rotation with light was also explored in the form of angular acceleration for Schulze _et al._, where a class of the light field with angular acceleration during propagation is studied. In that work, angular accelerating light fields and conservation of angular momentum through an energy exchange mechanism across the optical field is discussed [23]. Additionally, there has been works about petal-like intensity structure from the superposition of LG beams due to its implications with quantum information [24,25]. This petal-like structure has been used to experimentally measure, for the first time, tiny velocities [26]. Regarding monochromatic plane waves, the propagation velocity, referred to as the phase velocity in vacuum, is a constant; but any deviation from the plane wave constraint can lead to a propagation velocity different from \(c\) (speed of light in vacuum). Bouchard _et al._ show that twisted light pulses exhibit subluminal velocities in vacuum, being 0.1% slower relative to \(c\)[27]. This research invites us to draw a difference between the group velocity and the phase velocity for light beams with a twisted wavefront. Generally, for a plane wave propagating along with a nondispersive medium, both group and phase velocity take the same value (i.e., \(v_{g}=v_{ph}=c/n\)), being \(n\) the refractive index of the medium. Nevertheless, concerning twisted light (also known as vortex beam), the phase and group velocities can differ in magnitude. Under specific conditions, group velocity in optical beams can be superluminal [28,29], and even have a negative velocity propagation [30,31]. The light with OAM is associated with the spatial structure of the optical field. This structure is helical in both, wavefront and phase distribution. For the phase distribution case, an optical beam with a helical phase, that depends on _exp(-il\(\phi\))_, carries an orbital angular momentum independent of the polarization state. The angle \(\phi\) is the azimuthal coordinate in the beam's cross section, and \(l\), the topological charge (or azimuthal index) of the optical beam, takes integer and fractional values [32,35]. The most common form of a helical phased beam is the Laguerre-Gaussian (LG) [3]. Helical wavefronts are also observed in Bessel beams [36], Mathieu beams [37], and Ince-Gaussian beams [38]. Due to the impact of the OAM regarding the helical structure of vortex beam's phase distribution in the past few years, this work proposed an innovative philosophy to study the well-known rotational velocity based on a "distinguishable helical-phase" theory, not mentioned elsewhere in the literature, to the author's knowledge. This approach may build a pathway for new mathematical developments for future applications in optical communications. ## 2 Laguerre-Gaussian vortex beam overview In free space, an optical field can be described by a cylindrical wave which is characterized by Laguerre polynomials. An optical beam propagating in the z-direction in free space can be expressed as: \[E(r,\phi,z)=U(r,\phi,z)\ \exp\ (-ikz) \tag{1}\] where \(U(r,\phi,z)\) is the amplitude of beam in cylindrical coordinates. The scalar optical field satisfies the Helmholtz equation, and the paraxial wave equation is derived when the second derivative in the z-direction is ignored. 
Then, the cylindrically symmetric Laguerre-Gaussian beam solutions are given by [3]: \[U(r,\phi,z)=\sqrt{\frac{2p!}{\pi(p+|l|)!}}\frac{1}{\omega(z)}\left(\frac{r\sqrt{2}}{\omega(z)}\right)^{|l|}L_{p}^{|l|}\left(\frac{2r^{2}}{\omega^{2}(z)}\right)exp\left(-\frac{r^{2}}{\omega^{2}(z)}\right)exp\left(-i\left(l\phi+\frac{kr^{2}}{2R_{z}}-(2p+|l|+1)tan^{-1}\left(\frac{z}{z_{0}}\right)\right)\right) \tag{2}\] where \(z_{0}=\pi\omega_{0}^{2}/\lambda\) is the Rayleigh range, \(R_{z}=z(1+(z/z_{0})^{2})\) is the radius of curvature, \(\lambda\) is the wavelength and \(k=2\pi/\lambda\) is the wave number, \(\omega_{z}=\omega_{0}\sqrt{1+(z/z_{0})^{2}}\) is the beam radius with \(\omega_{0}\) being the beam waist, and \(L_{p}^{|l|}(x)\) is the associated Laguerre polynomial. In the literature, the term \(tan^{-1}(z/z_{0})\) is known as the Gouy phase and is multiplied by the mode order \((2p+|l|+1)\). All LG beams are characterized by two indices: the radial index \(p\) and the azimuthal index \(l\), also called the topological charge for vortex beams, denoting the orbital angular momentum (in units of \(\hbar\)). The phase distribution of a LG vortex beam is shown in the following equation: \[\Phi(r,\phi,z)=kz+l\phi+\frac{kr^{2}}{2R_{z}}-(2p+|l|+1)tan^{-1}\left(\frac{z}{z_{0}}\right) \tag{3}\] then, the group velocity (rate at which the envelope of the beam's amplitude propagates through space) and the phase velocity (rate at which a point of constant phase propagates through space) are given by [39, 40]: \[v_{g}=\frac{1}{\left|\nabla\,\partial_{\omega}\Phi(r,\omega)\right|} \tag{4}\] and \[v_{ph}=\frac{\omega}{\left|\nabla\Phi(r)\right|} \tag{5}\] where \(\omega\) is the angular frequency. The Gouy phase shift term, in the longitudinal phase of a LG beam, is an intrinsic characteristic of an electromagnetic field that is spatially confined in the transverse plane of the propagation. An important observation is that the phase velocity can be slightly bigger than the speed of light in free space (superluminal velocity). It is relevant to mention that such a fact does not violate special relativity, because any signal information sent by this LG vortex beam (or any other optical beam) travels at the speed of light (or even at a lower speed) [41]. For a LG vortex beam, information will be transmitted at its group velocity \(v_{g}\), which is slightly smaller than the speed of light (subluminal velocity) [40]. ## 3 Speed rotation of Laguerre-Gaussian phase distribution Moving one step further, we analyze the rotation velocity of the phase distribution of a LG beam by simulating the argument of the full LG Eq. (1) and tracking its rotation at different values of the propagation axis. Fig. 1 shows the tracked rotation of the beam phase distribution for different values of the topological charge \(l\) and radial index \(p\) (\(LG_{p}^{l}\)), with a beam waist \(\omega_{0}=5.0\) mm and wavelength \(\lambda=633\) nm. Fig. 1a shows the rotation of the LG beam phase for \(LG_{0}^{1}\), with a phase distribution from -\(\pi\) to \(\pi\) (Fig. 1a.i); its rotation is counterclockwise due to the term \(\exp(-ikz)\). The phase distribution starts at \(z=0\) (Fig. 1a.ii), it has a half rotation at \(z=\lambda/2\) (Fig. 1a.iii) and a full rotation at \(z=\lambda\) (Fig. 1a.iv). Then, the period of this phase rotation is \(T=\lambda/v_{0}\), where \(v_{0}\) is the linear velocity at which the phase moves along the propagation axis. Fig. 
1b shows the rotation phase of a \(LG_{0}^{2}\) beam; the phase distribution has two helices, a blue-green color map to the left and an orange color map to the right, both with a phase distribution from -\(\pi\) to \(\pi\), in Fig. 1b.i. The helical-phase distribution starts at \(z=0\) (Fig. 1b.ii), has a half rotation at \(z=\lambda\) (Fig. 1b.iii), and a full rotation at \(z=2\lambda\) (Fig. 1b.iv). Figs. 1c and 1d show the rotation phase for \(LG_{1}^{1}\) and \(LG_{2}^{2}\) beams, which follow the same behavior as Fig. 1a and 1b respectively. In general, from Fig. 1, the radial index \(p\) does not affect the phase angular rotation speed, and the period of the rotation only depends on the topological charge \(l\). Moreover, analyzing step by step the period of the helical-phase (hp) rotation of LG beams from this figure: for \(l=1\) the period is \(T_{hp}=\lambda/v_{0}\) (Fig. 1a and 1c), for \(l=2\) the period is \(T_{hp}=2\lambda/v_{0}\) (Fig. 1b and 1d); going further, for \(l=3\) the period is \(T_{hp}=3\lambda/v_{0}\); and so on. Therefore, generalizing the expression, the helical-phase rotation period would be \(T_{hp}=l\lambda/v_{0}\). Then, the rotation speed of the helical-phase distribution of a LG vortex beam is given by \[\omega_{hp}=\frac{2\pi v_{0}}{\lambda l}=\frac{kv_{0}}{l} \tag{6}\] ## 4 Distinguishability feature of a helical-phase distribution A known feature, observed in Fig. 1, is that the number of helices is given only by the topological charge \(l\), and the characteristic of distinguishability of the helical-phase distribution in the LG vortex beam can be inferred from it. This distinguishability characteristic is better illustrated in Fig. 2, in which the phase distribution is plotted for a LG vortex beam of waist \(\omega_{0}=\) 5.0 mm and wavelength \(\lambda=\) 633 nm, for \(p=\) 0 and \(l=\) 3, at different values of the axis propagation. Figure 1: Tracking of the rotation of the Laguerre-Gaussian phase distribution (\(LG_{p}^{l}\)) for a) \(LG_{0}^{1}\), b) \(LG_{0}^{2}\), c) \(LG_{1}^{1}\), and d) \(LG_{2}^{2}\), using a fixed beam waist \(\omega_{0}=\) 5.0 mm and wavelength \(\lambda=\) 633 nm, at different values of the axis propagation, \(z\). For \(l=\) 2, there are two helices colored in different gradient colors (blue-green and orange) for rotational tracking purposes. In Fig. 2a, the helical-phase distribution is plotted using the same color, such that the helices are indistinguishable from one another. Then, at \(z=0\) the phase has a certain distribution (Fig. 2a.i) that is repeated at \(z=\lambda\) (Fig. 2a.ii), \(2\lambda\) (Fig. 2a.iii) and \(3\lambda\) (Fig. 2a.iv); therefore, the apparent period would be \(T=\lambda/v_{0}\), giving an apparent rotation speed of \(\omega_{indistinguishable}=kv_{0}\), due to its rotational symmetry. On the other hand, if each helix in the phase distribution is plotted using different colors, as shown in Fig. 2b, the real rotation of the phase distribution can be tracked. Now that the helical-phase structure is distinguishable from one helix to another, the phase distribution starts at \(z=0\) (Fig. 2b.i), for \(z=\lambda\) the entire phase has rotated 120 degrees (Fig. 2b.ii), and for \(z=2\lambda\) it has rotated 240 degrees (Fig. 2b.iii). At \(z=3\lambda\) the phase rotates a full cycle (Fig. 2b.iv); therefore, the rotation period would be \(T=l\lambda/v_{0}\), giving a rotational velocity of \(\omega_{distinguishable}=kv_{0}/l\) as in Eq. (6). 
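As a minimal numerical check of this tracking, the sketch below follows the azimuth of a single (distinguishable) helix of the phase in Eq. (3) as the beam propagates, using the same beam parameters as the figures; the on-axis simplification (dropping the curvature term, which is negligible over a few wavelengths) and the helper names are ours, for illustration only.

```python
import numpy as np

# Beam parameters matching Figs. 1-2; over a few wavelengths the curvature and
# Gouy corrections are negligible (z0 is on the order of 100 m here).
wavelength = 633e-9
w0 = 5.0e-3
k = 2 * np.pi / wavelength
z0 = np.pi * w0**2 / wavelength          # Rayleigh range
p, l = 0, 3                              # LG_p^l indices

def helix_azimuth_deg(z, branch=0):
    """Azimuth of one helix: the on-axis curve where the unwrapped phase
    k*z + l*phi - (2p+|l|+1)*arctan(z/z0) stays equal to 2*pi*branch."""
    gouy = (2 * p + abs(l) + 1) * np.arctan(z / z0)
    phi = (2 * np.pi * branch - k * z + gouy) / l
    return np.degrees(np.angle(np.exp(1j * phi)))   # wrap to (-180, 180]

for m in range(l + 1):
    print(f"z = {m} lambda -> helix 0 at {helix_azimuth_deg(m * wavelength):7.1f} deg")
# Prints approximately 0, -120, +120, 0 degrees: the tracked helix completes a full
# turn only at z = l*lambda, so T_hp = l*lambda/v0 and omega_distinguishable = k*v0/l
# (Eq. 6), whereas the wrapped mod-2*pi pattern already repeats every wavelength.
```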
To emphasize the latest finding, the rotational velocities \(\omega_{indistinguishable}\) and \(\omega_{distinguishable}\) are plotted in Fig. 3, using a wavelength of 633 nm, a beam waist of \(\omega_{0}=\) 5.0 mm, \(l=\) 1,2,3,4, and \(p=\) 0. When the helical-phase is indistinguishable (dashed lines), the rotational velocity of the phase distribution can be considered the same, even for very large values of the topological charge \(l\). However, when the helical-phase is distinguishable (solid lines), only the rotational velocity for \(l=1\) lies at the same value as \(\omega_{indistinguishable}\). For \(l>\)1, there is a notable difference between both rotation speeds. Figure 2: Tracking of the rotation of the LG phase distribution for \(p=0\) and \(l=3\) using a beam waist \(\omega_{0}=\) 5.0 mm and wavelength \(\lambda=\) 633 nm, at different values of the axis propagation, \(z=0\), \(\lambda\), \(2\lambda\), and \(3\lambda\). a) All helices in the phase distribution have the same color, making them indistinguishable from one to another. b) Each helix in the phase structure has a different color, making them distinguishable from one to another. ## 5 Conclusion A novel approach to identify the distinguishability characteristic of a helical-phase distribution of LG vortex beams is presented. This feature makes it possible to identify each helical-phase structure in the LG beam phase distribution, with a relevant impact on the rotational speed of this phase. A simulation of the rotation of the helical phase is performed, and its results validate the mathematical expressions of the proposed approach. This rotation can be clockwise when the longitudinal phase is taken as \(\exp\left(ikz\right)\), and counterclockwise when it is \(\exp\left(-ikz\right)\). If the helical-phase distribution of a vortex beam cannot be distinguished, each phase distribution would be interpreted as a unique distribution, due to its rotational symmetry, as shown in Fig. 2a. On the other hand, if the helical structure is distinguished, the angular rotation speed is found using Eq. (6). The larger the azimuthal index, the lower the speed at which the phase distribution rotates. One future application of the present helical-phase distinguishable theory is the possibility of being used as a coding communication technology to carry some kind of information in each helix of the helical structure of the LG beam phase. To accurately decode the information, it is important to distinguish each helix in the received phase distribution, i.e., we need to know which helix is the first one, the second one, and so on. Therefore, the phase distribution needs to be tracked while the LG beam moves from the emitter to the receiver. Additionally, using this coding technology in combination with other optical dimensions can further improve the coding efficiency and increase the information transmission capacity [42]. Another application can be in the optical particle manipulation field, by studying the dynamics of off-beam trapped particles, including acceleration and deceleration of the angular rotation speed [43].
2306.10209
ZeRO++: Extremely Efficient Collective Communication for Giant Model Training
Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language models on massive GPUs clusters due to its ease of use, efficiency, and good scalability. However, when training on low-bandwidth clusters, or at scale which forces batch size per GPU to be small, ZeRO's effective throughput is limited because of high communication volume from gathering weights in forward pass, backward pass, and averaging gradients. This paper introduces three communication volume reduction techniques, which we collectively refer to as ZeRO++, targeting each of the communication collectives in ZeRO. First is block-quantization based all-gather. Second is data remapping that trades-off communication for more memory. Third is a novel all-to-all based quantized gradient averaging paradigm as replacement of reduce-scatter collective, which preserves accuracy despite communicating low precision data. Collectively, ZeRO++ reduces communication volume of ZeRO by 4x, enabling up to 2.16x better throughput at 384 GPU scale.
Guanhua Wang, Heyang Qin, Sam Ade Jacobs, Connor Holmes, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He
2023-06-16T23:26:19Z
http://arxiv.org/abs/2306.10209v1
# ZeRO++: Extremely Efficient Collective Communication for Giant Model Training ###### Abstract. Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language models on massive GPUs clusters due to its ease of use, efficiency, and good scalability. However, when training on low-bandwidth clusters, or at scale which forces batch size per GPU to be small, ZeRO's effective throughput is limited because of high communication volume from gathering weights in forward pass, backward pass, and averaging gradients. This paper introduces three communication volume reduction techniques, which we collectively refer to as ZeRO++, targeting each of the communication collectives in ZeRO. First is block-quantization based all-gather. Second is data remapping that trades-off communication for more memory. Third is a novel all-to-all based quantized gradient averaging paradigm as replacement of reduce-scatter collective, which preserves accuracy despite communicating low precision data. Collectively, ZeRO++ reduces communication volume of ZeRO by 4x, enabling up to 2.16x better throughput at 384 GPU scale. Large model training, High performance computing, Deep learning ## 1. Introduction On one hand, clusters with low bandwidth are common in the majority of cloud computing environments. 
Although high performance nodes like DGX boxes (Krizhevsky et al., 2017; Zhang et al., 2018) are equipped with high-bandwidth NVLink (Wang et al., 2018) and NVSwitch (Wang et al., 2019) as intra-node interconnects, cross-node links are often 100 Gbps Ethernet or less, which makes them the communication bottleneck. As shown in Figure 1(a), the per GPU throughput on low bandwidth clusters is only half of that with high-bandwidth clusters. On the other hand, even on high-bandwidth clusters, when running on thousands of GPUs, the batch size per GPU is limited by the maximum global batch size that can be used during the training without sacrificing convergence efficiency (Krizhevsky et al., 2017; Zhang et al., 2018; Zhang et al., 2018). In other words, as global batch size cannot be increased indefinitely without slowing down model convergence, training on thousands of GPUs forces the batch size per GPU to be very small, which reduces the compute-to-communication ratio and thus creates a communication bottleneck. As shown in Figure 1(b), the per GPU throughput is heavily impacted by small batch size per GPU, which is a result of the communication bottleneck. However, few efforts have been made to optimize end-to-end communication efficiency for ZeRO. There is much previous work on reducing communication overhead in distributed model training, such as 1-bit LAMB (Lamb, 2018), 1-bit Adam (Srivastava et al., 2014) and other error compensation compression techniques for gradient averaging (Krizhevsky et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). However, none of them can work with ZeRO as they all assume model state replication, while model states are partitioned in ZeRO. We start from scratch and provide an end-to-end system for reducing all communication overhead in ZeRO training. ### ZeRO++ In this paper, we present a novel system of communication optimizations collectively called ZeRO++ that offers dramatic communication volume reduction for ZeRO. Below we discuss the main communication overheads in ZeRO, followed by three different communication optimizations in ZeRO++ that address them. Assume the model size is \(M\). During the forward pass, ZeRO (Zuo et al., 2017) conducts an all-gather operation to collect all the parameters (\(M\)) needed to train for all model layers. In the backward pass, ZeRO re-collects parameters (\(M\)) with all-gather first, then each GPU can compute local gradients. After that, ZeRO uses a reduce-scatter operation to aggregate and redistribute gradients (\(M\)) across accelerators. In total, ZeRO has a total communication volume of \(3M\), spread evenly across two all-gather operations and one reduce-scatter. To reduce these communication overheads, ZeRO++ has three sets of communication optimizations, targeting each of the above-mentioned three communication collectives respectively: **Quantized Weight Communication for ZeRO (qwZ)** First, in order to reduce parameter communication volume during forward all-gather, we adopt quantization on weights to shrink down each model parameter from FP16 (2 bytes) to INT8 (1 byte) data type before communicating, thus reducing the communication volume by half. However, naively conducting quantization on weights may lose model training accuracy. In order to preserve decent model training precision, we adopt block-based quantization (Dong et al., 2019), which conducts independent quantization on each subset of model parameters. There is no existing implementation for high performance block-based quantization. 
Thus, we implement highly optimized quantization CUDA kernels from scratch. **Hierarchical Weight Partition for ZeRO (hpZ)** Second, to reduce communication overhead of all-gather on weights during backward, we trade GPU memory for communication. More specifically, instead of spreading whole model weights across all the machines, we maintain a full model copy within each machine. At the expense of higher memory overhead, this allows us to replace the expensive cross-machine all-gather on weights with intra-machine all-gather, which is substantially faster due to much higher intra-machine communication bandwidth. **Quantized Gradient Communication for ZeRO (qgZ)** Third, reducing communication cost of gradients using reduce-scatter is even more challenging. Directly applying quantization to reduce communication volume is infeasible. The main issue is, even by incorporating block-based quantization to reduce-scatter operation, it will still significantly hurt model training accuracy. The key reason behind is quantization will decrease value precision. And reduction on low-precision values will accumulate and amplify the errors. Therefore, we propose a novel and much more efficient gradient communication paradigm as a general replacement of reduce-scatter collective, where the gradients are compressed using block-based INT4 quantization during the communication to reduce the communication volume, but the full precision is recovered before the reduction operator to preserve training accuracy. We call this \(qgZ\), and is designed to i) overcome significant accuracy loss that would result from low-precision reduction if we were to simply implement reduce-scatter in INT4/INT8, and ii) avoid accuracy degradation and significant latency overhead of a long sequence of quantization and dequantization steps needed by a ring (Zuo et al., 2019) or tree (Zuo et al., 2019; Wang et al., 2019) based reduce-scatter (e.g., left of Figure 5), even if we did the reductions in full-precision. Furthermore, \(qgZ\) leverages the hierarchical nature of modern GPU clusters, where intra-node bandwidth is significantly higher than inter-node, to first reduce gradients within a node before doing cross-node reduction to minimize inter-node communication volume, resulting in 2/4x communication volume reduction (INT8/4) compared to FP16 reduce-scatter. We further reduce end-to-end latency of \(qgZ\) by pipelining intra-node and inter-node communication and conducting CUDA kernel fusion. **Communication Volume Reduction** By incorporating all three components above, we reduce the cross-node communication volume from \(3M\) down to \(0.75M\). More specifically, for forward all-gather operation on model weights, by applying INT8 quantization, we reduce the communication size from \(M\) to \(0.5M\). During backward all-gather on weights, with our secondary copy of model parameters, we reduce the communication size from \(M\) to \(0\). By replacing backward fp16 reduce-scatter on gradients to our novel all-to-all based INT4 reduce-scatter, we reduce cross-node communication from \(M\) to \(0.25M\). Thus, in total, we reduce \(3M\) communication to \(0.75M\). 
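To make the block-based quantization underlying \(qwZ\) (and the INT4 variant used in \(qgZ\)) concrete, the following is a plain PyTorch sketch of symmetric per-block INT8 quantization and dequantization; it only illustrates the idea — the block size is an arbitrary example value, and ZeRO++ itself performs these steps in fused CUDA kernels rather than Python.

```python
import torch

def block_quantize_int8(weight: torch.Tensor, block_size: int = 2048):
    """Symmetric per-block quantization: every block of values gets its own scale."""
    flat = weight.detach().float().reshape(-1)
    pad = (-flat.numel()) % block_size
    flat = torch.nn.functional.pad(flat, (0, pad))            # pad to whole blocks
    blocks = flat.reshape(-1, block_size)
    scale = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(blocks / scale), -127, 127).to(torch.int8)
    return q, scale, weight.shape, pad

def block_dequantize_int8(q, scale, shape, pad):
    flat = (q.float() * scale).reshape(-1)
    return (flat[:-pad] if pad else flat).reshape(shape)

w = torch.randn(4096, 4096)                      # stand-in for a weight shard
q, s, shape, pad = block_quantize_int8(w)
w_hat = block_dequantize_int8(q, s, shape, pad)
print((w - w_hat).abs().max())                   # small, because scales are per block
```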
**Evaluation** We implemented ZeRO++ and performed extensive evaluation demonstrating three key results: i) scalability of GPT-3 like models on up to 384 GPUs achieving over 45% of sustained peak throughput, ii) consistent speedup of up to 2.4x over ZeRO (Zuo et al., 2017) baseline across models ranging from 10-138B parameters, and iii) comparing with baseline in 4x higher bandwidth cluster, ZeRO++ achieves similar throughput in low-bandwidth setting. In addition, we show the impact of each of the three optimizations in ZeRO++ and how they compose together. Furthermore, we also show the impact of our optimized kernel implementations on end-to-end system throughput. Finally, we conduct convergence evaluation indicating that ZeRO++ has negligible impact on model convergence and maintains similar model training accuracy as ZeRO baseline. The main contributions of this paper are as follows: * Blocked quantized weights (\(qwZ\)) reduces communication volume of all-gather of weights by 50%. * Hierarchical partitioning of model weights (\(hpZ\)) completely eliminates inter-node all-gather communication in backward propagation. * Novel, all-to-all quantized gradient reduction collective (\(qgZ\)) reduces gradient communication by 75% comparing with reduce-scatter. * Optimized Integration of each of the above techniques into existing ZeRO implementation, that enables communication and computation overlapping, and leverages custom high performance CUDA kernels for quantization, dequantization, as well as operator fusion (section 4). Our implementation translates the 4x communication volume reduction of ZeRO++ into real throughput improvement. * Extensive experiments shows that i) over 45% of sustained peak throughput even at small batch sizes, ii) up to 2.4x end-to-end system improvement over ZeRO, and iii) achieving similar throughput in low-bandwidth cluster compared to baseline in high-bandwidth cluster. In addition, we present performance breakdown and analysis of diffrent components of ZeRO++.Our end-to-end training shows that ZeRO++ does not affect model convergence. * ZeRO++ is open-sourced and released as part of [https://github.com/microsoft/DeepSpeed](https://github.com/microsoft/DeepSpeed) ## 2. Background and Related Work ### Data, Model and 3D parallelism Data parallelism (DP), pipeline parallelism (PP), and tensor parallelism (TP) are three forms of parallelism used to train large models across multi-GPU clusters. (Golovolovolov et al., 2015; Golovolov et al., 2015; Golovolov et al., 2016; Golovolov et al., 2017) DP is commonly used when model size fits within a single GPU memory. In DP, each GPU holds a full copy of model weights and trains on separate input data. MP is orthogonal to DP, and is often used in cases where model size cannot fit into a single GPU's memory. Instead of splitting input data, model parallelism partitions a full model into pieces and assigns each model piece onto a GPU. There are mainly two approaches for model parallelism: i) pipeline parallelism (PP) and ii) tensor parallelism (TP). PP (Peters et al., 2016; Golovolov et al., 2016; Golovolov et al., 2017) splits models vertically, creating sequential stages consisting of a contiguous subset of layers. While there is sequential dependency between stages for an input micro-batch, the stages can be executed in parallel across micro-batches. In contrast, TP (Peters et al., 2016) splits each layer across multiple GPUs, where each GPU works on a different part of the layer for the same input. 
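As a single-process illustration of the tensor-parallel split just described (column-style partitioning of one linear layer), the sketch below emulates each GPU with a weight shard and checks that concatenating the shard outputs reproduces the full layer; in a real TP setup each shard lives on a different device and the concatenation is a collective.

```python
import torch

torch.manual_seed(0)
world, d_in, d_out = 4, 1024, 4096           # "GPUs", input width, output width
x = torch.randn(8, d_in)                      # same input on every shard
w = torch.randn(d_out, d_in)                  # full layer weight, y = x @ w.T

shards = w.chunk(world, dim=0)                # each shard owns a slice of the outputs
y_tp = torch.cat([x @ s.t() for s in shards], dim=1)
y_ref = x @ w.t()
print(torch.allclose(y_tp, y_ref, atol=1e-5))   # True: the split changes nothing numerically
```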
3D parallelism (Shen et al., 2016; Wang et al., 2017) refers to combination of DP, PP, and TP, and is capable of achieving excellent throughput and scalability, and has been used to train a wide range of large language models (Golovolov et al., 2016; Golovolov et al., 2017; Golovolov et al., 2017; Golovolov et al., 2017). Despite being highly efficient, 3D parallelism is severely limited by the fact that it requires complete rewrite of model and training pipeline to make them compatible with 3D parallelism (Shen et al., 2016). ``` Input :\(model\),\(worldSize\) Output :\(model\) while\(model\) not convergeddo all\(gather\_Parameters(worldSize)\); model\(forward()\); partition\((worldSize)\); all\(gather\_Parameters(worldSize)\); model\(backward()\); partition\((worldSize)\); reduce\(scatter\_Gradients(worldSize)\); optimizer.step\(()\); endwhile Return:\(model\) ``` **Algorithm 1**ZeRO algorithm ### ZeRO Optimizer ZeRO is a memory-optimized solution for data parallel training. ZeRO partitions and distributes all model states (i.e., parameters, gradients, optimizer states) among GPUs in use and recollects model states only when the layer needs to be computed. There are three different stages for using ZeRO to optimize on-device memory usage. In ZeRO stage 1 (ZeRO-1), only optimizer states are split and spread across all GPUs in use. ZeRO stage 2 (ZeRO-2) partitions both optimizer states and gradients, where ZeRO stage 3 (ZeRO-3) splits all three components of model states as parameters, gradients, and optimizer states. ZeRO-3 is the most memory efficient solution for model training at large scale, but at the cost of more collective communications. Algorithm 1 illustrates the high-level pseudocode for ZeRO-3. During model training, ZeRO-3 lazy-schedules the fetching of parameters until the computation needs to happen on a particular layer. Before forward propagation, ZeRO launches an all-gather to collect the full model weights and then computes the forward pass (line 2-3) of Algorithm 1. Then ZeRO empties the all-gather weights buffer after forward computation completes (line 4). During backward, ZeRO re-collects all model weights again via a second all-gather (line 5) to calculate gradients (line 6). Once gradients are calculated on each GPU, ZeRO empties weights buffer again (line 7) and conducts a reduce-scatter operation to do gradient averaging and re-distribution (line 8). Model states and parameters are updated in optimizer step (line 9). In a nutshell, to minimize the on-device memory footprint using ZeRO-3, three collective communication operations are issued at each training iteration, which include 2 all-gather on weights and 1 reduce-scatter on gradients. ### Communication Reduction Techniques **Quantization:** Quantization is often used to reduce memory footprint, and data movement volume by using low precision to represent data (Golovolov et al., 2016; Golovolov et al., 2016). However, the loss of information from representing high precision data with lower precision often comes with accuracy degradation. Many related work focus on improving quantization accuracy. The fundamental challenge of quantization accuracy lies in the vast difference in number ranges and granularity between high precision and low precision data (Eg. FP32/16 vs. INT8). Some related work (Wang et al., 2017) propose to filter the outliers in data to mitigate the gap in numerical ranges. Yet their accuracy hinges on the quality of outlier filtering and it brings extra filtering overhead. Dettmers et al. 
(2018) proposes to use block based quantization on optimizer states to improve the quantization accuracy yet it requires changes to the model structure thus limits its usability. **Gradient Compression:** Starting from 1-bit SGD of error-compensation compression (Wang et al., 2017), gradient compression has been pushed to an extreme direction of using just a single bit. To deal with non-linear gradient-based optimizers like Adam or Lamb, 1-bit quantization algorithms like 1-bit Adam (Kingmae and Ba, 2015) and 1-bit Lamb (Kingmae and Ba, 2015) are proposed, which achieve extreme efficient gradient communication in distributed training. However, 1-bit Adam/LAMB cannot be directly applicable to ZeRO-3. The main reason is 1-bit Adam/Lamb assumes each GPU has the full view of optimizer states (OS) for the model, but ZeRO-3 splits it across all the GPUs in use. Therefore, it is infeasible to directly apply existing gradient compression techniques at ZeRO-3 and we need to design our own. **ZeRO Communication Reduction:** To reduce expensive cross-node communication, recent optimization on ZeRO-3, such as MiCS (Wang et al., 2017), trades on-device memory for communication. In MiCS, the GPU cluster is divided into sub-groups, and model states are partitioned within a sub-group but replicated across sub-groups. By keeping the sub-group size small, MiCS can either leverage high bandwidth intra-node interconnect, or use hierarchical communication to lower the communication volume. _hpZ_ in ZeRO++ adopts a similar approach of trading memory for less communication. The key difference is that _hpZ_ only do secondary partition on weights, while keeping all other model states partitioned across all GPUs. This allows hpZ to achieve significant communication reduction without the massive memory overhead of MiCS. ## 3. Design In this section, we elaborate on the design of our three key optimizations in ZeRO++ introduced in Section 1 for reducing the communication overhead of ZeRO: i) Quantized Weight Communication for ZeRO (_qwZ_), ii) Hierarchical Partitioning for ZeRO (_hpZ_), and iii) Quantized Gradient communication for ZeRO (_qgZ_). After that, we discuss the end-to-end impact of these optimizations to reduce to total communication volume of ZeRO. ### Quantized Weight Communication for ZeRO (_qwZ_) As discussed in Section 2.2, ZeRO partitions the model weights across all the ranks (i.e., GPUs) and fetches the FP16 weights layer-by-layer right before they are needed in computation via all-gather for the forward and backward of each training iteration. To reduce the communication overhead of forward all-gather on weights, _qwZ_, quantizes FP16 weights to INT8 right during the all-gather, and dequantizes them back to FP16 on the receiver side, and then conducts layer computation. While this reduces the communication volume of the all-gather by 2x, doing so naively results in two major issues: i) the lowering of precision results in significant accuracy degradation during training as discussed in 2.3, and ii) the quantization and dequantization overhead negates any throughput gain from communication volume reduction. We discuss the optimized implementation of _qwZ_ to minimize the quantization and dequantization overhead in Section 4. Here, we primarily focus on design choices to mitigate accuracy degradation. _qwZ_ uses blocked based quantization to improve the quantization accuracy. 
As illustrated in Figure 2, each weight tensor is divided into smaller chunks, and converted into INT8 by symmetric quantization, using an independent quantization scaling coefficient. By keeping the quantization granularity small, we significantly mitigate the gap in number ranges and granularity. We show an example of the quantization error of performing block based quantization vs. the non-blocked quantization baseline in Figure 2(a). Fig. 2(b) shows a case study of weights quantization on BERT model, where block based quantization reduces the quantization error by 3x. More in-depth convergence evaluations are shown in Sec. 5. Figure 3. hpZ removes cross node traffic in backward all-gather by holding secondary weight partitions in on-device memory. Figure 2. Illustration & example of block based quantization vs. baseline ### Hierarchical Partitioning for ZeRO (\(hpZ\)) ZeRO-3 partitions all its model states across all its ranks, resulting in communication collectives that span all the GPUs. With \(hpZ\), we notice that it is possible to have different partitioning for different model states, limiting the communication collectives to a subset of the GPUs. Given that on modern GPU clusters, intra-node communication bandwidth is significantly higher than inter-node communication bandwidth, this presents opportunities to reduce the inter-node communication. More specifically, in \(hpZ\), we eliminate the inter-node all-gather during the backward pass by holding secondary FP16 weights partition within each node. We do this by creating a hierarchical partitioning strategy consisting of two partitions: first, all model states are partitioned globally across all devices as in ZeRO-3, which we call primary partition. Second, a secondary copy of FP16 parameters is partitioned at the sub-global level (e.,g., compute node, see figure 3), which we call secondary partition. This secondary copy of FP16 parameters is replicated across multiple secondary partitions. Consider a 64-node cluster, each node with 8 GPUs. Model weights are partitioned in two stages: i) across all 512 GPUs that we call primary partition, and ii) the same weights are also partitioned within a compute node across 8 GPUs, that we call secondary partition. In this example, for the secondary partition, each compute node in the cluster holds a full replica of FP16 weights partitioned among the 8 GPUs within the node, and there are 64 of such replicas in total. #### 3.2.1. A training iteration with \(hpZ\) During the forward pass of a training iteration, we all-gather weights based on the primary partition across all GPUs. However, once the weights are consumed during the forward pass, they are partitioned based on the secondary partition. Given the temporal consistency of model parameters between forward and backward passes, when the weights are needed again during the backward pass, we all-gather weights based on this secondary group. Note that when the secondary partitioning is set to be a compute node, this avoids any inter-node communication for this all-gather. Finally, at the end of the iteration, during the optimizer step, all the model states, as well as the primary copy of the fp16 parameter are updated based on the primary partition. hpZ makes two changes to baseline ZeRO pseudocode in Algorithm 1: i) in line 4, parameter partitioning is based on _secondary group size_, ii) parameter all-gather preceding backward pass in line 5 is also based on _secondary group size_. 
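A minimal sketch of how the two partitions might be expressed as torch.distributed process groups (the function and variable names here are illustrative, not the ZeRO++ API): the primary group spans all ranks as in ZeRO-3, while each secondary group spans a single node, so an all-gather issued on it never leaves the node.

```python
import torch.distributed as dist

def build_hpz_groups(gpus_per_node: int = 8):
    """Illustrative construction of a global (primary) and per-node (secondary) group."""
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    primary_group = dist.group.WORLD             # ZeRO-3 style partition across all GPUs
    secondary_group = None
    for node in range(world_size // gpus_per_node):
        ranks = list(range(node * gpus_per_node, (node + 1) * gpus_per_node))
        group = dist.new_group(ranks=ranks)      # every rank must take part in every new_group call
        if rank in ranks:
            secondary_group = group
    return primary_group, secondary_group

# Inside an initialized torch.distributed job (PyTorch >= 1.13 for all_gather_into_tensor):
#   primary, secondary = build_hpz_groups(gpus_per_node=8)
#   dist.all_gather_into_tensor(full_weights, local_shard, group=secondary)   # intra-node only
```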
Our design of \(hpZ\) is flexible to support any _secondary group size_. The group size controls how many ranks (i.e., GPUs) are in the secondary partition. It is also a measure of memory-communication trade-off of \(hpZ\). Simply put, by default, \(hpZ\) secondary partition is node-based (recall intra-node bandwidth is multiple factors of inter-node bandwidth for current and future hardware configurations) but can be extended to support multiple compute nodes as needed. #### 3.2.2. Memory Usage Analysis By design, \(hpZ\) trades memory for communication efficiency. It is important to analyze this tradeoff. Recall that standard data parallel DNN (DP) replicates model parameters across data parallel ranks, ZeRO-3 on the other hand partitions parameter across data parallel ranks. A midway approach is model parameters partitioned across a subset of devices as long as model parameters fit. Figure 4 provides a concrete memory usage estimate of a typical large language model of size of 100B parameters, with primary group size of 1024 GPUs and secondary group size of 16 GPUs (e.g., DGX-2 V100 node). As shown in Figure 4, with our proposed method, \(hpZ\) consumes 8.9\(x\) more memory than ZeRO-3, our approach is still 114x less memory requirement than standard DP. This marginal increase in memory usage is compensated for by efficient intra-node communication schedule. By eliminating or reducing inter-node communication for backward pass, \(hpZ\) reduces the end-to-end communication of ZeRO by 1.5\(x\), while still supporting model training with hundreds of billions of parameters. ### Quantized Gradients Communication for ZeRO (\(qgZ\)) In this section, we propose a novel quantized reduce-scatter algorithm called qgZ based on all-to-all collectives that enables a 4x communication volume reduction of gradient reduce-scatter by replacing FP16 with INT4 quantized data, while overcoming precision loss challenges described in Section 1, as well as numerous system challenges that we will outline in this section. qgZ leverages all-to-all collectives to implement quantized reduce-scatter which includes three major components: 1) all-to-all-based implementation of quantized gradient reduce-scatter, 2) reducing communication volume with hierarchical collectives, 3) tensor slice reordering for correct gradient placement. We talk about each of them step-by-step. #### 3.3.1. All-to-all based implementation A naive approach towards quantized reduce-scatter, while avoiding precision loss due to reduction is to apply quantization and dequantization to a ring-based reduce-scatter directly as shown on the left of Figure 5. We can inject quantization and dequantization on each GPU. Once a GPU Figure 4. Per-device memory consumption analysis of standard data parallel (DP), ZeRO stage 3 (ZeRO-3) and proposed hierarchical partitioning of ZeRO parameters (\(hpZ\)). \(K\) denotes the memory multiplier of optimizer states, \(M\) represents the number of trainable parameters, \(P\) is the data parallel group size or world size, and \(\alpha\) is the number of secondary groups or ratio of world size to the number of ranks in the secondary group. A typical real world scenario example is provided in the last column. We assume a model size of 100B trained on 1024 V100 GPU DGX cluster (64 compute nodes, 16 GPUs per node). receives gradients from its predecessor, we dequantize it to recover full precision and conduct a local reduction. Next we can quantize local reduction output and pass quantized data to its successor. 
To finish the whole reduce-scatter, the number of sequential quantization and dequantization kernels is equal to the number of GPUs (i.e., n) in use. Thus, applying quantization and dequantization on existing ring based reduce-scatter collective will lead to high communication latency and low value precision due to multiple sequential quantization and dequantization steps. Although recent tree-based collective like Blink[38] could reduce the number of sequential kernels from n to log(n), the long latency and low precision issue is not completely resolved. To overcome this, we completely abandon existing ring-based reduce-scatter approach and incorporate 1-hop all-to-all collective for our gradient communication. As shown on the right of Figure 5, we first apply quantization on a given tensor, then we conduct all-to-all communication among all the GPUs. After all-to-all, we apply another dequantization to recover the data precision and then reduce on high-precision values to get the final gradient reduction output. By replacing ring-based solution with our all-to-all collective, we reduce the number of sequential quantization+dequantization kernel from the number of GPUs to 1. Thus, we solve the long latency and low precision issues when applying quantization in reduce-scatter for supercomputing scenarios like DGX boxes connected in fat-tree topology. #### 3.3.2. Reducing inter-node communication volume Although replacing reduce-scatter with all-to-all achieves single-shot quantization and dequantization, it introduces a new problem; the inter-node communication volume increases instead of decreasing despite the quantization of data. We elaborate on this in Figure 6. Here we assume model size of \(M\), GPU per node is \(N\), gradient compression ratio as \(Z\). Reduce-scatter, reduces the data during transmission over the ring, thus the total amount of data for cross-node communication is M. However, when using our 1-hop all-to-all approach, even though the data are compressed before communication (i.e., \(M/Z\)), each GPU needs to send out \(M/Z\) amount of data to GPUs on the other nodes. Therefore, each machine will generate \(N*M/Z\) amount of cross-node communication data, which is much bigger than reduce-scatter communication volume. To address this, we do a hierarchical 2-hop all-to-all instead of 1-hop: a) first intra-node all-to-all and b) followed by inter-node all-to-all, which is shown as Figure 7. First, with high-bandwidth links among GPUs inside a machine, we conduct intra-node all-to-all on quantized data, then dequantize data and reduce on dequantized data. After intra-node quantization, all-to-all, dequantization, and reduction, we reduce the data size per GPU from \(M/Z\) to \(M/(Z*N)\). After intra-node all-to-all is completed, we conduct the inter-node all-to-all communication, which is similar to 1-hop all-to-all we described above. Given that now each GPU only needs to send out \(M/(Z*N)\) data, the communication volume per machine is now \(M/(Z*N)*N=M/Z\). By adopting this hierarchical all-to-all communication as 2-hop approach, we resolve the communication volume blow-up issue in our 1-hop scheme perfectly. Note that even though the total communication volume is doubled (one intra-node, the other inter-node), intra-node communication introduces negligible overhead given NVLink/NVswitch high bandwidth, and cross-node traffic has been significantly reduced, which is the major bottleneck in gradient communication. #### 3.3.3. 
Tensor slice reordering for correct data placement With the 2-hop all-to-all, the inter-node communication volume is as expected, however, this introduces a gradient misplacement issue. We describe this issue using a 2x2 example, where we have 2 machines and each machine has 2 GPUs. As shown in Figure 8, the correct final gradient placement is shown as green boxes in the figure, where GPU 0 holds final gradient partition 1, GPU 1 holds gradient partition 2, so on and so forth. Figure 5. Comparison between ZeRO-3 ring-based reduce-scatter and qgZ 1-hop all-to-all. Figure 6. Communication volume comparison between ZeRO-3 reduce-scatter and qgZ 1-hop all-to-all. Figure 7. qgZ apply hierarchy all-to-all to reduce cross node traffic. Our 2-step all-to-all communication works as follows, first we divide all gradients on each GPU into 4 chunks, then conduct our intra-node all-to-all. After intra-node all-to-all finishes, GPU0 (i.e., G0) holds partial aggregated gradient partition 1,2 whereas G1 holds gradient partition 3,4. Same thing happens on G2 and G3. Since G1 does not have gradient partition 2 (which is supposed to be held by G1) while G2 does not have gradient partition 3, after inter-node all-to-all, there is gradient misplacement issue on both G1 and G2. We address this with tensor slice reordering. As shown in Figure 9, before intra-node all-to-all begin, we first swap the tensor slice order of slice 2 and 3, which is shown as orange arrows. Then after intra-node all-to-all is completed, G1 now has gradient 2 while G2 has gradient 3. Therefore, after the inter-node all-to-all, all GPUs get the correct gradient placement. Mathematically, given X GPUs per node and Y nodes in total, each GPU will hold X'Y gradient slices initially. Our tensor slice reordering works as follows: \[before:[0,1,2,3,4,...YX-3,YX-2,YX-1] \tag{1}\] \[after:[0,X,2X,...(Y-1)X,1,X+1,(Y-1)X+1,...YX-1] \tag{2}\] Based on Equation 1 and 2, we can map each original tensor slice position (i.e., Equation 1) to new tensor slice position (i.e., Equation 2) on each GPU to correct final gradient misplacement issue. In summary, by solving above three challenges step-by-step, we design a novel gradient communication and reduction protocol, which can be a more communication efficient and generalized replacement of reduce-scatter collective. We discuss some of the optimization and implementation details for our approach in Sec. 4. ### ZeRO++ Communication Volume Analysis Table 1 illustrates theoretical communication volume comparison between ZeRO-3 and ZeRO++. We assume the model size of \(M\). As described in Section 2, during ZeRO-3 there are 3 collective calls: all-gather on weights in forward pass, then all-gather on weights in backward pass and last is reduce-scatter on gradients in the backward. And each collective communicates \(M\) volume of data. With ZeRO-3, in total we need to communicate 3M data per each training iteration. Given that intra-node communication is fast with NVLink and NVSwitch, we ignore intra-node communication and focus on cross-node traffic only. For all-gather in the forward pass, by incorporating our quantized weights communication, we reduce communication volume from M to 0.5M. During the all-gather in the backward pass, by holding secondary weights partition within each node, we completely removed cross-node traffic. For reduce-scatter in the backward pass, by replacing reduce-scatter with our novel quantized gradient communication protocol, we reduce cross-node traffic from M to 0.25M. 
Therefore, compared with ZeRO-3, ZeRO++ reduces communication volume from 3M down to 0.75M for each training iteration. ## 4. Optimized Implementation In this section, we discuss two key optimizations that enable ZeRO++ to fully realize the potential of 4x communication volume reduction to improve throughput without getting limited by implementation overheads: i) overlapping different communication and compute streams, when doing so enables better resource utilization, and ii) optimized CUDA kernels for quantization, dequantization, and tensor slice reordering operators, and kernel fusion across these operators when appropriate to minimize the memory traffic overhead. Below we discuss the two lines of optimization in detail. ### Overlap Compute and Communication To reduce end-to-end communication time, we overlap quantization computation with communication for all-gathering of weights in both forward and backward passes. For the hierarchical all-to-all based reduce-scatter implementation of gradients, we overlap the intra-node communication with inter-node communication. #### 4.1.1. Communication-computation overlapping on weights For all-gather on weights, we enable communication-computation overlap using two key features : i) we track the execution order of model layers to get the sequence they will be fetched. ii) we guarantee asynchronous quantization execution. Specifically, the call to the \begin{table} \begin{tabular}{|c|c|c|c|} \hline Comm. & forward & backward & backward \\ Volume & all-gather & all-gather & reduce-scatter \\ \hline ZeRO-3 & M & M & M \\ \hline ZeRO++ & 0.5M & 0 & 0.25M \\ \hline \end{tabular} \end{table} Table 1. Communication volume comparison between ZeRO-3 and ZeRO++. Figure 8. Gradient partition misplacement when applying hierarchical all-to-all in qgZ. Figure 9. Tensor slices reordering to correct gradient misplacement in qgZ. quantization kernel is non-blocking and we further avoid operations that involve explicit/implicit CUDA synchronization (e.g. tensor concatenation), making the quantization a non-blocking operation that can be launched asynchronously. With this two features, as ZeRO fetch parameters for each layer, the communication of the current layer and the quantization of the next layer can be launched at the same time on different CUDA streams. When the quantized data are needed for the next layer, ZeRO++ synchronizes the quantization stream to make sure the quantized data are ready. This approach hides the quantization cost of the next layer under the communication time span of the current layer which hides the quantization overhead. #### 4.1.2. Hierarchical Collectives for Gradient Communication As discussed in Sec. 3.3.2, our all-to-all based gradient communication is broken into two stages: first intra-node communication followed by inter-node communication. The inter-node communication depends on the results of the intra-node communication, therefore, with a naive implementation, inter-nodes links are idle during intra-node communication and vice versa. To reduce latency by leveraging both inter-node and intra-node links in parallel, we chunk our input gradient tensor and pipeline transfer between intra-node communication and inter-node communication. As shown in Figure 10, compared with "no pipeline" case on the top, simply adopting a "2-stage pipeline" transfer achieves the amount of end-to-end latency reduction shown as the red arrow-line in Figure 10. 
By overlapping intra-node and inter-node communication, the end-to-end latency of gradient communication is significantly reduced. Doing this pipelining correctly has implications for our tensor slice reordering process: the more pipeline stages we have, the more fine-grained the tensor slices needed for reordering. Therefore, we also propose a generalized tensor slice reordering scheme in Algorithm 2, which covers both the pipelined and non-pipelined data transfer cases. Here \(stages\) refers to the number of pipeline stages, \(nodeSize\) is the number of GPUs per node, and \(nodes\) is the number of nodes. Next, we discuss how we optimize our CUDA kernels to further reduce all quantization-related overhead.

```
Constants: stages, nodeSize, nodes
Input: partitionID
Output: mappedPartitionID
1  totalDevices <- nodeSize * nodes
2  stageID <- partitionID % totalDevices
3  chunkID <- partitionID / totalDevices
4  pipelineOffset <- stageID * totalDevices
5  chunkOffset <- stageID / nodeSize
6  chunkBase <- (chunkID % nodeSize) * nodes
7  Return: pipelineOffset + chunkBase + chunkOffset
```
**Algorithm 2** Generalized tensor slice reordering (qgZ)

### CUDA Kernels

As existing quantization implementations are unable to capture the combination of data mapping and high throughput necessary to minimize kernel overhead, we implement and optimize custom CUDA kernels for these primitives. In particular, these kernels aim to (1) saturate device memory bandwidth and (2) minimize the total traffic via fusion.

**Maximizing Bandwidth Utilization:** A core quantization and dequantization library of composable operators was developed as the foundation for ZeRO++. The core primitives leverage efficient vectorized memory accesses at the maximum granularity a given GPU architecture supports. In order to satisfy the alignment requirements of these instructions, model state is partitioned such that quantization granularities are 16B aligned. Additionally, we leverage instruction-level parallelism to overlap multiple memory transactions with each other. In practice, the combination of vectorized accesses and instruction-level parallelism enables the quantization library to achieve full GPU memory bandwidth utilization.

**Minimizing Total Traffic:** Multiple techniques are used to reduce the total memory traffic of the quantization kernels. First, the size of each quantization block is tuned so as to express sufficient parallelism to schedule across a GPU's streaming multiprocessors, while caching not-yet-quantized values in the register file as the quantization scale and offset for the block are calculated. Second, we fuse tensor reshaping and quantization into the same kernel to avoid redundantly loading data from global memory. For example, the tensor slice reordering (i.e., the orange arrow-lines in Figure 9) is realized within a fused quantization and remapping kernel. This fused kernel achieves the same level of performance as a single quantization kernel working on contiguous data. Finally, we fuse sequential dequantization, reduction, and quantization operations into a single kernel implementation, which reduces total memory traffic by 9x in \(qgZ\).

## 5. Evaluation

In this section, we perform three sets of evaluations for ZeRO++.
First, we perform end-to-end evaluations showing: i) scalability on up to 384 GPUs, ii) speedup over the state-of-the-art (SOTA) baseline across models ranging from 10B to 138B parameters, and iii) throughput comparisons for cluster settings with varied cross-node bandwidth. Second, we perform throughput analysis and breakdown, evaluating the impact of different components of ZeRO++, as well as the impact of our kernel optimizations on end-to-end throughput. Finally, we show a convergence evaluation indicating that ZeRO++ does not harm model convergence and maintains similar model training accuracy.

Figure 10. Pipelining and overlapping intra-node communication with inter-node communication in \(qgZ\).

### Methodology

**Hardware:** 24 NVIDIA DGX-2 nodes, each with 16 V100 SXM3 32 GB GPUs (Krizhevsky et al., 2015). The nodes are connected by InfiniBand (IB) with NVIDIA SHARP support (Krizhevsky et al., 2015), achieving a total inter-node bandwidth of over 800 Gbps. To evaluate ZeRO++ in clusters under different network environments, we show the performance of ZeRO++ running with different cross-node bandwidths by enabling from 1 to 8 IB connections (i.e., 100 Gbps to 800 Gbps).

**Baseline:** We use ZeRO-3 as the baseline given its ease of use for training giant models at large scale. To evaluate the performance of our optimized kernels, we also implemented ZeRO++ with PyTorch quantization (Paszke et al., 2017) and non-fused kernels as baselines for our ablation study.

**Model Configurations:** We use GPT-style transformer models for evaluation. Given that Megatron-Turing-NLG (Wang et al., 2018) trains a 530B model on 2K GPUs using 2K tokens per GPU (i.e., micro batch size), we evaluate ZeRO++ with the same 2K tokens per GPU setting. We also evaluate 1K tokens per GPU to test ZeRO++ in a more extreme scale scenario. The number of layers and hidden sizes are adjusted to obtain models of different sizes. Please refer to the appendix and our open-sourced evaluation scripts for hyperparameters and other training details.

### E2E System Evaluations

We evaluate ZeRO++ end-to-end performance here. One key metric we use is the percentage of _peak performance_, shown in Equation 3.

\[peak\_performance=achieved\_TFLOPs/max\_TFLOPs \tag{3}\]

Given that we use V100 GPUs, the _max_TFLOPs_ is 120 TFLOPs (Zhou et al., 2018) for mixed-precision computation. Thus, our reported _peak performance_ refers to the percentage _achieved_TFLOPs_/120.

#### 5.2.1. Scalability up to 384 GPUs

In Figure 11, we present the ZeRO++ scalability evaluation from 64 to 384 GPUs with an 18B model on both low (1 IB) and high (8 IB) bandwidth clusters. On the low bandwidth cluster, ZeRO++ achieves 30% and 38.3% of peak performance (120 TFLOPs) even at 384 GPUs for 1K and 2K batch sizes, which is much higher than the baseline peak performance of 12.5% and 21.6%. This represents up to **2.4x** better throughput. On the high bandwidth cluster, despite having significantly more bandwidth, ZeRO++ still enables up to 1.29x better throughput, and can achieve up to 45% of sustained peak throughput at 384 GPUs. ZeRO++ significantly speeds up large scale training on low bandwidth clusters while achieving decent speedup even on high bandwidth clusters.

#### 5.2.2. Throughput for different model sizes

Table 2 compares training throughput for models of 18B-138B parameters on 384 GPUs between ZeRO++ and the baseline on both low and high bandwidth clusters.
On low bandwidth cluster, ZeRO++ consistently achieves over 31.5% and 18.1% peak performance for 2K and 1K batch sizes on all models. Compared with the baseline peak performance of 16.6% and 9.3%, the speedup is up to **2.16x**. On high bandwidth cluster, ZeRO++ peak performances are 44.7% and 31.5%, which is 1.3x over the baseline peak performance of 31.5% and 26.0%. ZeRO++ is robust and offers consistent speedup across different model and batch sizes as well as across clusters with different network bandwidths. \begin{table} \begin{tabular}{c c c c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{1 IB Connection} & \multicolumn{3}{c}{8 IB Connections} \\ \hline Model & Tokens & Baseline & ZeRO++ & \multirow{2}{*}{Speedup} & Baseline & ZeRO++ & \multirow{2}{*}{Speedup} \\ Size & per GPU & TFLOPs & TFLOPs & & TFLOPs & TFLOPs & \\ \hline 138B & 2K & 19.96 & 37.90 & 1.90x & 47.55 & 53.50 & 1.16x \\ 138B & 1K & 11.25 & 21.81 & 1.94x & 34.19 & 44.38 & 1.30x \\ \hline 91B & 2K & 19.99 & 38.06 & 1.90x & 47.74 & 56.26 & 1.18x \\ 91B & 1K & 11.27 & 21.93 & 1.95x & 34.49 & 44.36 & 1.29x \\ \hline 49B & 2K & 20.06 & 38.08 & 1.90x & 48.05 & 56.24 & 1.17x \\ 49B & 1K & 11.27 & 21.95 & 1.95x & 34.54 & 44.46 & 1.29x \\ \hline 18B & 2K & 25.98 & 46.40 & 1.79x & 47.31 & 53.65 & 1.13x \\ 18B & 1K & 14.15 & 30.57 & 2.16x & 31.27 & 37.87 & 1.21x \\ \hline \hline \end{tabular} \end{table} Table 2. End-to-end speedup of ZeRO++ on 384 GPUs with different model sizes Figure 11. Scalability on up to 384 GPUs of 18B model with different numbers of InfiniBand connections and tokens per GPU Figure 12. ZeRO++ achieving high bandwidth cluster performance with significantly lower bandwidth #### 5.2.3. Democratization for large scale training Figure 12 compares the throughput of ZeRO++ on a low cross-node bandwidth (200 Gbps as 2 IB) cluster with the baseline running on 800 Gbps high-bandwidth (8 IB) cluster. For small model of 18B, ZeRO++ achieves a higher peak performance of 41.6% compared with baseline peak performance of 39.1% despite running with 4x lower cross-node bandwidth. For large model of 138B, ZeRO++ and baseline achieve the same peak performance of 40%, but baseline runs at 4x higher cross-node bandwidth. This evaluation shows that ZeRO++ makes large scale training more accessible by significantly decreasing the minimum cross-node bandwidth requirement for efficient training. Furthermore, it demonstrates that optimized ZeRO++ implementation effectively translates the 4x communication reduction of ZeRO++ into real end-to-end system throughput gain. ### Throughput Breakdown and Analysis #### 5.3.1. Impact of Individual Techniques In Figure 13, we show the individual and combined impact of qwZ, hpZ, and qgZ, on the throughput of 18B model on 128 GPUs. On low bandwidth clusters, each of these techniques enables a speedup ranging from 1.3-1.4x compared with baseline, while achieving an aggregated speedup of up to 2.26x. Note that our TFLOPs throughput is calculated from wall-clock time measurement, ZeRO++ aggregated throughput gain is not equivalent to sum of qgZ, qwZ, hpZ gain. We can validate the theoretical speedup with composition of our techniques by accumulating the speedup multiplicatively: \(1.4*1.26*1.3=2.29\), which is very near to what we achieved as 2.26x. For high bandwidth clusters, the individual speedup ranges between 1.13-1.16x, for a combined speedup of up to 1.3x. 
The figure demonstrates that each of these techniques has a similar impact on throughput improvement and that they compose effectively to produce a much larger aggregated speedup.

#### 5.3.2. Impact of Kernel Optimizations

Here, we evaluate the impact of our optimized kernels on ZeRO++ throughput using an 18B model running on 64 GPUs.

**Quantization Kernel:** As shown in Table 3, compared with the baseline that uses PyTorch quantization (Pasz and Komodakis, 2017), our optimized quantization kernels achieve up to 1.67x speedup in terms of end-to-end throughput. Also, the baseline implementation suffers performance degradation as the group number increases, which means the throughput gap will be larger when used with larger models.

**Kernel Fusion:** As described in Section 4.2, kernel fusion is one of our key optimizations to improve memory throughput when executing sequences of CUDA kernels. Our fusion includes 1) tensor-reorder and quantization fusion and 2) intra-node dequantization, intra-node reduction, and inter-node quantization fusion. As shown in Table 3, we achieve up to 1.15x speedup on the end-to-end throughput.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Optimized Quantization Kernel & Optimized Fusion Kernel & TFLOPs \\ \hline Baseline & N/A & N/A & 15 \\ \hline ZeRO++ & No & No & 19.73 \\ \hline ZeRO++ & No & Yes & 21.6 \\ \hline ZeRO++ & Yes & No & 31.40 \\ \hline ZeRO++ & Yes & Yes & **36.16** \\ \hline \end{tabular}
\end{table}
Table 3. End-to-end performance when using ZeRO++ with/without optimized kernels.

#### 5.3.3. Comparing hpZ with MiCS

As previously discussed in Section 2, closely related to hierarchical weight partitioning for ZeRO (_hpZ_) is _MiCS_ (Komodakis et al., 2017). The key difference between the two methods is what data are replicated in the secondary group: model weights are replicated in _hpZ_, whereas entire model states are replicated in _MiCS_. Table 4 shows the per-GPU throughput of both methods for different model and token size configurations. The table also shows that, given a secondary partition size of a single node (16 V100 GPUs), _hpZ_ can support an 18 billion parameter model whereas _MiCS_ reports out-of-memory (OOM) at this scale.

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Model Size & Token Size & ZeRO TFLOPs & hpZ TFLOPs & MiCS TFLOPs \\ \hline 7.5B & 1K & 36.99 & 38.39 & 38.96 \\ \hline 7.5B & 2K & 53.3 & 54.4 & 52.72 \\ \hline 18B & 1K & 51.47 & 52.42 & OOM \\ \hline 18B & 2K & 60.94 & 61.44 & OOM \\ \hline \end{tabular}
\end{table}
Table 4. hpZ vs MiCS evaluation on a 4 node cluster (16 V100 GPUs per node).

Figure 13. Throughput of 18B models on 128 GPUs with ZeRO++, qwZ, qgZ, hpZ, and baseline on different numbers of InfiniBand connections.

### Model convergence analysis

Next we evaluate ZeRO++'s impact on model convergence by training a GPT-350M model with 30B tokens on the Pile dataset (Beng et al., 2017) using ZeRO++, ZeRO++ with basic (non-blocked) quantization, and ZeRO-3 as the baseline. All hyperparameters are kept the same between the baseline training and the ZeRO++ trainings to ensure a fair comparison. Convergence is measured by the validation LM loss. In Figure 14, we present the end-to-end training trace. The training with basic (non-blocked) quantization diverged at the beginning, so there is no visible data; on the contrary, ZeRO++ is closely aligned with the baseline, which also confirms our previous analysis of better quantization accuracy when using block-based quantization.
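The difference between the basic (non-blocked) quantization that diverged and the block-based scheme used by ZeRO++ can be illustrated with the short PyTorch sketch below. It is a simplified reference implementation, not the optimized CUDA kernels, and the block size is an arbitrary illustrative choice: each block receives its own scale, so a single outlier no longer degrades the precision of the whole tensor.

```python
import torch

def blockwise_int8_quantize(tensor, block_size=2048):
    """Symmetric int8 quantization with one scale per block (vs. a single
    scale for the whole tensor in the basic, non-blocked scheme)."""
    flat = tensor.flatten().float()
    pad = (-flat.numel()) % block_size
    if pad:
        flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)
    scales = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    quantized = torch.clamp((blocks / scales).round(), -127, 127).to(torch.int8)
    return quantized, scales

def blockwise_int8_dequantize(quantized, scales, numel):
    """Inverse mapping; returns a flat fp32 tensor of the original length."""
    return (quantized.float() * scales).flatten()[:numel]
```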
We further extended the convergence evaluation by comparing the final evaluation loss at the end of training. As shown in Table 5, even with all three optimizations on, the final evaluation loss is only off by 1%. We further closed this convergence gap by using a straightforward interleaving schedule, where hierarchical partitioning and quantized weights are turned on throughout training and quantized gradients are only turned on for the first 50% of training. For a more extended case, we also evaluate hierarchical partitioning and quantized weights alone. The results suggest our convergence is identical to the baseline in this case.

## 6. Conclusion

This paper presents ZeRO++, an efficient collective communication solution for giant model training using ZeRO stage-3. We optimize both model weight and gradient communication in the forward and backward passes of each training iteration. To reduce the communication volume of model weights in forward propagation, we adopt block-based quantization and data pre-fetching. To remove cross-node communication of weights during the backward pass, we hold a secondary model partition on each node to trade memory for communication. To minimize gradient communication during backward propagation, we design and implement a novel all-to-all based gradient quantization and reduction scheme. By incorporating all three optimizations above, we improve system throughput by up to 2.16x in large scale model training using 384 V100 GPUs. We envision ZeRO++ as the next generation of easy-to-use frameworks for training giant models at trillion-level model scale.

## 7. Authorship and Major Credit Attribution

* **Guanhua Wang:** design and implementation of qgZ, code integration, high performance quantization kernels design and implementation, solving all CUDA kernel conflicts in code merging, majority of paper writing.
* **Heyang Qin:** design and implementation of qwZ, code integration/resolving conflicts in code merging, experimental design and evaluation, in-depth convergence study.
* **Sam Ade Jacobs:** design and implementation of hpZ, code integration/resolving conflicts in code merging.
* **Connor Holmes:** design and implementation of high performance quantization kernels.
* **Samyam Rajbhandari:** chief architect
* **Olatunji Ruwase:** technical support
* **Yuxiong He:** team lead

Figure 14. Training convergence for GPT-350M on 30B tokens.

\begin{table}
\begin{tabular}{|c|c|} \hline & Evaluation LM loss \\ \hline Baseline & 2.121762 \\ \hline ZeRO++ (hpZ\&qwZ\&qgZ on) & 2.165584 \\ \hline ZeRO++ (hpZ\&qwZ on; qgZ on for first 50\%) & \\ \hline ZeRO++ (hpZ\&qwZ on; qgZ off) & \\ \hline \end{tabular}
\end{table}
Table 5. Validation loss at the end of training (GPT 350M / 30B tokens)
2305.11447
A note on Samelson product in $Sp(n)$
Let $m$ and $n$ be two positive integers such that $m < n$. Let $Q_{n-m+1}$ be the symplectic quasi-projective space of rank $n-m+1$. In this article, we will study the order of the Samelson product $S^{4m-1}\wedge Q_{n-m+1}\rightarrow Sp(n)$.
Sajjad Mohammadi
2023-05-19T05:56:56Z
http://arxiv.org/abs/2305.11447v4
# A note on Samelson product in \(Sp(n)\) ###### Abstract Let \(m\) and \(n\) be two positive integers such that \(m<n\). Let \(Q_{n-m+1}\) be the symplectic quasi-projective space of rank \(n-m+1\). In this article, we will study the order of the Samelson product \(S^{4m-1}\wedge Q_{n-m+1}\to Sp(n)\). **Keywords:** Samelson product; Symplectic group. 2010 Mathematics Subject Classification: 55\(Q15\); 55\(P10\). ## 1 Introduction For two maps \(f_{1}\colon A_{1}\to X\) and \(f_{2}\colon A_{2}\to X\) into a homotopy associative \(H\)-space \(X\) with inverse, the Samelson product of \(f_{1}\) and \(f_{2}\) is defined as the following composition \[A_{1}\wedge A_{2}\stackrel{{ f_{1}\wedge f_{2}}}{{\longrightarrow}}X \wedge X\stackrel{{ c}}{{\longrightarrow}}X\] and is denoted by \(\langle f_{1},f_{2}\rangle\), where the map \(c\) is the reduced commutator map. Let \(G\) be a simple compact connected Lie group. Samelson products play a crucial role in the homotopy theory and have many applications in the homotopy classification of gauge groups of principal \(G\)-bundles, the study of \(H\)-spaces, homotopy commutativity, homotopy normality, and the study of self groups of Lie groups. In recent years, Samelson products have been studied extensively for classical Lie groups and valuable results have been obtained. For example, see [1], [3], [4], [7], [8], [9] and [11]. Let \(m\) and \(n\) be two positive integers such that \(m<n\). Let \(Q_{n-m+1}\) be the symplectic quasi-projective space of rank \(n-m+1\), we denote the inclusion \(Q_{n-m+1}\to Sp(n)\) by \(\epsilon_{m,n}\). Also, let \(\varepsilon_{m,n}\colon S^{4m-1}\to Sp(n)\) represents the generator of \(\pi_{4m-1}(Sp(n))\). In this article, we will study the order of the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\colon S^{4m-1}\wedge Q_{n-m+1} \to Sp(n)\). We will prove the following theorem. **Theorem 1.1**.: _The order of the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\colon S^{4m-1}\wedge Q_{n-m+1} \to Sp(n)\) is equal to_ \[\left\{\begin{array}{ll}\frac{(2n+1)!}{(2n-2m+1)!}&\mbox{if $m$ is even}\;,\\ \\ \frac{2(2n+1)!}{(2n-2m+1)!}&\mbox{if $m$ is odd}\;.\end{array}\right.\] The theorem recovers the known case in [6] when \(m=1\). In Section 2, in cases where \(m\) is an even integer and \(m\) is an odd integer, we will study the group \([\Sigma^{4m-1}Q_{n-m+1},Sp(n)]\). Then in Section 3, regarding the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\) as an element of \([\Sigma^{4m-1}Q_{n-m+1},Sp(n)]\), we will study the order of the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\). ## 2 The group \([\Sigma^{4m-1}Q_{n-m+1},Sp(n)]\) Our main goal in this section is to study the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\), where the map \(\epsilon_{m,n}\colon Q_{n-m+1}\to Sp(n)\) is the inclusion map. We denote \(Sp(\infty)/Sp(n)\) by \(X_{n}\) and \([X,Sp(n)]\) by \(Sp_{n}(X)\). Let \(Q_{n}\) be the symplectic quasi projective space of rank \(n\) defined in [5]. This space has the following cellular structure \[Q_{n}=S^{3}\cup e^{7}\cup e^{11}\cup\cdots\cup e^{4n-1}.\] Put \(X=S^{4m-1}\wedge Q_{n-m+1}\). 
Note that \(X\) has the following cellular structure \[X\simeq S^{4m+2}\cup e^{4m+6}\cup e^{4m+10}\cup\cdots\cup e^{4n+2}.\] Recall that as an algebra \[H^{*}(Sp(n);\mathbb{Z})=\bigwedge(y_{3},y_{7},\cdots,y_{4n-1}),\] \[H^{*}(Sp(\infty);\mathbb{Z})=\bigwedge(y_{3},y_{7},\cdots),\] \[H^{*}(BSp(\infty);\mathbb{Z})=\mathbb{Z}[q_{1},q_{2},\cdots],\] where \(y_{4i-1}=\sigma q_{i}\), \(\sigma\) is the cohomology suspension and \(q_{i}\) is the \(i-\)th symplectic Pontrjagin class. Consider the projection map \(\pi:Sp(\infty)\to X_{n}\), also an algebra we have \[H^{*}(X_{n};\mathbb{Z})=\bigwedge(\bar{y}_{4n+3},\bar{y}_{4n+7}, \cdots),\] \[H^{*}(\Omega X_{n};\mathbb{Z})=\mathbb{Z}\{b_{4n+2},b_{4n+6}, \cdots,b_{8n+2}\}\quad(*\leq 8n+2),\] where \(\pi^{*}(\bar{y}_{4i+3})=y_{4i+3}\) and \(b_{4n+4j-2}=\sigma(\bar{y}_{4n+4j-1})\). Consider the following fibre sequence \[\Omega Sp(\infty)\stackrel{{\Omega\pi}}{{\longrightarrow}} \Omega X_{n}\stackrel{{\delta}}{{\longrightarrow}}Sp(n) \stackrel{{ j}}{{\longrightarrow}}Sp(\infty)\stackrel{{ \pi}}{{\longrightarrow}}X_{n}. \tag{2.1}\] By applying the functor \([X,\quad]\) to fibration (2.1), we get the following exact sequence \[[X,\Omega Sp(\infty)]\stackrel{{(\Omega\pi)}}{{\longrightarrow}} [X,\Omega X_{n}]\stackrel{{\delta_{*}}}{{\longrightarrow}}Sp_{n} (X)\stackrel{{ j_{*}}}{{\longrightarrow}}[X,Sp(\infty)]\stackrel{{ \pi_{*}}}{{\longrightarrow}}[X,X_{n}].\quad(\star)\] Note that \(X_{n}\) has the cellular structure as following \[X_{n}\simeq S^{4n+3}\cup e^{4n+7}\cup e^{4n+11}\cup\cdots.\] Also, we have \[\Omega X_{n}\simeq S^{4n+2}\cup e^{4n+6}\cup e^{4n+10}\cup\cdots.\] According to the \(CW\)-structure of \(X_{n}\), we have the following isomorphisms \[\pi_{i}(X_{n})=0\quad(for\;\;i\leq 4n+2),\qquad\pi_{4n+3}(X_{n})\cong\mathbb{Z}.\] Observe that \([X,Sp(\infty)]\cong[\Sigma X,BSp(\infty)]\cong\widetilde{KSp}^{-1}(X)\). Apply \(\widetilde{KSp}^{-1}\) to the cofibration sequence \(S^{4m-1}\wedge Q_{n-m}\to X\to S^{4n+2}\). We know that \(\widetilde{KSp}^{-1}(S^{4n+2})=0\), for every \(n\geq 1\), so by use of induction, we can conclude that \(\widetilde{KSp}^{-1}(X)=0\). On the other hand, we know that \(\Omega X_{n}\) is \((4n+1)\)-connected and \(H^{4n+2}(\Omega X_{n})\cong\mathbb{Z}\) which is generated by \(b_{4n+2}=\sigma(\bar{y}_{4n+3})\). The map \(b_{4n+2}\colon\Omega X_{n}\to K(\mathbb{Z},4n+2)\) is a loop map and is a \((4n+3)\)-equivalence. Since \(\text{dim}X\leq 4n+2\), the map \((b_{4n+2})_{*}\colon[X,\Omega X_{n}]\to H^{4n+2}(X)\) is an isomorphism of groups. Thus we rewrite the exact sequence \((\star)\) as the following exact sequence \[\widetilde{KSp}^{-2}(X)\overset{\psi}{\longrightarrow}H^{4n+2}(X)\to Sp_{n} (X)\to 0,\] where we use the isomorphism \(\widetilde{KSp}^{-i}(X)\cong[\Sigma^{i}X,BSp(\infty)]\). So we have the exact sequence \[0\to Coker\psi\overset{\iota}{\longrightarrow}Sp_{n}(X)\to 0.\] Therefore we get the following lemma. **Lemma 2.1**.: \(Sp_{n}(X)\cong Coker\psi\)_. \(\Box\)_ In what follows, we will calculate the image of \(\psi\). Let \(Y\) be a \(CW\)-complex with \(\text{dim}\ Y\leq 4n+2\), we denote \([Y,U(2n+1)]\) by \(U_{2n+1}(Y)\). By Theorem 1.1 in [2], there is an exact sequence \[\tilde{K}^{-2}(Y)\overset{\varphi}{\longrightarrow}H^{4n+2}(Y)\to U_{2n+1}( Y)\to\tilde{K}^{-1}(Y)\to 0,\] for any \(f\in\tilde{K}^{-2}(Y)\) the map \(\varphi\) is defined as follows \[\varphi(f)=(2n+1)!ch_{2n+1}(f),\] where \(ch_{i}\) denotes the \(2i\)-dimensional part of the Chern character. 
Also, we use the isomorphism \(\tilde{K}^{-i}(Y)\cong[\Sigma^{i}Y,BU(\infty)]\). In this paper, we denote both the canonical inclusion \(Sp(n)\hookrightarrow U(2n)\) and the induced map \(\widetilde{KSp}^{*}(X)\to\tilde{K}^{*}(X)\) by \(c^{\prime}\). By Theorem 1.3 in [10], there is a commutative diagram (2.2) Therefore to calculate the image of \(\psi\), we first calculate the image of \(\varphi\). We denote the free abelian group with a basis \(e_{1},e_{2},\cdots\), by \(\mathbb{Z}\{e_{1},e_{2},\cdots\}\) and the direct sum of \(k\) copies of \(\mathbb{Z}\) by \(\mathbb{Z}^{k}\). We need the following lemmas. **Lemma 2.2**.: _The following hold:_ \((a)\)_: for \(k\) even, \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\) is isomorphic to \(\mathbb{Z}_{2}\),_ \((b)\)_: for \(k\) odd, \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\) is isomorphic to zero if \(m\) is even and is isomorphic to \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) or \(\mathbb{Z}_{4}\) if \(m\) is odd._ Proof.: To prove this lemma we will use induction. Note that we have the following isomorphisms \[\widetilde{KSp}^{-2}(S^{4i+2})\cong\mathbb{Z}, \widetilde{KSp}^{-2}(S^{4n+1})\cong 0,\] \[\widetilde{KSp}^{-2}(S^{4n})\cong\left\{\begin{array}{ll} \mathbb{Z}/2\mathbb{Z}&\text{if $n$ is even },\\ \\ 0&\text{if $n$ is odd},\end{array}\right. \widetilde{KSp}^{-2}(S^{4n-1})\cong\left\{\begin{array}{ll}0& \text{if $n$ is even },\\ \\ \mathbb{Z}/2\mathbb{Z}&\text{if $n$ is odd}.\end{array}\right.\] Consider the homotopy cofibration sequence \[S^{4m+7}\overset{\Sigma^{4m+1}v^{\prime}}{\longrightarrow}S^{4m+4}\to \Sigma^{4m+1}Q_{2}\to S^{4m+8}\to S^{4m+5},\] where \(v^{\prime}\) is a generator of \(\pi_{6}(S^{3})\cong\mathbb{Z}_{12}\). By applying \(\widetilde{KSp}^{-2}\), we get the following exact sequence \[\widetilde{KSp}^{-2}(S^{4m+5})\to\widetilde{KSp}^{-2}(S^{4m+8})\to \widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\to\widetilde{KSp}^{-2}(S^{4m+4})\] \[\overset{(\Sigma^{4m+1}v^{\prime})^{*}}{\longrightarrow} \widetilde{KSp}^{-2}(S^{4m+7}).\] Now, in cases where \(m\) is even and \(m\) is odd, we obtain the following exact sequences \[0\to\mathbb{Z}_{2}\to\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\to 0,\] \[0\to\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\to\mathbb{Z}_{2} \overset{(\Sigma^{4m+1}v^{\prime})^{*}}{\longrightarrow}\mathbb{Z}_{2},\] respectively. Therefore when \(m\) is even then the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\) is isomorphic to \(\mathbb{Z}_{2}\). Let \(m\) be odd. Since \(Sp(\infty)\) is homotopy euivalent to \(\Omega^{4}O(\infty)\), we can determine the map \(\widetilde{KSp}^{-2}(S^{4m+4})\overset{(\Sigma^{4m+1}v^{\prime})^{*}}{ \longrightarrow}\widetilde{KSp}^{-2}(S^{4m+7})\) by \((\Sigma^{4m+1}v^{\prime})^{*}\colon\pi_{4m+9}(SO(\infty))\to\pi_{4m+12}(SO( \infty))\). Let \(t_{1}\) be a generator of \(\pi_{4m+9}(SO(\infty))\cong\mathbb{Z}_{2}\). Then \((\Sigma^{4m+1}v^{\prime})^{*}(t_{1})\) is the composite \(S^{4m+12}\overset{\Sigma^{4m+6}v^{\prime}}{\longrightarrow}S^{4m+9}\overset{ t_{1}}{\longrightarrow}SO(\infty)\). Note that \(t_{1}\circ\Sigma^{4m+6}v^{\prime}\simeq t_{1}\circ 2v_{4m+9}\simeq 2t_{1}\circ v_{4m +9}\) is null homotopic, where by [12] we have \(\Sigma^{4m+6}v^{\prime}\simeq 2v_{4m+9}\), stably. Thus the map \((\Sigma^{4m+1}v^{\prime})^{*}\) is the zero map. Therefore we get the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\) is also isomorphic to \(\mathbb{Z}_{2}\). 
Consider the homotopy cofibration sequence \[S^{4m+11}\overset{\theta}{\longrightarrow}\Sigma^{4m+1}Q_{2}\to\Sigma^{4m+1 }Q_{3}\to S^{4m+12}\to\Sigma^{4m+2}Q_{2},\] where \(\theta\) is the connecting map. By applying \(\widetilde{KSp}^{-2}\), we get the following exact sequence \[\widetilde{KSp}^{-2}(\Sigma^{4m+2}Q_{2})\to\widetilde{KSp}^{-2}(S^{4m+12}) \to\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{3})\to\widetilde{KSp}^{-2}(\Sigma^{4m +1}Q_{2})\] \[\overset{\theta^{*}}{\longrightarrow}\widetilde{KSp}^{-2}(S^{4m +11}).\] In cases where \(m\) is even and \(m\) is odd, we obtain the following exact sequences \[0\rightarrow\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{3})\rightarrow \mathbb{Z}_{2}\stackrel{{\theta^{*}}}{{\longrightarrow}}\mathbb{Z}_ {2},\] \[0\rightarrow\mathbb{Z}_{2}\rightarrow\widetilde{KSp}^{-2}( \Sigma^{4m+1}Q_{3})\rightarrow\mathbb{Z}_{2}\to 0,\] respectively. Let \(m\) be even. Let \(t_{1}\) be a generator of \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\cong[\Sigma^{4m+1}Q_{2},\Omega Sp( \infty)]\)\(\cong\mathbb{Z}_{2}\). By applying \(\pi_{4m+11}\) to homotopy cofibration sequence \(S^{4m+4}\rightarrow\Sigma^{4m+1}Q_{2}\to S^{4m+8}\) and homotopy groups of sphere [12] we can show that the map \(\theta\colon S^{4m+11}\rightarrow\Sigma^{4m+1}Q_{2}\) is nontrivial. Therefore the composition \(t_{2}\colon S^{4m+11}\stackrel{{\theta}}{{\longrightarrow}} \Sigma^{4m+1}Q_{2}\stackrel{{ t_{1}}}{{\longrightarrow}}\Omega Sp(\infty)\) is nontrivial and has order \(2\), so \(t_{2}\) generates \(\widetilde{KSp}^{-2}(S^{4m+11})\cong[S^{4m+11},\Omega Sp(\infty)]\cong \mathbb{Z}_{2}\). Since the map \(\theta^{*}\colon\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{2})\rightarrow\widetilde {KSp}^{-2}(S^{4m+11})\) sends \(t_{1}\) to \(t_{2}\) so \(\theta^{*}\) is injective. Therefore by exactness we can conclude that the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{3})\) is isomorphic to zero. When \(m\) is odd then the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{3})\) is isomorphic to \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) or \(\mathbb{Z}_{4}\). Now consider the following homotopy cofibration sequence \[S^{4(m+k)-1}\stackrel{{\theta}}{{\longrightarrow}}\Sigma^{4m+1} Q_{k-1}\rightarrow\Sigma^{4m+1}Q_{k}\to S^{4(m+k)}\rightarrow\Sigma^{4m+2}Q_{k-1}.\] By applying \(\widetilde{KSp}^{-2}\), we get the following exact sequence \[\widetilde{KSp}^{-2}(\Sigma^{4m+2}Q_{k-1})\rightarrow\widetilde{ KSp}^{-2}(S^{4(m+k)})\rightarrow\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k}) \rightarrow\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k-1})\] \[\stackrel{{\theta^{*}}}{{\longrightarrow}} \widetilde{KSp}^{-2}(S^{4(m+k)-1}).\] Now, when \(k\) is even, in cases where \(m\) is even and \(m\) is odd, we obtain the following exact sequences \[0\rightarrow\mathbb{Z}_{2}\rightarrow\widetilde{KSp}^{-2}( \Sigma^{4m+1}Q_{k})\to 0,\] \[0\rightarrow\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k}) \rightarrow\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\ \text{or}\ \ \mathbb{Z}_{4}\stackrel{{\theta^{*}}}{{\longrightarrow}}\mathbb{Z}_ {2},\] respectively. Thus when \(m\) is even then \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\) is isomorphic to \(\mathbb{Z}_{2}\). Let \(m\) be odd. Let \(t_{1}\) be a generator of \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k-1})\cong\mathbb{Z}_{2}\oplus\mathbb{Z}_ {2}\ \text{or}\ \ \mathbb{Z}_{4}\). Apply \(\pi_{4(m+k)-1}\) to homotopy cofibration sequence \(\Sigma^{4m+1}Q_{k-2}\rightarrow\Sigma^{4m+1}Q_{k-1}\to S^{4(m+k)-4}\). 
By induction and [12] we can show that the map \(\theta\colon S^{4(m+k)-1}\rightarrow\Sigma^{4m+1}Q_{k-1}\) is nontrivial, where for \(k=2\), the map \(\theta\) is isomorphic to \(\Sigma^{4m+1}v^{\prime}\). Thus the composition \(t_{2}\colon S^{4(m+k)-1}\stackrel{{\theta}}{{\longrightarrow}} \Sigma^{4m+1}Q_{k-1}\stackrel{{ t_{1}}}{{\longrightarrow}} \Omega Sp(\infty)\) is nontrivial and generates \(\widetilde{KSp}^{-2}(S^{4(m+k)-1})\cong[S^{4(m+k)-1},\Omega Sp(\infty)] \cong\mathbb{Z}_{2}\). Since the map \(\theta^{*}\colon\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k-1})\rightarrow \widetilde{KSp}^{-2}(S^{4(m+k)-1})\) sends \(t_{1}\) to \(t_{2}\) so \(\theta^{*}\) is surjective. Therefore by exactness we can conclude that the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\) is also isomorphic to \(\mathbb{Z}_{2}\). When \(k\) is odd, in cases where \(m\) is even and \(m\) is odd, we obtain the following exact sequences \[0\rightarrow\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\rightarrow \mathbb{Z}_{2}\rightarrow\mathbb{Z}_{2},\] \[0\rightarrow\mathbb{Z}_{2}\rightarrow\widetilde{KSp}^{-2}( \Sigma^{4m+1}Q_{k})\rightarrow\mathbb{Z}_{2}\to 0,\] respectively. Thus when \(m\) is even then by similar discussion for case \(Q_{3}\) we can show that the map \(\theta^{*}\colon\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k-1})\cong\mathbb{Z}_{2} \rightarrow\widetilde{KSp}^{-2}(S^{4(m+k)-1})\cong\mathbb{Z}_{2}\) is injective and thus we obtain the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\) is isomorphic to zero. When \(m\) is odd then the group \(\widetilde{KSp}^{-2}(\Sigma^{4m+1}Q_{k})\) is isomorphic to \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) or \(\mathbb{Z}_{4}\). **Lemma 2.3**.: _The following hold: \((a)\): there is an isomorphism \(\widetilde{KSp}^{-2}(X)\rightarrow\widetilde{KSp}^{-2}(S^{4m+2})\oplus \cdots\oplus\widetilde{KSp}^{-2}(S^{4n+2})\cong\mathbb{Z}^{n-m+1}\) such that \(\widetilde{KSp}^{-2}(X)=\mathbb{Z}\{\xi_{1},\xi_{2},\cdots,\xi_{n-m+1}\}\), where \(\xi_{1}\in\widetilde{KSp}^{-2}(S^{4m+2})\), \(\xi_{2}\in\widetilde{KSp}^{-2}(S^{4m+6})\), \(\cdots\), and also \(\xi_{n-m+1}\in\widetilde{KSp}^{-2}_{p}(S^{4n+2})\), \((b)\) there is an isomorphism \(\tilde{K}^{-2}(X)\rightarrow\tilde{K}^{-2}(S^{4m+2})\oplus\cdots\oplus\tilde {K}^{-2}(S^{4n+2})\cong\mathbb{Z}^{n-m+1}\) such that \(\tilde{K}^{-2}(X)=\mathbb{Z}\{\xi^{\prime}_{1},\xi^{\prime}_{2},\cdots,\xi^{ \prime}_{n-m+1}\}\), where \(\xi^{\prime}_{1}\in\tilde{K}^{-2}(S^{4m+2})\), \(\xi^{\prime}_{2}\in\tilde{K}^{-2}(S^{4m+6})\), \(\cdots\), and also \(\xi^{\prime}_{n-m+1}\in\tilde{K}^{-2}(S^{4n+2})\), \((c):\) if \(m\) is even integer then_ \[c^{\prime}(\xi_{1})=\xi^{\prime}_{1},\quad c^{\prime}(\xi_{2})=2\xi^{\prime}_{2 },\quad\cdots,\quad c^{\prime}(\xi_{n-m+1})=\left\{\begin{array}{ll}\xi^{ \prime}_{n-m+1}&\mbox{$n$ is even}\;,\\ \\ 2\xi^{\prime}_{n-m+1}&\mbox{$n$ is odd}\;,\end{array}\right.\] _and if \(m\) is odd integer then_ \[c^{\prime}(\xi_{1})=2\xi^{\prime}_{1},\quad c^{\prime}(\xi_{2})=\xi^{\prime}_{ 2},\quad\cdots,\quad c^{\prime}(\xi_{n-m+1})=\left\{\begin{array}{ll}\xi^{ \prime}_{n-m+1}&\mbox{$n$ is even}\;,\\ \\ 2\xi^{\prime}_{n-m+1}&\mbox{$n$ is odd}\;.\end{array}\right.\] Proof.: First, we consider the case \(Q_{2}\). Put \(L_{1}=\widetilde{KSp}^{-2}(S^{4m+3})\) and \(L_{2}=\widetilde{KSp}^{-2}(S^{4m+5})\). 
The cofibration sequence \[S^{4m+2}\to S^{4m-1}\wedge Q_{2}\to S^{4m+6}\] induces the following commutative diagram of exact sequences where \(\sigma_{i}=1,\sigma_{j}=2\) when \(m\) is even and \(\sigma_{i}=2,\sigma_{j}=1\) when \(m\) is odd, (for example, see [6]). Note that there is an isomorphism \(\tilde{K}^{-2}(S^{2i})\cong\mathbb{Z}\). We know that \(L_{1}\) is isomorphic to \(\mathbb{Z}_{2}\) if \(m\) is even and is zero if \(m\) is odd. Also, we have that \(L_{2}\) is zero. So when \(m\) is even then we get the following exact sequence \[\mathbb{Z}_{2}\rightarrow\mathbb{Z}\rightarrow\widetilde{KSp}^{-2}(S^{4m-1} \wedge Q_{2})\rightarrow\mathbb{Z}\to 0.\] Thus by the exactness we get \(\widetilde{KSp}^{-2}(S^{4m-1}\wedge Q_{2})\) is isomorphic to \(\mathbb{Z}\oplus\mathbb{Z}\). When \(m\) is odd then we get the following exact sequence \[0\rightarrow\mathbb{Z}\rightarrow\widetilde{KSp}^{-2}(S^{4m-1}\wedge Q_{2}) \rightarrow\mathbb{Z}\to 0.\] Thus this exact sequence splits and we get \(\widetilde{KSp}^{-2}(S^{4m-1}\wedge Q_{2})\) is isomorphic to \(\mathbb{Z}\oplus\mathbb{Z}\). Note that in both cases \(\tilde{K}^{-2}(S^{4m-1}\wedge Q_{2})\) is a free abelian group isomorphic to \(\mathbb{Z}\oplus\mathbb{Z}\). Therefore we have \[\widetilde{KSp}^{-2}(S^{4m-1}\wedge Q_{2})=\mathbb{Z}\{\xi_{1},\xi_{2}\},\quad \tilde{K}^{-2}(S^{4m-1}\wedge Q_{2})=\mathbb{Z}\{\xi_{1}^{\prime},\xi_{2}^{ \prime}\},\] where \(\xi_{1}\in\widetilde{KSp}^{-2}(S^{4m+2})\), \(\xi_{2}\in\widetilde{KSp}^{-2}(S^{4m+6})\), \(\xi_{1}^{\prime}\in\tilde{K}^{-2}(S^{4m+2})\) and \(\xi_{2}^{\prime}\in\tilde{K}^{-2}(S^{4m+6})\). Put \(L_{3}=\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\). There is a cofibration sequence \[S^{4m-1}\wedge Q_{n-m}\to X\to S^{4n+2}.\] This sequence induces the following commutative diagram of exact sequences where \(\sigma_{k}=1\) when \(n\) is even and \(\sigma_{k}=2\) when \(n\) is odd. By induction, we show that the group \(L_{3}\) is isomorphic to \(\mathbb{Z}_{2}\). By applying \(\widetilde{KSp}^{-2}\) to the homotopy cofibration sequence \[S^{4m+6}\stackrel{{\Sigma^{4m}v^{\prime}}}{{\longrightarrow}}S^{4 m+3}\to S^{4m}\wedge Q_{2}\to S^{4m+7}\stackrel{{\Sigma^{4m+1}v^{ \prime}}}{{\longrightarrow}}S^{4m+4},\] we get the following exact sequence \[\widetilde{KSp}^{-2}(S^{4m+4})\stackrel{{(\Sigma^{4m+1}v^{ \prime})^{*}}}{{\longrightarrow}}\widetilde{KSp}^{-2}(S^{4m+7})\to\widetilde{ KSp}^{-2}(S^{4m}\wedge Q_{2}) \to\widetilde{KSp}^{-2}(S^{4m+3})\] \[\to\widetilde{KSp}^{-2}(S^{4m+6}).\] Now, in cases where \(m\) is even and \(m\) is odd, we obtain the following exact sequences \[0\to\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{2})\to\mathbb{Z}_{2}\to \mathbb{Z},\] \[\mathbb{Z}_{2}\stackrel{{(\Sigma^{4m+1}v^{\prime})^{ *}}}{{\longrightarrow}}\mathbb{Z}_{2}\to\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{ 2})\to 0,\] respectively. Since \(\mathbb{Z}\) is torsion-free, any map \(\mathbb{Z}_{2}\to\mathbb{Z}\) is zero map. When \(m\) is even then by exactness we obtain the group \(\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{2})\) is isomorphic to \(\mathbb{Z}_{2}\). When \(m\) is odd then by proof of Lemma 2.2, we know that the map \((\Sigma^{4m+1}v^{\prime})^{*}\) is zero map. Therefore we can conclude that the group \(\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{2})\) is also isomorphic to \(\mathbb{Z}_{2}\). 
Consider the cofibration sequence \[S^{4n-2}\stackrel{{\theta}}{{\longrightarrow}}S^{4m}\wedge Q_{n- m-1}\to S^{4m}\wedge Q_{n-m}\to S^{4n-1}\stackrel{{\Sigma \theta}}{{\longrightarrow}}S^{4m+1}\wedge Q_{n-m-1}.\] This sequence induces an exact sequence \[\widetilde{KSp}^{-2}(S^{4m+1}\wedge Q_{n-m-1})\stackrel{{( \Sigma\theta)^{*}}}{{\longrightarrow}}\widetilde{KSp}^{-2}(S^{4n-1})\to \widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\] \[\to\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m-1}).\] When \(n\) is even then we get the following exact sequence \[0\rightarrow\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\rightarrow\mathbb{Z}_{2} \rightarrow\mathbb{Z}.\] Therefore by exactness we obtain the group \(\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\) is isomorphic to \(\mathbb{Z}_{2}\). When \(n\) is odd then by Lemma 2.2, we know that the group \(\widetilde{KSp}^{-2}(S^{4m+1}\wedge Q_{n-m-1})\) is isomorphic to \(\mathbb{Z}_{2}\) when \(m\) is even and is isomorphic to \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) or \(\mathbb{Z}_{4}\) when \(m\) is odd. Therefore in cases where \(m\) is even and \(m\) is odd, we obtain the following exact sequences \[\mathbb{Z}_{2}\xrightarrow{(\Sigma\theta)^{*}}\mathbb{Z}_{2} \rightarrow\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\rightarrow\mathbb{Z}_ {2}\rightarrow\mathbb{Z},\] \[\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\ or\ \ \mathbb{Z}_{4} \xrightarrow{(\Sigma\theta)^{*}}\mathbb{Z}_{2}\rightarrow\widetilde{KSp} ^{-2}(S^{4m}\wedge Q_{n-m})\rightarrow\mathbb{Z}_{2}\rightarrow\mathbb{Z},\] respectively. In case \(m\) is even, let \(t_{1}\) be a generator of \(\widetilde{KSp}^{-2}(S^{4m+1}\wedge Q_{n-m-1})\cong\mathbb{Z}_{2}\). By similar discussion in proof of Lemma 2.2 we can show that the map \(S^{4n-1}\xrightarrow{\Sigma\theta}S^{4m+1}\wedge Q_{n-m-1}\) is nontrivial. Thus the composition \(t_{2}\colon S^{4n-1}\xrightarrow{\Sigma\theta}S^{4m+1}\wedge Q_{n-m-1} \xrightarrow{t_{1}}\Omega Sp(\infty)\) is nontrivial and generates \(\widetilde{KSp}^{-2}(S^{4n-1})\cong[S^{4n-1},\Omega Sp(\infty)]\cong\mathbb{Z }_{2}\). Now, since the map \((\Sigma\theta)^{*}\colon\widetilde{KSp}^{-2}(S^{4m+1}\wedge Q_{n-m-1}) \rightarrow\widetilde{KSp}^{-2}(S^{4n-1})\) sends \(t_{1}\) to \(t_{2}\) so \((\Sigma\theta)^{*}\) is injective. Thus the group \(\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\) is isomorphic to \(\mathbb{Z}_{2}\). In case \(m\) is odd, by similar discussion in proof of Lemma 2.2 we can show that the map \((\Sigma\theta)^{*}\) is surjective. Therefore we can conclude that the group \(\widetilde{KSp}^{-2}(S^{4m}\wedge Q_{n-m})\) is also isomorphic to \(\mathbb{Z}_{2}\). Now, since the group \(L_{3}\) is isomorphic to \(\mathbb{Z}_{2}\) we can rewrite the upper row of the exact sequence in diagram \((*)\) as the following exact sequence \[0\rightarrow\widetilde{KSp}^{-2}(S^{4n+2})\rightarrow\widetilde{KSp}^{-2}( X)\rightarrow\widetilde{KSp}^{-2}(S^{4m-1}\wedge Q_{n-m})\to 0.\] Therefore the upper and the down rows of the exact sequences in diagram \((*)\) splits. Thus by induction, we can conclude that \(\widetilde{KSp}^{-2}(X)\) and \(\tilde{K}^{-2}(X)\) are free abelian groups that are isomorphic to \(\mathbb{Z}^{n-m+1}\) with their basis \(\xi_{1},\xi_{2},\cdots,\xi_{n-m+1}\) and \(\xi^{\prime}_{1},\xi^{\prime}_{2},\cdots,\xi^{\prime}_{n-m+1}\), respectively, where \(\xi_{n-m+1}\in\widetilde{KSp}^{-2}(S^{4n+2})\) and \(\xi^{\prime}_{n-m+1}\in\tilde{K}^{-2}(S^{4n+2})\). 
Now according to the definition of the maps \(c^{\prime}\), we can choose \(\xi_{1}\), \(\xi^{\prime}_{1}\), \(\xi_{2}\), \(\xi^{\prime}_{2}\), \(\cdots\), \(\xi_{n-m+1}\) and \(\xi^{\prime}_{n-m+1}\) such that if \(m\) is even integer then we have \[c^{\prime}(\xi_{1})=\xi^{\prime}_{1},\quad c^{\prime}(\xi_{2})=2\xi^{\prime}_{2 },\quad\cdots,\quad c^{\prime}(\xi_{n-m+1})=\left\{\begin{array}{ll}\xi^{ \prime}_{n-m+1}&n\text{ is even },\\ \\ 2\xi^{\prime}_{n-m+1}&n\text{ is odd },\end{array}\right.\] and if \(m\) is odd integer then we have \[c^{\prime}(\xi_{1})=2\xi^{\prime}_{1},\quad c^{\prime}(\xi_{2})=\xi^{\prime}_{2 },\quad\cdots,\quad c^{\prime}(\xi_{n-m+1})=\left\{\begin{array}{ll}\xi^{ \prime}_{n-m+1}&n\text{ is even },\\ \\ 2\xi^{\prime}_{n-m+1}&n\text{ is odd }.\end{array}\right.\] Consider the map \(c^{\prime}\colon Sp(n-m+1)\to SU(2n-2m+2)\). By restriction of the map \(c^{\prime}\) to their quasi-projective spaces, we get to map \(\bar{c}^{\prime}\colon Q_{n-m+1}\to\Sigma\mathbb{C}P^{2n-2m+1}\). The cohomologies of \(Q_{n-m+1}\) and \(\Sigma\mathbb{C}P^{2n-2m+1}\) are given by \[H^{*}(Q_{n-m+1})=\mathbb{Z}\{\bar{y}_{3},\bar{y}_{7},\cdots,\bar{ y}_{4n-4m+3}\},\] \[H^{*}(\Sigma\mathbb{C}P^{2n-2m+1})=\mathbb{Z}\{\bar{x}_{3},\bar{ x}_{5},\bar{x}_{7},\cdots,\bar{x}_{4n-4m+3}\},\] such that \(\bar{c}^{\prime}(\bar{x}_{3})=\bar{y}_{3}\), \(\bar{c}^{\prime}(\bar{x}_{5})=0\), \(\bar{c}^{\prime}(\bar{x}_{7})=\bar{y}_{7}\), \(\cdots\), and \(\bar{c}^{\prime}(\bar{x}_{4n-4m+3})=\bar{y}_{4n-4m+3}\). We denote a generator of \(\tilde{K}(S^{2n})\cong\mathbb{Z}\) by \(\zeta_{n}\). Recall that \[H^{*}(\mathbb{C}P^{2n-2m+1})=\mathbb{Z}[t]/(t^{2n-2m+2}),\qquad K(\mathbb{C}P ^{2n-2m+1})=\mathbb{Z}[x]/(x^{2n-2m+2}),\] where \(|t|=2\). Note that \(\tilde{K}^{-2}(\Sigma^{4m}\mathbb{C}P^{2n-2m+1})\cong\tilde{K}^{0}(\Sigma^{4m +2}\mathbb{C}P^{2n-2m+1})\) is a free abelian group generated by \(\zeta_{2m+1}\otimes x\), \(\zeta_{2m+1}\otimes x^{2}\), \(\cdots\), \(\zeta_{2m+1}\otimes x^{2n-2m+1}\), with the following Chern characters \[ch_{2n+1}(\zeta_{2m}\otimes x)=ch_{2m}(\zeta_{2m})ch_{2n-2m+1}(x)=\frac{1}{(2n -2m+1)!}\sigma^{4m}t^{2n-2m+1},\] \[ch_{2n+1}(\zeta_{2m}\otimes x^{2})=ch_{2m}(\zeta_{2m})ch_{2n-2m+1}(x^{2})=A \sigma^{4m},\] \[ch_{2n+1}(\zeta_{2m}\otimes x^{3})=ch_{2m}(\zeta_{2m})ch_{2n-2m+1}(x^{3})=B \sigma^{4m},\] \[\vdots\] \[ch_{2n+1}(\zeta_{2m}\otimes x^{2n-2m+1})=ch_{2m}(\zeta_{2m})ch_{2n-2m+1}(x^{2n- 2m+1})=C\sigma^{4m},\] where \(A\), \(B\) and \(C\) are equal to \[A=ch_{2n-2m+1}(x^{2})=\sum_{\begin{subarray}{c}i_{1}+i_{2}=2n-2m+1,\\ 1\leq i_{1}\leq n-m\end{subarray}}ch_{i_{1}}xch_{i_{2}}x=\sum_{k=1}^{n-m}\frac{ 1}{k!(2n-2m+1-k)!}t^{2n-2m+1},\] \[B=ch_{2n-2m+1}(x^{3})= ch_{1}x\sum_{\begin{subarray}{c}i_{1}+i_{2}=2n-2m,\\ 1\leq i_{1}\leq i_{2}\end{subarray}}ch_{i_{1}}xch_{i_{2}}x+ch_{2}x\sum_{ \begin{subarray}{c}i_{1}+i_{2}=2n-2m-1,\\ 2\leq i_{1}\leq i_{2}\end{subarray}}ch_{i_{1}}xch_{i_{2}}x+\cdots\] \[+ch_{n-m-1}x\sum_{\begin{subarray}{c}i_{1}+i_{2}=n-m+2,\\ n-m-1\leq i_{1}\leq i_{2}\end{subarray}}ch_{i_{1}}xch_{i_{2}}x+\sum_{ \begin{subarray}{c}i_{1}+i_{2}=2n-2m+1,\\ i_{1},i_{2}\geq 1\end{subarray}}ch_{i_{1}}xch_{i_{2}}x^{2},\] \[C=ch_{2n-2m+1}(x^{2n-2m+1})= ch_{1}x\sum_{\begin{subarray}{c}i_{1}+\cdots+i_{2n-2m}=2n-2m,\\ 1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{2n-2m}\end{subarray}}ch_{i_{1}}x^{i_{1}} \cdots ch_{i_{2n-2m}}x^{i_{2n-2m}}\] \[+ch_{2}x^{2}\sum_{\begin{subarray}{c}i_{1}+\cdots+i_{n-m-1}=2n-2m- 1,\\ 2\leq i_{1}\leq i_{2}\leq\cdots\leq 
i_{n-m-1}\end{subarray}}ch_{i_{1}}x^{i_{1}} \cdots ch_{i_{n-m-1}}x^{i_{n-m-1}}\] \[+ch_{3}x^{3}\sum_{\begin{subarray}{c}i_{1}+\cdots+i_{n-m-2}=2n-2m- 2,\\ 3\leq i_{1}\leq i_{2}\leq\cdots\leq i_{n-m-2}\end{subarray}}ch_{i_{1}}x^{i_{1}} \cdots ch_{i_{n-m-2}}x^{i_{n-m-2}}+\cdots\] \[+ch_{n-m}x^{n-m}\sum_{\begin{subarray}{c}i_{1}=n-m+1,\\ i_{1}\geq n-m\end{subarray}}ch_{i_{1}}x^{i_{1}}.\] Consider the map \(\bar{c}^{\prime}\colon\tilde{K}^{-2}(\Sigma^{4m}\mathbb{C}P^{2n-2m+1})\to \tilde{K}^{-2}(S^{4m-1}\wedge Q_{n-m+1})\), we can put \(\xi^{\prime}_{1}\), \(\xi^{\prime}_{2}\), \(\cdots\) and \(\xi^{\prime}_{n-m+1}\) such that \(\xi^{\prime}_{1}=\bar{c}^{\prime}(\zeta_{2m}\otimes x)\), \(\xi^{\prime}_{2}=\bar{c}^{\prime}(\zeta_{2m}\otimes x^{3})\), \(\cdots\), and \(\xi^{\prime}_{n-m+1}=\bar{c}^{\prime}(\zeta_{2m}\otimes x^{2n-2m+1})\). We have the following proposition. **Proposition 2.4**.: _The image of \(\psi\) is generated by_ \[\left\{\begin{array}{ll}\frac{(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y }_{4n-4m+3}&\text{if $m$ is even}\;,\\ \\ \frac{2(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\text{if $m$ is odd}\;. \end{array}\right.\] Proof.: Consider the following commutative diagram \[\begin{array}{ccc}\tilde{K}^{-2}(\Sigma^{4m}\mathbb{C}P^{2n-2m+1})& \xrightarrow{\varphi^{\prime}}&H^{4n+2}(\Sigma^{4m}\mathbb{C}P^{2n-2m+1})\\ \tilde{c}^{\prime}\Big{\downarrow}&\raisebox{-10.0pt}{\includegraphics[]{ ccc}}&\cong\\ \tilde{K}^{-2}(S^{4m-1}\wedge Q_{n-m+1})&\xrightarrow{\varphi}&H^{4n+2}(S^{4m-1} \wedge Q_{n-m+1}),\end{array} \tag{2.3}\] where the map \(\varphi^{\prime}\) is defined similarly to the map \(\varphi\). By definition of the map \(\varphi^{\prime}\), we have \[\varphi^{\prime}(\zeta_{2m}\otimes x)=(2n+1)!ch_{2n+1}(\zeta_{2m}\otimes x)= \frac{(2n+1)!}{(2n-2m+1)!}\sigma^{4m}t^{2n-2m+1},\] \[\varphi^{\prime}(\zeta_{2m}\otimes x^{2})=(2n+1)!ch_{2n+1}(\zeta_{2m}\otimes x ^{2})=A(2n+1)!\sigma^{4m},\] \[\varphi^{\prime}(\zeta_{2m}\otimes x^{3})=(2n+1)!ch_{2n+1}(\zeta_{2m}\otimes x ^{3})=B(2n+1)!\sigma^{4m},\] \[\vdots\] \[\varphi^{\prime}(\zeta_{2m}\otimes x^{2n-2m+1})=(2n+1)!ch_{2n+1}(\zeta_{2m} \otimes x^{2n-2m+1})=C(2n+1)!\sigma^{4m}.\] Therefore according to the commutativity of diagram (2.3), we have \[\varphi(\xi_{1}^{\prime})=\varphi^{\prime}(\zeta_{2m}\otimes x)=\frac{(2n+1)!}{(2n -2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3},\] \[\varphi(\xi_{2}^{\prime})=\varphi^{\prime}(\zeta_{2m}\otimes x^{3})=B^{\prime}(2 n+1)!\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3},\] \[\vdots\] \[\varphi(\xi_{n-m+1}^{\prime})=\varphi^{\prime}(\zeta_{2m}\otimes x^{2n-2m+1})= C^{\prime}(2n+1)!\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3},\] where \(B=B^{\prime}t^{2n-2m+1}\) and \(C=C^{\prime}t^{2n-2m+1}\). 
Thus by the commutativity of diagram (2.2), we get \[\psi(\xi_{1})=\varphi(c^{\prime}(\xi_{1}))=\pm\left\{\begin{array}{ll}\frac {(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\mbox{if $m$ is even },\\ \\ \frac{2(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\mbox{if $m$ is odd }, \end{array}\right.\] \[\psi(\xi_{2})=\varphi(c^{\prime}(\xi_{2}))=\pm\left\{\begin{array}{ll}2B^{ \prime}(2n+1)!\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\mbox{if $m$ is even },\\ \\ B^{\prime}(2n+1)!\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\mbox{if $m$ is odd },\end{array}\right.\] \[\vdots\] \[\psi(\xi_{n-m+1})=\varphi(c^{\prime}(\xi_{n-m+1}))=\left\{\begin{array}{ll}-( 2n+1)!C^{\prime}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\mbox{if $n$ is even },\\ \\ 2(2n+1)!C^{\prime}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}&\mbox{if $n$ is odd }.\end{array}\right.\] Since the coefficients \(B^{\prime},\cdots,C^{\prime}\) are divisible by \(\frac{1}{(2n-2m+1)!}\), the image of \(\psi\) is included in the submodule generated by \(\frac{(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}\) if \(m\) is even and is included in the submodule generated by \(\frac{2(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}\) if \(m\) is odd. Thus we can conclude that \[Im\psi\cong\left\{\begin{array}{ll}\mathbb{Z}\{\frac{(2n+1)!}{(2n-2m+1)!} \sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}\}&\mbox{if $m$ is even },\\ \\ \mathbb{Z}\{\frac{2(2n+1)!}{(2n-2m+1)!}\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}\} &\mbox{if $m$ is odd }.\end{array}\right.\] Therefore by Lemma 2.1 and Proposition 2.4, we get the following theorem. **Theorem 2.5**.: _There is an isomorphism_ \[[X,Sp(n)]\cong\left\{\begin{array}{ll}\mathbb{Z}_{\frac{(2n+1)!}{(2n-2m+1)!} }&\mbox{if $m$ is even },\\ \\ \mathbb{Z}_{\frac{2(2n+1)!}{(2n-2m+1)!}}&\mbox{if $m$ is odd }.\end{array}\right.\qed\] Proof of Theorem 1.1 In this section, we will prove Theorem 1.1. We recall that \(X=S^{4m-1}\wedge Q_{n-m+1}\). In the following theorem that was proved in [10], we obtain the commutators in the group \([X,Sp(n)]\). **Theorem 3.1**.: _Let \(X\) be a \(CW\)-complex with dim \(X\leq 4n+2\). For \(\alpha,\beta\in Sp_{n}(X)\), the commutator \([\alpha,\beta]\) in \(Sp_{n}(X)\) is given as_ \[[\alpha,\beta]=\iota([\sum_{i+j=n+1}\alpha^{*}(y_{4i-1})\beta^{*}(y_{4j-1})]). \quad\Box\] Put \(X^{\prime}=S^{4m-1}\times Q_{n-m+1}\). Consider the following compositions \[S^{4m-1}\times Q_{n-m+1}\stackrel{{ p_{1}}}{{\longrightarrow}}S ^{4m-1}\stackrel{{\varepsilon_{m,n}}}{{\longrightarrow}}Sp(n),\] \[S^{4m-1}\times Q_{n-m+1}\stackrel{{ p_{2}}}{{\longrightarrow}}Q _{n-m+1}\stackrel{{\varepsilon_{m,n}}}{{\longrightarrow}}Sp(n),\] where the maps \(p_{1}\) and \(p_{2}\) are the first and the second projections. Put \(\varepsilon_{m,n}\circ p_{1}=\alpha\) and \(\epsilon_{m,n}\circ p_{2}=\beta\). The commutator \([\alpha,\beta]\in[X^{\prime},Sp(n)]\) is the following composition \[X^{\prime}\stackrel{{\bar{\Delta}}}{{\longrightarrow}}X^{ \prime}\wedge X^{\prime}\stackrel{{\alpha\wedge\beta}}{{ \longrightarrow}}Sp(n)\wedge Sp(n)\stackrel{{\gamma}}{{ \longrightarrow}}Sp(n),\] where \(\bar{\Delta}\) and \(\gamma\) are the reduced diagonal map and the commutator map. Consider the following diagram where \(\Delta\) is the diagonal map and \(q:X^{\prime}\to X\) is the quotient map. 
Since the composition \[X^{\prime}=S^{4m-1}\times Q_{n-m+1}\stackrel{{\Delta}}{{ \longrightarrow}}X^{\prime}\times X^{\prime}\stackrel{{ p_{1}\times p_{2}}}{{ \longrightarrow}}S^{4m-1}\times Q_{n-m+1}=X^{\prime}\] is \(1\), the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle=\gamma\circ(\varepsilon_{m,n} \wedge\epsilon_{m,n})\) is equal to the commutator \([\alpha,\beta]\) in the group \([X,Sp(n)]\). So by Theorem 3.1, we have \[[\alpha,\beta]=\iota([\varepsilon_{m,n}{}^{*}(y_{4m-1})\otimes\epsilon_{m,n}{ }^{*}(y_{4n-4m+3})])=\iota([\sigma^{4m-1}\otimes\bar{y}_{4n-4m+3}]).\] Therefore by Theorem 2.5, when \(m\) is even then the order of the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\) is equal to \(\frac{(2n+1)!}{(2n-2m+1)!}\) and when \(m\) is odd then the order of the Samelson product \(\langle\varepsilon_{m,n},\epsilon_{m,n}\rangle\) is equal to \(\frac{2(2n+1)!}{(2n-2m+1)!}\). \(\quad\Box\)
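For instance, evaluating the formula of Theorem 1.1 at \(m=1\) (an odd integer), which is the case recovered from [6], and at \(m=2\) gives
\[\frac{2(2n+1)!}{(2n-1)!}=2(2n)(2n+1)=4n(2n+1),\qquad\frac{(2n+1)!}{(2n-3)!}=(2n+1)(2n)(2n-1)(2n-2),\]
respectively; these are direct specializations of the statement above and contain no additional information.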
2305.05432
WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset
Webpages have been a rich resource for language and vision-language tasks. Yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage 2M (WikiWeb2M) suite; the first to retain the full set of images, text, and structure data available in a page. WikiWeb2M can be used for tasks like page description generation, section summarization, and contextual image captioning.
Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo
2023-05-09T13:20:59Z
http://arxiv.org/abs/2305.05432v1
# WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset ###### Abstract Webpages have been a rich resource for language and vision-language tasks. Yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage 2M (WikiWeb2M)10 suite; the first to retain the full set of images, text, and structure data available in a page. WikiWeb2M can be used for tasks like page description generation, section summarization, and contextual image captioning. Footnote 10: Data is readily available at [https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md](https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md) Multimodal Data, Webpages, Machine Learning, Text Generation, Vision and Language ## Introduction Webpages are multimodal, structured content which can been used for pretraining and fine-tuning. Large scale noisy datasets scraped from the web have been used to pretrain large language or contrastive models (Raffel et al., 2020; Jia et al., 2021). Downstream tasks built from webpages have included instruction following, image captioning, news captioning, image-sentence retrieval, and image-article retrieval (Gur et al., 2022; Biten et al., 2019; Tan et al., 2022). Yet little prior work has studied tasks to evaluate multimodal webpage understanding itself. Many classification and generation problems could be studied with webpages: taxonomic webpage classification, webpage retrieval, web image captioning, and webpage summarization. However, to date there is no open source, multimodal dataset that retains all webpage content. _E.g._, the Wikipedia Image Text (WIT) dataset (Srinivasan et al., 2021) does not keep HTML structure and misses out on many text sections, as shown in Table 1. Unified text, image, and structure data would allow for greater study of multimodal content understanding with many-to-many text and image relationships. As a result, we propose the new Wikipedia Webpage (WikiWeb2M) dataset of over 2M pages, which unifies webpage content to include all text, images, and their location (_e.g._, section index) in one example. Table 2 (left) includes the number of pages, sections, and images, along with sample counts for downstream tasks. Figure 1 (left) shows how one webpage can be used for page description, section summarization, and contextual captioning. These tasks can improve interaction with web content, _e.g._, a page description may provide a user who is blind more agency by allowing them to preview content before listening to the entire body with a screen reader (Vtyurina et al., 2019). On top of aiding assistive technology, tasks like contextual image captioning and section summarization can be used for modern content generation, as there is growing interest in providing multimodal snippets from the web (Nkemelu et al., 2023). ## The WikiWeb2M Dataset WikiWeb2M is created by rescraping the \(\sim\)2M English articles in WIT. Each webpage sample includes the page URL and title, section titles, text, and indices, images and their captions, and more; see Figure 1 (right). This differs from WIT which defined individual samples as image-caption pairs with additional metadata (_e.g._, originating section title). We shuffle the WIT webpages to define a random 1.8M/100K/100K train/val/test split. 
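Concretely, one page-level sample can be pictured as the nested structure sketched below. The field names and layout here are illustrative only and need not match the released data format exactly; the point is that all sections, their text, and their images live together in a single example.

```python
# Hypothetical, simplified view of one WikiWeb2M page sample (field names
# are illustrative, not necessarily those of the released dataset).
example_page = {
    "page_url": "https://en.wikipedia.org/wiki/Example_article",
    "page_title": "Example article",
    "page_description": "Lead description of the article ...",
    "sections": [
        {
            "index": 0,
            "title": "Section title",
            "text": "Body text of the section ...",
            "images": [
                {"image_url": "https://upload.wikimedia.org/...",
                 "caption": "Reference caption for this image."},
            ],
        },
        # ... one entry per content section, preserving page order ...
    ],
}
```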
Table 2 (left) shows the number of pages, sections, and images in our dataset after additional processing. In particular, we only retain content sections (_e.g._, not the "See Also" section). For images, we keep JPEG and PNG and require the dimensions to be greater than 1px to allow for a greater diversity of images to be included (_e.g._, icons)1. We include metadata on image dimensions to allow for additional filtering.

Footnote 1: We release image URLs, where they can be fetched.

In Table 1, we report the number of sections and images compared to the English subset of WIT. We add nearly 1M total images to the dataset by keeping the images on a webpage regardless of whether they have image captions. We break down section counts by type: structural, heading, text, image, and both text and image. Structural and heading sections do not contain immediate section text (the former have subsections). For heading sections, the section content either linked to a different article, was empty, or only had tables. A notable 6.8M text sections are in WikiWeb2M, none of which were available in WIT.

### The WikiWeb2M Tasks

We now describe WikiWeb2M's suite of multimodal generation tasks and task data processing. Table 2 (left) shows data statistics and (right) downstream task performance when using T5 and ViT base models (Raffel et al., 2020; Dosovitskiy et al., 2021).

**Page Description Generation** The goal is to generate a description of a page given the rest of the webpage's image, text, and structure. We use the Wikipedia-provided page descriptions for each article. We retain a page if its description has at least five words. A small subset of Wikipedia pages are lists2; we remove pages that explicitly have "list_of" in their URL or have fewer than two rich content sections.

Footnote 2: For example, https://en.wikipedia.org/wiki/List_of_mammals_of_the_United_States

**Section Summarization** The goal is to generate a sentence that highlights the section's content, given images and (non-summary) text in the section and other context sections. We take advantage of the leading sentence bias and use the first sentence of a section as its pseudo summary. In a small pilot, a majority of human annotators also deemed the first sentence a reasonable summary. A section serves as a target section if it has at least five sentences, contains neither a table nor a list, and is not the root section. We filter out the root because the root (first) section is often the page description.

**Contextual Image Captioning** (Nguyen et al., 2022) proposed Wikipedia image captioning given the image's webpage context. With WikiWeb2M, we can now utilize the entire webpage context for the image instead of just the section it originally came from. We only allow target images to be those from WIT to ensure quality captions. Following prior work, we also use the reference description as the ground truth caption to be generated and require that it has at least three words. However, we do not input the attribution description, as it often contains large overlap with the reference description.

**Results** Table 2 (right) shows results for each task. For contextual image captioning and section summarization, we verify that WikiWeb2M's additional sections (compared to only inputting the target section for image captioning or summarization) improve task performance; page description generation is only made possible with our dataset.
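As an illustration of the section summarization setup described above, the snippet below builds one (input, target) pair from a candidate section using the stated filters (at least five sentences, no table or list, not the root section). It is a sketch only: the naive sentence splitter and field names are placeholders, not the paper's actual preprocessing pipeline.

```python
import re

def build_section_summarization_example(section_text, is_root, has_table_or_list):
    """First sentence becomes the pseudo summary (target); the remaining
    sentences become the model input. Returns None for invalid targets."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", section_text) if s.strip()]
    if is_root or has_table_or_list or len(sentences) < 5:
        return None
    return {"inputs": " ".join(sentences[1:]), "targets": sentences[0]}
```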
2307.04155
The WQN algorithm for EEG artifact removal in the absence of scale invariance
Electroencephalogram (EEG) signals reflect brain activity across different brain states, characterized by distinct frequency distributions. Through multifractal analysis tools, we investigate the scaling behaviour of different classes of EEG signals and artifacts. We show that brain states associated with sleep and general anaesthesia are not in general characterized by scale invariance. The lack of scale invariance motivates the development of artifact removal algorithms capable of operating independently at each scale. We examine here the properties of the wavelet quantile normalization (WQN) algorithm, a recently introduced adaptive method for real-time correction of transient artifacts in EEG signals. We establish general results regarding the regularization properties of the WQN algorithm, showing how it can eliminate singularities introduced by artifacts, and we compare it to traditional thresholding algorithms. Furthermore, we show that the algorithm's performance is independent of the wavelet basis. We finally examine its continuity and boundedness properties and illustrate its distinctive non-local action on the wavelet coefficients through pathological examples.
Matteo Dora, Stéphane Jaffard, David Holcman
2023-07-09T11:56:26Z
http://arxiv.org/abs/2307.04155v1
# The WQN algorithm for EEG artifact removal in the absence of scale invariance 1 ###### Abstract Electroencephalogram (EEG) signals reflect brain activity across different brain states, characterized by distinct frequency distributions. Through multifractal analysis tools, we investigate the scaling behaviour of different classes of EEG signals and artifacts. We show that brain states associated to sleep and general anaesthesia are not in general characterized by scale invariance. The lack of scale invariance motivates the development of artifact removal algorithms capable of operating independently at each scale. We examine here the properties of the wavelet quantile normalization algorithm, a recently introduced adaptive method for real-time correction of transient artifacts in EEG signals. We establish general results regarding the regularization properties of the WQN algorithm, showing how it can eliminate singularities introduced by artefacts, and we compare it to traditional thresholding algorithms. Furthermore, we show that the algorithm performance is independent of the wavelet basis. We finally examine its continuity and boundedness properties and illustrate its distinctive non-local action on the wavelet coefficients through pathological examples. ## 1 Introduction For nearly a century, brain activity has been measured using electroencephalography (EEG), a technique that uses electrodes placed on the scalp of a patient to record the electrical activity of the brain [1]. This physiological signal reflects the collective activity of neuronal populations [2]. By analyzing the statistical and spectral properties [3] of the EEG, it is possible to identify transient brain oscillations within a frequency range between zero and hundreds of Hz that reflects key cognitive events, such as specific responses to sensory stimulations, learning and memory, sleep stages, meditation, coma, and more. Because of its non-invasive nature, EEG recordings have been widely adopted in the clinical setting for screening or monitoring tasks. An example is general anaesthesia (GA), a procedure consisting in placing the brain into an artificial but reversible coma state, which can now be routinely monitored in real-time during surgery by recording the EEG signal from few electrodes. This setting provides a continuous feedback about the depth of anaesthesia, allowing for accurate control of the anaesthetic dose required to keep the patient in a safe unconscious state. However, the amplitude of the EEG signal typically varies in the microvolt range, making the EEG highly susceptible to contamination by artifacts originating from several types of sources. The issue is particularly pronounced in the clinical setting, where there is a limited control over the environment and the potential for artifact contamination. Artifacts include noise from electrical equipments, muscle contractions, eye movements, or small displacements of the electrodes, that alter the EEG signal. Eliminating artifacts from the EEG signal is thus a crucial concern. Wavelet-based methods [4; 5; 6] have been used extensively in this regard by thresholding coefficients to remove artifacts from the EEG signal, taking advantage of the different properties of the artifact and the physiological EEG [7; 8; 9] distributions. In this direction, we recently introduced an empirical method, the WQN algorithm [10; 11], designed to remove artifacts from single-channel EEGs for real-time applications in clinical monitoring. 
This adaptive approach allows attenuating transient artifacts in the EEG by normalizing the wavelet coefficient distribution during the artifact so that it matches the one obtained from a preceding uncontaminated interval. While our previous studies have demonstrated the high effectiveness of the WQN algorithm in removing transient artifacts from EEG [10; 11], the underlying reasons for its efficiency remain unclear. In the current article, we provide a comprehensive analysis of the WQN algorithm in two directions: First, we study the statistical properties of the EEG signal, and second, we derive several properties of the WQN algorithm. The manuscript is organized as follows. In Section 2, we use multifractal analysis techniques [12; 13; 14] in order to characterize the scaling behaviour of different classes of EEG signals and artifacts, showing that scale-invariance is not always observed. This result justifies the need to modify wavelet coefficients separately for each scale, as proposed by the WQN algorithm. In Section 3, we describe the WQN algorithm and study its properties. In particular, we show that the WQN algorithm cannot introduce unwanted singularities in the signal. We then show that the WQN algorithm is robust with respect to changes of the wavelet basis. We also put in evidence its non-local action on the wavelet coefficients and explore some of its consequences; finally we demonstrate how it performs with respect to pathological cases where signals are perturbed by different types of singularities and random noise. ## 2 Scaling properties of the EEG signal In the spectral analysis of the EEG signal, the scale invariance property is characterized by a power-law decay present in the power spectrum \[S(f)\sim 1/f^{a}, \tag{1}\] with exponent \(a\). A scale invariant behavior can provide valuable insights into the underlying mechanisms of neuronal networks [15]. In this section we investigate in which contexts the EEG signal shows such scale-invariant behaviour. We recall that power spectrum of the activity of local neuronal ensembles can be fitted by a power law [16], however the EEG signal often contains additional oscillatory activity revealed by the presence of specific brain waves, such as the \(\alpha\) wave in the range 8-12 Hz, which cannot be filtered without significantly altering the properties of the signal. ### Wavelet multifractal analysis framework To estimate the scaling properties of the EEG, we resort to the setting supplied by multifractal analysis [13; 17; 18], which extends the notion of scale invariance to signals that cannot be characterized by a single scaling exponent; this approach allows to overcome the limitation to second-order statistics of the power spectrum by replacing the Fourier transform with a multiresolution tool such as the wavelet transform. Practically, in the wavelet-based multifractal framework, the scale invariance of a signal \(x(t)\) is measured through a _wavelet scaling function_\(\eta_{x}(q)\), which encapsulates the scaling exponents associated with the moments of order \(q\) (\(q>0\)) of the wavelet coefficients. This approach can be seen as a generalization of traditional spectral scale-invariance analysis (as in eq. (1)), which is recovered for \(q=2\)[18]. To define the scaling function \(\eta_{x}\), we start with the wavelet decomposition for functions defined on \(\mathbb{R}\)[19; 20; 21; 13]. 
We use a _mother wavelet_\(\psi\), such that an orthonormal basis of \(L^{2}(\mathbb{R})\) is obtained by dilations and translations of \(\psi\): \[\left\{\psi_{j,k}\left(t\right)=2^{-j/2}\,\psi\left(2^{-j}\,t-k\right),\,j,k \in\mathbb{Z}\right\}. \tag{2}\] For a given signal \(x(t)\), its decomposition on the wavelet basis is given by \[x(t)=\sum_{j,k\in\mathbb{Z}}c_{j,k}\,\psi_{j,k}(t), \tag{3}\] where \(c_{j,k}=\langle x,\psi_{j,k}\rangle\) are the wavelet coefficients1 of \(x\) at scale \(j\). We define the _wavelet structure functions_ of \(x\) based on its normalized wavelet coefficients, as follows: Footnote 1: The scalar product is defined by \(\langle f,g\rangle=\int_{\mathbb{R}}f(t)\,g(t)dt\). \[\forall q>0,\qquad S_{x}(j,q)=2^{j}\sum_{k}\left|2^{-j/2}c_{j,k}\right|^{q}, \tag{4}\] and finally the _wavelet scaling function_\(\eta_{x}(q)\) is implicitly defined by \[\forall q>0,\qquad S_{x}(j,q)\sim 2^{\eta_{x}(q)j}, \tag{5}\] in the limit of small scales, i.e. when \(j\to-\infty\). The mathematical interpretation is \[\eta_{x}(q)=\liminf_{j\to-\infty}\frac{\log_{2}(S_{x}(j,q))}{j}.\] This quantity provides information about the global regularity of \(x\). The particular value \(q=2\) allows to recover the information supplied by the traditional spectral analysis [18], i.e. the _Hurst exponent_\(\alpha=\eta_{x}(2)+1\) for eq. (1). Recall that the Sobolev spaces \(H^{s,q}(\mathbb{R})\) are composed of functions whose fractional derivatives belong to \(L^{q}(\mathbb{R})\); \(\eta_{x}\) has the following function space interpretation: \[\eta_{x}(q)=q\cdot\sup\{s:f\in H^{s,q}(\mathbb{R})\}.\] In practice, we shall compute the wavelet scaling function \(\eta_{x}\) using a log-log plot regression, i.e. fitting \(\log_{2}S_{x}(j,q)\) as a linear function of \(j\), in the range of scales where the EEG signal is most informative (typically in the range 0.1-100 Hz). Note that the multifractal framework is pertinent only if the log-log plot is approximately linear in a sufficiently large range of scales, i.e. if the data exhibit an average scaling invariance. We will examine below whether this holds or not for different types of EEG signals. Another relevant parameter which has proved useful in the setting of scale invariance is the _uniform Holder exponent_\(H_{x}^{\min}\) which can be defined as follows. The largest normalized wavelet exponent at scale \(j\) is computed as \[D_{x}(j)=\sup_{k}2^{-j/2}|c_{j,k}|,\] then the scaling exponent \(H_{x}^{\min}\) is implicitly defined by \[D_{x}(j)\sim 2^{H_{x}^{\min}j}\quad\text{ when }j\to-\infty,\] i.e. \[H_{x}^{\min}=\liminf_{j\to-\infty}\frac{\log_{2}(D_{x}(j))}{j}.\] Its function space interpretation is obtained through the Holder spaces \(C^{s}(\mathbb{R})\): \[H_{x}^{\min}=\sup\{s:x\in C^{s}(\mathbb{R})\}.\] Similarly to the scaling function, the coefficient \(H_{x}^{\min}\) can be estimated by log-log regression of \(D_{x}(j)\) versus \(2^{j}\). As above, this approach is only applicable when the log-log plot is well described by a straight line on a relevant range of scales. ### Multifractal analysis of the EEG signal We now investigate the scaling properties of EEG signals under this multifractal framework. 
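As a practical companion to these definitions, here is a minimal sketch of how the structure functions of eq. (4) and a scaling exponent in the spirit of eq. (5) could be estimated numerically with a discrete wavelet transform. It is an illustrative simplification: PyWavelets decomposition levels stand in for the dyadic scale index \(j\), the regression is not restricted to a physiologically informative band of scales, and the normalization and sign conventions differ slightly from those used in the paper.

```python
# Illustrative estimation of wavelet structure functions and a scaling exponent
# by log-log regression; pywt levels stand in for the dyadic scale index j.
import numpy as np
import pywt

def structure_functions(x, q_values=(1.0, 2.0), wavelet="db3", max_level=8):
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[1:]                       # detail coefficients, coarsest to finest
    levels = range(max_level, 0, -1)           # pywt level of each detail array
    S = {q: {} for q in q_values}
    for j, c in zip(levels, details):
        c = np.asarray(c)
        for q in q_values:
            # Simplified analogue of eq. (4) with normalized coefficients
            S[q][j] = 2.0 ** j * np.sum(np.abs(2.0 ** (-j / 2.0) * c) ** q)
    return S

def scaling_exponent(S_q):
    """Least-squares slope of log2 S(j, q) against the level j (cf. eq. (5))."""
    js = np.array(sorted(S_q))
    logs = np.log2([S_q[j] for j in js])
    slope, _intercept = np.polyfit(js, logs, 1)
    return slope

# Toy example: a noisy oscillation standing in for a short EEG trace
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 4096)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
S = structure_functions(x)
print("estimated slope for q = 2:", scaling_exponent(S[2.0]))
```

In practice one would also check that the log-log plot is approximately linear over a sufficiently wide range of scales before interpreting the fitted slope, exactly as discussed below for the different classes of EEG signals.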
To test whether EEGs present a general scale-invariant structure, we analyze signals acquired in different conditions: * EEG recorded during active task execution [22]; * EEG for subjects at rest, with eyes closed [23; 24]; * EEG recorded during sleep [25; 24]; In general, the high frequency content of EEGs is higher during mental activity and decreases with rest and sleep, as slow and more regular patterns progressively emerge. We visualize this tendency in figure 1, where we present a comparison of three EEG signals: during task (fig. 1-A1), rest (fig. 1-A2), and sleep (fig. 1-A3). The emergence of the slow patterns associated with sleep is appreciable already in the time domain (compare A1 to A3). The power spectrum of the EEG during task execution can be well fitted by a power law (fig. 1.B1), while rest and sleep EEG show significant deviations (fig. 1.B2-B3). In particular, EEG during rest is characterized by a strong alpha wave (8 Hz to 12 Hz), manifested as a peak in the power spectrum in the alpha-band frequencies (fig. 1.B2). The EEG during sleep presents a predominance of lower frequencies in the delta (1 Hz to 4 Hz) and theta (4 Hz to 8 Hz) bands (fig. 1.B3), which characterize non-rapid eye movement sleep. Thus, both rest and sleep EEG examples present a clear deviation from a typical \(1/f^{a}\) scale-invariant behaviour. We thus adopted the multifractal framework described above to test whether it is possible to characterize these types of EEG under a more general definition of scale invariance. For each EEG sample, we computed the wavelet structure functions \(S_{x}(j,q)\) and estimated the uniform Holder exponent \(H_{x}^{\min}\) using Daubechies wavelets with 3 vanishing moments [19] (the result confirms that this choice yields sufficiently smooth wavelets to analyze such data). In fig. 1.C we show the structure functions \(S_{x}(j,q)\) for \(q=1,2\). For the EEG during task, \(\log_{2}S_{x}(j,q)\) can be well fitted by a linear function of \(j\) (fig. 1.C1, dashed lines), making it possible to define a multifractal scaling via the scaling function \(\eta_{x}(q)\). Note that we restricted the fit to the frequency range 0.1 Hz to 50 Hz which contains the most relevant physiological information and is not affected by filtering and acquisition limitations (indicated by the shaded area in fig. 1.C). Both EEG during rest and sleep show significant deviations from linear behaviour of the structure functions in the log-log plots (fig. 1.C2-C3), which prevents possible estimation of the respective scaling functions. Similarly, while we can reasonably define a \(H_{x}^{\min}\) exponent for the EEG during active task execution (fig. 1.D1), the result is unreliable for the other two cases (fig. 1.D2-D3). To conclude, we report here that scale invariance of the EEG signal depends on the context of acquisition and is not a general feature. In particular, while scale invariance can be defined for EEG during mental activity, the same cannot be said of those brain states which are characterized by more regular rhythms and patterns such as sleep or rest. This result is not surprising as the EEG reflects processes occurring at various timescale (from milliseconds [26] to minutes), generated by processes of different natures: ionic channels, neuronal spiking and bursting, transient changes of the membrane potential, or oscillations that reflect communication between different brain regions [27]. 
Depending on the context, the EEG signal can thus become dominated by patterns at specific timescales which break a possible underlying scale invariance. This observation justifies, in the context of EEG artifact correction, the use of wavelet algorithms such as WQN, which act on each scale independently. ### Wavelet coefficient distributions for EEG artifacts We now characterize the EEG and artifactual signals by examining the distribution of their wavelet coefficients. We consider EEGs containing two common types of artifacts, ocular (EOG) and muscular (EMG), and compare them to uncontaminated EEGs. Ocular artifacts are caused by eye movements, which can alter the recorded electric potential since the eye acts as a dipole (with a difference of potential between the cornea and the fundus), while muscular artifacts derive from the interfering electrical activity produced by muscle contraction. We decomposed two-second signals from the Denoise-Net dataset [28] using a Daubechies wavelet with two vanishing moments to obtain the coefficient distributions for each scale, shown in fig. 2.A. The distributions of wavelet coefficients for the uncontaminated EEG can be approximated by a normal distribution (fig. 2.A, first row, dashed line) with minor deviations. In the case of ocular artifacts (fig. 2.A, second row), the non-zero coefficients are concentrated at large scales (low frequency) with a distinctive asymmetric distribution. Wavelet coefficients for muscular artifacts (fig. 2.A, third row) show a deviation from the normal distribution, but are spread across multiple scales. We investigate a possible scale invariant behaviour by plotting the variance of the wavelet coefficients versus the scale \(j\) (fig. 2.B), corresponding to the wavelet scaling function \(S(j,q=2)\). We notice that the variances of the uncontaminated EEGs can be approximated by a linear relation in log-log scale, thus showing some form of scale invariance, as shown in fig. 1. The variance of ocular artifacts decays rapidly with smaller scales, while muscular artifacts are characterized by an almost constant variance across all scales. To conclude, we have shown that EEG artifacts present significant deviations from the uncontaminated EEG signal, both in the distribution of the wavelet coefficients and in their persistence across scales. Moreover, significant differences in the scaling behaviour can be observed between distinct families of artifacts, such as ocular and muscular.

Figure 1: **Multifractal analysis of the EEG signal.** **A.** Samples of EEG signals recorded during task execution (A1), rest (A2), and sleep (A3). **B.** Power spectral density plots for the three cases (log-log). **C.** Wavelet structure functions for \(q=1,2\), as described in eq. (4). The shaded area indicates the range of scales corresponding to a frequency range \(0.1\,\mathrm{Hz}\) to \(50\,\mathrm{Hz}\). The dashed lines indicate the linear fit that can be used to estimate the scaling function (see eq. (5)). **D.** Estimations of uniform Hölder exponent \(H^{\mathrm{min}}\). EEG during task execution shows a good fit (D1), while rest and sleep give unreliable results (D2–3).

## 3 Properties of the WQN algorithm We briefly recall the wavelet quantile normalization (WQN) algorithm [10]. WQN is an adaptive method that attenuates transient artifacts based on statistics estimated from clean regions of the signal.
The algorithm was especially designed to be applied on EEG signals in the context of brain monitoring, where signal corruption is a consequence of artifacts generated by motion of electrodes, muscular activity or eye motion that alter the physiological signal generated by neuronal activity [29; 1]. First, the signal is decomposed on a wavelet basis according to eq. (3). We assume here that the artifacted intervals are well identified and isolated. They usually consist of a small portion of the total EEG signal. Once an artifact segment has been localized, it is associated to a clean reference segment where we expect the underlying signal to be similar but uncontaminated by artifacts. Note that these uncontaminated statistics depend on the patient and timing so that they cannot be acquired in advance. In the original implementation, such reference intervals are defined by considering a temporally adjacent uncontaminated portion of the signal of roughly the same length as the contaminated portion. For each decomposition scale \(j\), we define the wavelet coefficients associated with the artifacted and reference intervals by \(c_{j}^{\rm(art)}\) and \(c_{j}^{\rm(ref)}\) respectively. In practice, since we work with finite intervals, the wavelet decomposition is carried on up to scale \(M\) (i.e. \(j=1,\ldots,M\)) guaranteeing the presence of a sufficient number of coefficients for the largest scale (e.g. \({\rm Card}\left\{c_{M,k}^{\rm(art)}\right\}>30\)). In the second step, the wavelet coefficients \(c_{j}^{\rm(art)}\) of the artifacted sample are modified in order to fit the statistics of the reference coefficients \(c_{j}^{\rm(ref)}\) at the corresponding scale \(j\). This normalization is performed, scale by scale, by computing the empirical cumulative density functions (CDF) \(F_{j}^{\rm(ref)}\), \(F_{j}^{\rm(art)}\) of the coefficients amplitude for artifacted and reference signal respectively, defined as \[F_{j}(x)=\frac{1}{N_{j}}\sum_{k=1}^{N_{j}}1_{|c_{j,k}|<\,x}, \tag{6}\] where \(N_{j}\) indicates the number of coefficients at scale \(j\) and \(1_{|c|<x}\) is the indicator function which takes value 1 if \(|c|<x\) and 0 otherwise. The wavelet coefficients \(c_{j}^{\rm(art)}\) are then modified so that the distribution of their amplitude matches that of the reference segment, via the mapping \(T_{j}\) defined as \[T_{j}(x)=F_{j}^{\rm(ref)\,-1}\left(F_{j}^{\rm(art)}(x)\right), \tag{7}\] where \(F_{j}^{\rm(ref)\,-1}\) indicates the generalized inverse of \(F_{j}^{\rm(ref)}\) (in the sense of completed graphs for discontinuous increasing functions) see [11] and Fig. 3. Finally, the normalization function \[\lambda_{j}(c)=sign(c)\cdot\min\left\{\left|c\right|,\,\left|T_{j}\left(c \right)\right|\right\}, \tag{8}\] maps a coefficient \(c\) from \(c_{j}^{\rm(art)}\) to its possibly attenuated value. The corrected coefficients are thus defined by \[c_{j,k}^{\rm(corr)}=\lambda_{j}\left(c_{j,k}^{\rm(art)}\right). \tag{9}\] Equation (8) ensures that the norm of wavelet coefficients is never increased, a key requirement to guarantee the regularity of the algorithm, as we will show in section 3.2. The corrected version of the signal is obtained by replacing the artifacted coefficients \(c_{j,k}^{\rm(art)}\) by the corrected coefficients \(c_{j,k}^{\rm(corr)}\) and then inverting the discrete wavelet transform. ### Illustration of the WQN algorithm We present two examples to illustrate how the WQN algorithm works and how it adapts to artifacts affecting different scales. 
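Before turning to these examples, the sketch below gives a simplified, self-contained re-implementation of the remapping defined in eqs. (6)-(9). It assumes that the artifacted segment and a clean reference segment have already been identified, treats approximation and detail coefficients alike, and is meant only to illustrate the mechanism; it is not the code released with the original papers.

```python
# Minimal WQN sketch: map artifacted coefficient amplitudes, scale by scale,
# onto the empirical amplitude distribution of a clean reference segment.
import numpy as np
import pywt

def wqn_correct(artifacted, reference, wavelet="sym5", level=5):
    level = min(level, pywt.dwt_max_level(len(artifacted), pywt.Wavelet(wavelet).dec_len))
    coeffs_art = pywt.wavedec(artifacted, wavelet, level=level)
    coeffs_ref = pywt.wavedec(reference, wavelet, level=level)
    corrected = []
    for c_art, c_ref in zip(coeffs_art, coeffs_ref):
        amp = np.abs(c_art)
        # Empirical CDF of the artifacted amplitudes, eq. (6)
        cdf = np.empty_like(amp)
        cdf[np.argsort(amp)] = np.arange(1, amp.size + 1) / amp.size
        # Transport through the inverse reference CDF (quantile function), eq. (7)
        mapped = np.quantile(np.abs(c_ref), cdf)
        # Attenuation only, eq. (8): corrected magnitudes never exceed the originals
        corrected.append(np.sign(c_art) * np.minimum(amp, mapped))
    return pywt.waverec(corrected, wavelet)

# Toy usage: attenuate a large transient added to a stand-in "clean" segment
rng = np.random.default_rng(1)
clean = rng.standard_normal(1024)
artifact = np.zeros(1024)
artifact[400:600] = 8.0
restored = wqn_correct(clean + artifact, clean)
```

Because the corrected magnitude is the minimum of the original and the transported value, the sketch preserves the wavelet-decreasing property that underlies the regularity results discussed below. The illustrative examples on real artifacts now follow.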
To this aim, we have contaminated an EEG signal by adding an ocular artifact (EOG) and a muscular artifact (EMG), as illustrated in fig. 3 (A1-B1, first row). In fig. 3A2-B2 (left) we represent how the wavelet coefficients are transported from the artifacted cumulative density function to the reference one. In the case of ocular artifact (fig. 3A2) most of the transport occurs in the first scales (level 5), corresponding to low frequency components, while shorter scales (levels 1-5) which are not affected by the EOG artifact are left almost unmodified. The reconstruction of the signal, which can be seen as the sum of the corrected wavelet projections, is shown in fig. 3A1 (blue). The energy of the EOG artifact is mostly concentrated in the low frequencies, corresponding to the approximation coefficients \(c_{5}^{app}\) (the coarsest scale of the wavelet decomposition). Appropriately, most of the correction takes place at this scale (fig. 3A2, first row). In a second example (fig. 3B) we present an EEG signal corrupted by a muscular artifact (EMG), which is characterized by a wide frequency signature (as shown in fig. 2), with high frequency components significantly more powerful than the EEG signal. In fig. 3B2, we show how the WQN algorithm adapts to this different artifactual signature by attenuating the artifactual component in both shorter scales (levels 1-3) and larger scales (level 5). In conclusion, although the EMG and EOG artifacts are characterized by different statistics, the adaptive approach of the WQN algorithm makes it effective at reconstructing the original signal by performing appropriate corrections on a scale-by-scale basis. ### Regularity properties of the WQN algorithm In this section, we use the wavelet decomposition to derive a regularization property of the WQN algorithm. We recall that wavelets are unconditional bases of most classical function spaces [12], such as Sobolev spaces \(H^{s,p}\) for \(1<p<\infty\) or closely related Besov spaces \(B_{p}^{s,q}\) for \(0<p,q<\infty\) see e.g. [20; 13]. This implies that these spaces have a wavelet characterization which bears on the moduli of the wavelet coefficients, and which is an increasing function of each of these moduli. For example, a function \(f\) belongs to \(B_{p}^{s,q}\) if its wavelet coefficients satisfy the condition \[\left(\sum_{n}(2^{(1/p-1/2-s)m}|c_{m,n}|)^{p}\right)^{1/p}\in l^{q}, \tag{10}\] and (10) yields a norm (or a quasi-norm, when \(p\) or \(q\) is less than 1) which is equivalent to the Besov norm. By construction, the WQN algorithm is wavelet decreasing (i.e. it does not increase the size of the wavelet coefficients) and thus has the following regularity property: For \(0<p,q\leq\infty\), and for any \(s\in\mathbb{R}\), it maps functions of a Besov or Sobolev space into the same space. Since the mapping is not linear, this property does not imply that it is continuous on the corresponding functional space; this question is relevant as the continuity of a denoising algorithm is a prerequisite to guarantee its numerical robustness. In order to investigate this problem, it is useful to compare the present algorithm with the classical wavelet thresholding and wavelet shrinkage algorithms, which are conceptually simpler, and where the same problem arises. 
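As a small numerical companion to this boundedness property, the following sketch evaluates the sequence quantity of eq. (10) from detail coefficients and checks that a coefficient-wise attenuation in the spirit of eq. (8) cannot increase it. The level indexing and normalization are simplified and the code is purely illustrative.

```python
# Illustrative Besov-type sequence norm built from eq. (10); levels indexed by m.
import numpy as np
import pywt

def besov_seq_norm(details, s=0.5, p=2.0, q=1.0):
    per_level = []
    for m, c in enumerate(details, start=1):
        weighted = (2.0 ** ((1.0 / p - 0.5 - s) * m) * np.abs(c)) ** p
        per_level.append(np.sum(weighted) ** (1.0 / p))
    return float(np.sum(np.asarray(per_level) ** q) ** (1.0 / q))

x = np.random.default_rng(2).standard_normal(2048)
details = pywt.wavedec(x, "db5", level=6)[1:]
# Any map that does not increase coefficient magnitudes (here a fixed cap,
# echoing the min in eq. (8)) cannot increase this quantity.
attenuated = [np.sign(c) * np.minimum(np.abs(c), 1.0) for c in details]
assert besov_seq_norm(attenuated) <= besov_seq_norm(details)
```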
Figure 2: **Characterization of ocular and muscular artifacts compared to EEG in the wavelet domain.** **A.** Distribution of the wavelet coefficients at different scales for EEG, EOG (ocular artifacts), and EMG (muscular artifacts). The EEG distributions can be fitted by a Gaussian distribution with minor deviations (dashed line, standard deviation \(\sigma\)). **B.** Plot of the variance versus scale. Variances for the pure EEG can be well approximated by a linear function (in log-log scale), compatibly with the scale invariance shown in fig. 1, while artifacts show deviations.

Figure 3: **Correction of EEG signals perturbed by ocular and muscular artifacts.** **A1.** EEG signal corrupted by ocular artifact (EOG) and its reconstruction by WQN. **A2.** Left: cumulative density function and its mapping from artifact (orange) to reference (black). Right: projections on the wavelet basis showing attenuation of the artifactual components on different scales. **B1–B2.** Similar presentation of the WQN correction for an EEG signal contaminated by muscular artifact (EMG).

#### 3.2.1 Comparison with wavelet thresholding and wavelet shrinkage Wavelet thresholding and wavelet shrinkage were introduced to perform denoising of signals or images without smoothing the signal to be recovered (in contradistinction with convolution-based techniques). In this context, the eliminated "noise" is informally assumed to be characterized by statistics of wavelet coefficients which, at a given scale, are assumed to be stationary, of small amplitude, and with short-range correlations. The algorithm is efficient if the statistical properties of the signal to be recovered strongly differ from those of the noise, i.e. if its wavelet coefficients form a _sparse sequence_ (most of them almost vanish), and the other coefficients are large. Note that this situation is opposite to the one we consider in the present article, where the artifacts to be eliminated have a sparse signature while the signal to be recovered presents the statistical properties of such a noise. However, wavelet thresholding and wavelet shrinkage have also been used in such contexts: once the splitting has been performed, one keeps the "noisy" part instead of the sparse one [8; 30; 9]. It is therefore legitimate to compare their performance with the WQN algorithm. We briefly recall wavelet thresholding. Given a threshold level \(t>0\), wavelet thresholding in a given wavelet basis is defined as follows: once an appropriate normalization of the wavelet coefficients has been chosen, the wavelet thresholding mapping is the operator \(T\) which maps the wavelet coefficient \(c_{m,n}\) to \[d_{m,n}=f_{t}(c_{m,n}),\] where \[f_{t}(x)=x1_{[-t,t]}(x). \tag{11}\] The mapping \(T\) is wavelet decreasing, and therefore maps functions in a Besov or Sobolev space to the same function space. Nonetheless, the function \(f_{t}\) is discontinuous and, consequently, the operator \(T\) is not continuous on any of these spaces. Indeed, to show this property, we consider two functions \(f\) and \(g\) with wavelet coefficients \(c^{1}_{m,n}\) and \(c^{2}_{m,n}\) respectively, which coincide except for one wavelet coefficient, such that this coefficient for \(f\) and for \(g\) respectively is \(t-\varepsilon\) and \(t+\varepsilon\).
Taking \(\varepsilon\) arbitrarily small, the Besov norm of \(f-g\) can be made arbitrarily small, but the Besov norm of \(T(f)-T(g)\) is (up to the normalization factor of the corresponding wavelet coefficient) \(t+\varepsilon\), and therefore does not tend to zero, when \(\varepsilon\to 0\). This lack of continuity implies numerical instabilities of the algorithm which are well documented, see e.g.[31; 32; 33]. We now discuss the wavelet shrinkage algorithm, which is based on the function \[g_{t}(x)=sgn(x)\cdot(|x|-t)^{+}. \tag{12}\] Once an appropriate normalization of the wavelet coefficients has been chosen, the wavelet shrinkage operator \(U\) is defined as mapping the wavelet coefficient \(c_{m,n}\) to \[e_{m,n}=g_{t}(c_{m,n}).\] The mapping \(U\) is wavelet decreasing, and therefore maps functions in a Besov or Sobolev space to the same function space. But, additionally, in contradistinction with the previous case, \(g_{t}\) is continuous, and therefore two functions \(f\) and \(g\) whose wavelet coefficients are close are now mapped to functions \(U(f)\) and \(U(g)\), which also have close wavelet coefficients. More precisely, since \(g_{t}\) is Lipschitz, it follows easily that, for a given Besov or Sobolev space \(E\), \(\parallel U(f)-U(g)\parallel_{E}\leq C\parallel f-g\parallel_{E}\), i.e. the mapping \(U\) is Lipschitz in the corresponding Besov space. Refined stability properties of wavelet shrinkage can be attributed to \(g_{t}\) continuity, i.e. (at least implicitly) to the continuity of the mapping \(U\), see [31; 32; 33]. One drawback of both of these algorithms is that they are _local in the wavelet domain_, i.e. each wavelet coefficient is modified independently of the other ones, and therefore, they do not preserve the statistics of wavelet coefficients [11]. This phenomenon had no negative impact when wavelet thresholding and wavelet shrinkage were used for their initial purpose, i.e. to restore the "sparse" part of the signal, but it becomes a major drawback when it is used in the opposite direction of recovering the "noisy" component, in which case restoring the right statistics of the signal can be a major issue. For instance, where the eliminated artefact was localized, wavelet thresholding or shrinkage set the wavelet coefficients to zero, thus leading to inhomogeneities in the restored signal. One of the purposes of the WQN algorithm is to circumvent this drawback by restoring everywhere the correct anticipated statistics of wavelet coefficients [10]. As a consequence, it is not local in the wavelet domain. The value attributed to a coefficient depends on the entire statistic of coefficients at a given scale, and therefore the analysis of the regularity properties of the algorithm is more involved than for wavelet thresholding and wavelet shrinkage; nonetheless, a preliminary investigation of its main features will be performed in subsection 3.5. ### Robustness with respect to wavelet basis To examine the performance of the WQN algorithm, we use five classical wavelet bases, currently used in signal processing (sym5, db5, coif3, bio3.5 and dmey) [21]. After we added an electrode moving artifact on an EEG signal, we use the WQN to remove the artifact (Fig. 4), resulting in the green curves in the different sub-figures. To quantify this performance, we computed the Average Root Mean Squared Error on the ensemble of data consisting in two types of artifacts (EOG ad EMG). 
We can conclude that the RMSE is quite independent of the wavelet bases with a mean around 0.03 for both types of artifacts (fig. 4.A). Finally, we tested the effect of increasing the number of vanishing moments of the wavelets. Again, we found that there was no noticeable consequence on the corrected artifact with the three following wavelet bases: Daubechies, Symlets and Coiflets. To conclude, the different wavelets bases have little influence on the corrected artifacts. ### Boundedness properties of the WQN algorithm in functional spaces As already mentioned above, since the WQN algorithm does not increase the size of the wavelet coefficients, it maps functions of a Besov or Sobolev space into the same space. One can also consider other function spaces which are based on histograms of wavelet coefficients at each scale and thus encapsulate more functional information than Besov spaces do [12; 34]. The maximal information which is invariant under the change of (smooth) wavelet basis, is encapsulated through the _wavelet profile_ of \(f\), which is defined as follows. For a function \(f\), we define \[F_{m}(\alpha)=\mathrm{Card}\left\{n:\ \ |c_{n,m}|\geq 2^{(\alpha+1/2)m}\right\};\] and the wavelet profile \(\nu_{f}(\alpha)\) is \[\nu_{f}(\alpha)=\lim_{\varepsilon\to 0}\ \left[\limsup_{j\to\infty}\left( \frac{\log(F_{m}(\alpha+\varepsilon))}{\log(2^{j})}\right)\right].\] This definition formalizes the following heuristic: There are about \(2^{-\nu_{f}(\alpha)m}\) wavelet coefficients larger than \(2^{(\alpha+1/2)m}\). The corresponding function spaces are defined similarly: Let \(\nu(\alpha)\) be a nondecreasing function which takes values in \(\{-\infty\}\cup[0,1]\). A function \(f\) belongs to the space \(S^{\nu}\) if its wavelet coefficients satisfy. \(\forall\alpha\in\mathbb{R}\), \(\forall\varepsilon>0\), \[\forall C>0,\ \exists M\ \forall m\leq M\ \ \ F_{m}(\alpha)\leq 2^{-(\nu( \alpha)+\varepsilon)m}.\] Since the several wavelet algorithms that we considered are wavelet decreasing, it follows that these algorithms map functions which belong to a \(S^{\nu}\) space to the same \(S^{\nu}\) space. Figure 4: **Correction of electrode movement artifacts using the WQN algorithm. We show the time-plot of the corrections (green) of an added artifact (orange) using different wavelet bases: sym5, db5, coif3, bio3.5 and dmey [35].** Figure 5: **WQN performance under different wavelet bases.****A.** Average Root Mean Squared Error (RMSE) of WQN for bootstrapped EEG signals contaminated by EOG and EMG, for the following wavelet bases: Symlet with 5 vanishing moments (sym5), Daubechies with 5 vanishing moments (db5), Coiflet with 6 vanishing moments (coif3), biorthogonal spline with 3 and 5 vanishing moments in synthesis and analysis wavelet respectively (bior3.5), discrete Meyer wavelet (dmey). **B.** WQN performance for different number of vanishing moments, calculated on the EOG contaminated dataset. ### Continuity properties After having considered the issue of the boundedness of the wavelet transport algorithm on several classes of function spaces, we now turn to the problem of its continuity. We start by a simple remark which will allow to position the problem correctly. Assume that the unaltered data on which the histograms of wavelet coefficients are recorded has a given finite length \(L\). Then one computes the number \(N_{m}\sim L2^{-m}\) of wavelet coefficients at scale \(2^{m}\), which constitute the reference signal on which the altered data will be mapped. 
The transport algorithm maps the wavelet coefficients of the altered signal on this finite set of cardinality \(N_{m}\). Now assume that, for the altered signal, two wavelet coefficients of consecutive size are extremely close; if their sizes are exchanged, the coefficients \(c_{m,n}\) and \(c_{m,l}\) on which they are mapped will also be exchanged; it follows that, no matter how close the two starting functions are picked, one wavelet coefficient of their image will differ by the value \(c_{m,n}-c_{m,l}\); it follows that, strictly speaking, the WQN algorithm is not continuous on any Besov or Sobolev space. However, this theoretical argument does not necessarily constitute a drawback in applications if we make the assumption that the coefficient distribution for the reference (unaltered) data is continuous and is sampled with enough precision; indeed, in practice consecutive wavelet coefficients will be mapped to very close values, and the discontinuity of the mapping would be of no practical consequence, since its "jumps" would be of very small size. Note that this assumption is satisfied by the data we consider since the histograms of wavelet coefficients follow a generalized Gaussian distribution (Fig. 2). At this point, another phenomenon has to be taken into account: even if the repartition function of coefficients for the reference data is continuous (so that the "target" coefficients are very close), it is still possible that the altered signal has a large number of coefficients which are close to each other, so that a very small perturbation in the size of coefficients would exchange two coefficients of very different ranks. This argument shows that a continuity result for the functional operator underlying the algorithm cannot follow from making only the assumption of a regularity of the target probability density function (PDF). However, though such situations yield mathematical counterexamples, they are not met in practice, since the data on which the algorithm is applied also display smooth PDFs (Fig. 2), and therefore do not exhibit large clusters of coefficients taking almost the same value. ## 4 WQN algorithm on pathological examples To further highlight the properties of the WQN algorithm, we present its application to pathological examples and compare its behaviour with soft (eq. 12) and hard (eq. 11) thresholding. We initially study how the algorithm operates on a single level of wavelet coefficients to highlight the non-local action of the WQN, which we previously discussed in section 3.5. We consider as a reference signal (in the wavelet space) a ramp \(c_{\text{ref}}(t)=t\), as the simplest signal having a simple invertible CDF. We then perturb the wavelet coefficients by adding three types of artifacts to the reference coefficients: a square artifact \(c_{\text{sq}}(t)=(H(t-t_{1})-H(t-t_{2}))\) where \(H\) is the Heaviside step function, a triangle \[c_{\text{tr}}(t)=\begin{cases}\dfrac{2(t-t_{1})}{t_{2}-t_{1}},&\text{if }\,t_{1}\leq t\leq\dfrac{t_{1}+t_{2}}{2},\\ \dfrac{2(t_{2}-t)}{t_{2}-t_{1}},&\text{if }\,\dfrac{t_{1}+t_{2}}{2}\leq t\leq t_{2},\\ 0,&\text{otherwise},\end{cases}\] and a cosine artifact \[c_{\text{cos}}(t)=\begin{cases}\dfrac{1+\cos{(\pi(2(t-t_{1})/T-1))}}{2},&\text{if }\,t_{1}\leq t\leq t_{2},\\ 0,&\text{otherwise},\end{cases}\] with \(T=t_{2}-t_{1}\). Considering \(c_{\text{art}}\in\{c_{\text{sq}},c_{\text{tr}},c_{\text{cos}}\}\), the final artifacted coefficients are given by \[c(t)=c_{\text{ref}}(t)+c_{\text{art}}(t). \tag{13}\]
In fig. 6 we show the correction of coefficients \(c(t)\) when applying the WQN quantile remapping and compare it with soft and hard thresholding. In the case of the square artifact (fig. 6.A), due to the non-locality of the WQN algorithm (see section 3.5), the mapping of the coefficients performed by WQN through the inverse CDF (quantile function) can swap the values of groups of coefficients (compare fig. 6.A, first and second column). For wavelet thresholding, we consider the threshold to be equal to the maximum value of the unperturbed coefficients (which equals 1 in these examples). In the case of the square artifact, this results in perfect isolation of the artifacted coefficients, which are set to zero and 1 by hard and soft thresholding respectively. Interestingly, while the thresholding methods act locally, they do not result in a lower mean squared error with respect to WQN. The cases of the triangle and cosine perturbation are presented in fig. 6.B and C respectively. In both cases, the WQN remapping still shows non-local swapping of coefficients, although the quantile function is continuous. Coefficients with the same amplitude exist in the unperturbed (\(t<t_{1},t>t_{2}\)) and perturbed (\(t_{1}<t<t_{2}\)) intervals, thus thresholding methods cannot perfectly isolate the perturbed coefficients. We note that the difference in smoothness between the triangle and cosine artifacts has no effect on the non-local action of the WQN algorithm, which produces similar reconstructions in the two cases. However, in both cases the thresholding methods do not result in a lower mean squared error with respect to WQN.

Figure 6: **Comparison of WQN and thresholding methods on pathological examples.** **A.** Reference coefficients perturbed by additive square artifact \(x_{\text{sq}}\). The WQN remapping causes swapping of coefficients, while thresholding methods act locally. **B.** Signal with additive triangle perturbation \(x_{\text{tr}}\). **C.** Signal with a smooth perturbation \(x_{\text{cos}}\). Non-local effects of WQN are similar to the triangle case.

As a final example, we evaluated how the WQN algorithm performs on a sine wave perturbed by white noise. This toy example mimics rhythmic brain activity perturbed by a random artifact such as an epileptic seizure. The artifacted signal is given here as the sum \[x_{\text{art}}(t)=\sin(\omega t)+\sigma w(t), \tag{14}\] where \(w\) is a centered Brownian noise of unit variance. In fig. 7.A we show the signal reconstructed by WQN as we vary the noise amplitude \(\sigma=1,2,5,10\). This example shows that WQN allows a robust reconstruction of rhythmic signals even in the presence of strong perturbations (\(\sigma=10\)). We quantify the noise reduction achieved by WQN in fig. 7.B. ## 5 Conclusion and final remarks We explored here the properties of EEG signals for different classes of brain states and artifacts. We presented evidence that, for particular states such as sleep and general anaesthesia, EEGs are not in general fully scale invariant; the deviations are characterized by alternating dominance of specific frequency bands. Moreover, while the wavelet coefficient statistics at different scales are well approximated by Gaussian distributions in the case of clean EEG, those of common artifactual signals such as EOG and EMG are described by generalized Gaussians. We then studied the properties of the WQN algorithm for EEG artifact removal.
We showed how the WQN algorithm transports the wavelet coefficients from the distribution of an artifacted EEG signal into a distribution compatible with the uncontaminated EEG, allowing to restore the signal statistics. We found here that the WQN algorithm smoothens the discontinuities introduced by artifactual signals and provided insight into its regularity, continuity, and boundedness properties. We also highlighted how the remapping of wavelet coefficients through the quantile function can produce non-local effects, as opposed to traditional thresholding methods which always operate locally on the wavelet coefficients. Indeed, WQN can transport a wavelet coefficient to another position in time possibly located far away from the original position. This effect can thus generate local distortion and some information contained in the correlation structure of the wavelet coefficients can be lost. The classical functional space such as Besov spaces are unable to "detect" correlations between locations of wavelet coefficients, but other functional spaces, such as "oscillations spaces" can be more appropriate to detect them. In particular, they are not invariant under the "shuffling" of wavelet coefficients [36]. Further investigations are needed in that direction. Lastly, pathological cases can arise where the unperturbed physiological signal exhibits irregularities such as spikes and waves. For instance, this is particularly evident in certain forms of epilepsy or during deep anesthesia [37]. When an artifact occurs in correspondence of these dynamics, it would be interesting to test how WQN algorithm can map a singular signal into a reference that also contains singularities. It is of interest to evaluate the effectiveness of the WQN algorithm when an artifact occurs in correspondence of these dynamics, analyzing the ability of the WQN algorithm in effectively mapping a singular signal, affected by artifacts, to a reference signal that likewise exhibits singularities. Given its clinical relevance, such analysis warrants further exploration. ## Code and data availability EEG, EMG, and EOG data used in the present article can be obtained from publicly available datasets [22, 23, 24, 25, 28]. The Python code reproducing all results and figures is available on Zenodo ([https://doi.org/10.5281/zenodo.8127712](https://doi.org/10.5281/zenodo.8127712)). ## Competing interests The Authors declare no Competing Financial Interests.
2306.13700
Exploring the Potential of AI-Generated Synthetic Datasets: A Case Study on Telematics Data with ChatGPT
This research delves into the construction and utilization of synthetic datasets, specifically within the telematics sphere, leveraging OpenAI's powerful language model, ChatGPT. Synthetic datasets present an effective solution to challenges pertaining to data privacy, scarcity, and control over variables - characteristics that make them particularly valuable for research pursuits. The utility of these datasets, however, largely depends on their quality, measured through the lenses of diversity, relevance, and coherence. To illustrate this data creation process, a hands-on case study is conducted, focusing on the generation of a synthetic telematics dataset. The experiment involved an iterative guidance of ChatGPT, progressively refining prompts and culminating in the creation of a comprehensive dataset for a hypothetical urban planning scenario in Columbus, Ohio. Upon generation, the synthetic dataset was subjected to an evaluation, focusing on the previously identified quality parameters and employing descriptive statistics and visualization techniques for a thorough analysis. Despite synthetic datasets not serving as perfect replacements for actual world data, their potential in specific use-cases, when executed with precision, is significant. This research underscores the potential of AI models like ChatGPT in enhancing data availability for complex sectors like telematics, thus paving the way for a myriad of new research opportunities.
Ryan Lingo
2023-06-23T15:15:13Z
http://arxiv.org/abs/2306.13700v1
Exploring the Potential of AI-Generated Synthetic Datasets: A Case Study on Telematics Data with ChatGPT ###### Abstract This research delves into the construction and utilization of synthetic datasets, specifically within the telematics sphere, leveraging OpenAI's powerful language model, ChatGPT. Synthetic datasets present an effective solution to challenges pertaining to data privacy, scarcity, and control over variables - characteristics that make them particularly valuable for research pursuits. The utility of these datasets, however, largely depends on their quality, measured through the lenses of diversity, relevance, and coherence. To illustrate this data creation process, a hands-on case study is conducted, focusing on the generation of a synthetic telematics dataset. The experiment involved an iterative guidance of ChatGPT, progressively refining prompts and culminating in the creation of a comprehensive dataset for a hypothetical urban planning scenario in Columbus, Ohio. Upon generation, the synthetic dataset was subjected to an evaluation, focusing on the previously identified quality parameters and employing descriptive statistics and visualization techniques for a thorough analysis. Despite synthetic datasets not serving as perfect replacements for actual world data, their potential in specific use-cases, when executed with precision, is significant. This research underscores the potential of AI models like ChatGPT in enhancing data availability for complex sectors like telematics, thus paving the way for a myriad of new research opportunities. Synthetic Datasets Telematics ChatGPT Prompt Engineering ## 1 Introduction The advent of Artificial Intelligence (AI) and Machine Learning (ML) has revolutionized numerous fields, expanding the realm of possibilities with vast computational abilities and innovative techniques. A critical element fueling this advancement is the accessibility of large-scale datasets. However, data availability often confronts challenges such as privacy concerns, scarcity of information, and constraints in manipulating variables for research. In response, the creation of synthetic datasets has emerged as a promising solution. Synthetic datasets, produced artificially, circumvent many of these issues while preserving the statistical properties of the original data. This allows for experimental control and confidentiality, making them particularly beneficial in data-sensitive fields like healthcare, finance, and the automotive industry. While the potential of synthetic datasets is significant, their effectiveness is fundamentally dependent on their quality, a characteristic primarily defined by three aspects: diversity, relevance, and coherence. Ensuring high standards in these aspects is of utmost importance for synthetic data to be a reliable tool in research and applications. This paper employs OpenAI's language model, ChatGPT, as a tool for generating synthetic data, with a focus on the telematics domain. The telematics sector, encompassing telecommunications and vehicular technologies, stands to benefit substantially from an increased availability of data for research, planning, and technological advancement. This paper showcases the procedure of crafting synthetic telematics data using ChatGPT through a case study, delving into the complications and corresponding solutions tied to this process, and assessing the quality of the resulting dataset. 
The intention lies in offering a glimpse into the potential of AI-generated synthetic datasets, underscoring their promise as a vital resource in the realm of data science. Related Work The emergent utility of synthetic data in recent years, owing to its potential in enhancing privacy and representation, has sparked research interest in various sectors. This section synthesizes the most significant studies in this field, focusing particularly on the application of synthetic data and the role of artificial intelligence. Savage [2] pioneers the discourse in the paper "Synthetic data could be better than real data", asserting that machine-generated data sets could potentially augment both privacy and representation in artificial intelligence. However, Savage cautions that the successful application of synthetic data hinges on the delicate balance between achieving accurate representation and mitigating privacy invasion. Complementing Savage's narrative, Stadler et al. [3] introduce an empirical perspective in their paper "Synthetic Data - Anonymisation Groundhog Day." The authors execute a quantitative evaluation of the privacy gain associated with synthetic data publishing. In a stark comparison with preceding anonymisation techniques, they assert that synthetic data either inadequately thwarts inference attacks or it fails to retain data utility, indicating the need for a balanced approach. Simultaneously, the application of synthetic data in healthcare is aptly highlighted by Gonzales et al. [1] in their paper "Synthetic data in health care: A narrative review." They identify seven potential use cases of synthetic data in healthcare, which encompass research, hypothesis testing, public health research, health IT development, education, training, public data release, and data linking. Hassani and Silva [4] diversify this conversation by examining the implications of artificial intelligence, particularly ChatGPT, in the realm of data science. In their paper "The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field", they elucidate how ChatGPT can augment various aspects of the data science workflow. They contend that it can aid in data cleaning, preprocessing, model training, and result interpretation, and provide fresh insights to drive decision-making. Despite the research surrounding synthetic data and ChatGPT, there exists a conspicuous knowledge gap regarding the use of ChatGPT in creating synthetic telematics data. The current body of literature hence necessitates further exploration into the prospective benefits and challenges of employing ChatGPT for synthetic telematics data generation. ## 3 Theoretical Background and Potential of Synthetic Datasets ### Introduction to Synthetic Data: A Paradigm Shift The digital revolution and the rise of data-centric decision making has catalyzed the emergence of a potent instrument--synthetic data. This data, algorithmically crafted without real-world collection, has gained prominence due to an escalating need for extensive, adaptable data that adheres to privacy and accessibility constraints. This study investigates the concept of synthetic data, exploring its creation, applications, potential shortcomings, and its utility as a powerful resource for beginners in data analysis and data science. 
### Exploring the Characteristics of Synthetic Datasets In essence, a synthetic dataset is a conglomerate of artificially devised data points that ideally retain the statistical properties of a corresponding real-world dataset. This type of data emulates the features and behaviors of actual data without encompassing any confidential or sensitive information. The generation of synthetic data deploys intricate algorithms and techniques, including a range of machine learning methodologies, to assure that the generated data faithfully mirrors the attributes of the original dataset. The ideal synthetic dataset demands meticulous replication of inherent patterns present in the original data. However, the dataset generated with ChatGPT might not conform to such rigorous standards. Considering the data is generated from randomized sources, there is an inevitable compromise in the depth and breadth of information compared to a comprehensive synthetic dataset. Critical to note is the primary objective - not to mirror reality flawlessly, but rather to enhance research capabilities and nurture educational ventures. The generated synthetic dataset in this paper, though perhaps not as exhaustive, provides a controlled environment. This environment allows researchers and learners to conduct experiments, explore, and derive insights, all the while ensuring privacy and confidentiality. In the pursuit of knowledge, such a dataset proves adept in facilitating understanding and analysis of intricate data structures and patterns. The Implications and Applications of Synthetic Datasets ### A Panorama of Applications for Synthetic Datasets Synthetic datasets permeate a multitude of disciplines, with the following areas being notably dominant: * **Machine Learning and Artificial Intelligence:** Synthetic data can prove instrumental in training machine learning models, particularly when the procurement of real-world data is restricted or infeasible due to privacy considerations. * **Testing and Validation:** Synthetic data is frequently employed in software testing to assess system performance across varying scenarios, ensuring no confidential data is disclosed during the process. * **Research and Development:** When accessing real-world data is challenging or ethically dubious, researchers often depend on synthetic data for experimental design and hypothesis testing. * **Data Augmentation:** Synthetic data can rectify dataset imbalance by generating data for underrepresented classes, thereby enhancing machine learning model performance. * **Privacy-Preserving Data Sharing:** In sectors dealing with sensitive data, such as healthcare, finance, and automotive industries, synthetic datasets facilitate secure data sharing while preserving privacy. * **Education and Learning:** Synthetic data can play a pivotal role in education, particularly for experiential learning in data science and machine learning. These datasets can be tailored to demonstrate specific concepts or techniques, negating the need for exhaustive data collection or dealing with privacy issues. This allows students, educators, and competition participants to experiment, make mistakes, and learn, all without jeopardizing real-world sensitive data. Thus, synthetic datasets play a vital role in our progressively data-driven world, striking a balance between utility, privacy, and accessibility. Given the emphasis on the educational applications of synthetic data, it becomes highly relevant for the objectives of this study. 
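One practical way to assess whether a generated dataset actually retains the statistical properties of its real counterpart, as discussed above, is to compare summary statistics and empirical distributions column by column. The sketch below illustrates such a check; the column name and the data are hypothetical stand-ins rather than the telematics dataset produced in the case study.

```python
# Hedged sketch of a column-wise fidelity check between real and synthetic data.
import numpy as np
import pandas as pd
from scipy import stats

def fidelity_report(real: pd.DataFrame, synthetic: pd.DataFrame, columns):
    rows = []
    for col in columns:
        ks = stats.ks_2samp(real[col], synthetic[col])
        rows.append({
            "column": col,
            "real_mean": real[col].mean(),
            "synthetic_mean": synthetic[col].mean(),
            "real_std": real[col].std(),
            "synthetic_std": synthetic[col].std(),
            "ks_statistic": ks.statistic,   # smaller means closer distributions
            "ks_pvalue": ks.pvalue,
        })
    return pd.DataFrame(rows)

# Toy example with simulated "real" and "synthetic" trip speeds
rng = np.random.default_rng(3)
real = pd.DataFrame({"speed_kmh": rng.normal(45, 12, 5000)})
synthetic = pd.DataFrame({"speed_kmh": rng.normal(44, 14, 5000)})
print(fidelity_report(real, synthetic, ["speed_kmh"]))
```

A dataset that only approximately matches by such measures can still be serviceable for the educational and experimental uses emphasized here.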
### The Role of Synthetic Data in Real-World Scenario Modelling Synthetic data is invaluable when modeling various scenarios. For instance, in telematics, which includes data collection about a vehicle's usage, location, and driver behavior, acquiring real-world data for every possible driving scenario can be arduous, if not impossible. Synthetic data can simulate a broad spectrum of conditions, from normal to extreme, enabling comprehensive testing and training of telematics systems, contributing to the evolution of safer, more efficient vehicle technologies. ### Synthetic Data: An Ally for Data Privacy In our digital era, data privacy is of paramount concern. Synthetic data emerges as a privacy champion, allowing organizations to draw insights from data without disclosing sensitive or confidential information. For instance, in telematics, the data collected often includes sensitive data, such as precise location data and potentially personally identifiable information (PII). By generating synthetic datasets that maintain the statistical properties of the original without including any actual PII, privacy can be preserved during in-depth analyses, predictive model development, or data sharing with third parties. It's also worth noting that in scenarios where achieving a synthetic dataset that is statistically identical to the original proves unfeasible, a dataset that is statistically close enough can still serve effectively for educational and experimental applications. This slightly less precise alternative yet maintains essential characteristics for exploration and understanding, offering a robust solution for privacy-preserving data utilization. ### Synthetic Data and its Impact on Scalability and Flexibility Algorithmically generated synthetic data provides scalability and flexibility, allowing organizations to produce as much (or as little) data as necessary for their distinct applications. This aspect is crucial when stress-testing systems or training intricate machine learning models. The versatility of synthetic data enables its adaptation to represent specific scenarios, populations, or conditions, thereby enabling more targeted and precise analyses and predictions. It is noteworthy to add that the specific nature of data and its level of congruence with the original source will vary from one use case to another. This reflects the bespoke nature of synthetic data generation, making it a vital tool for a wide range of scenarios. The unprecedented ease in creating synthetic data today necessitates further scholarly exploration of its various potential applications. Hence, the need for continued research to decipher the maximum potential of synthetic data in all possible use cases remains integral to advancing our knowledge in this rapidly evolving field. ### Addressing Data Imbalance with Synthetic Data Imbalanced data poses a significant challenge in machine learning, often leading to models biased towards overrepresented classes. Synthetic data offers a resolution by generating additional data for the underrepresented classes, thus creating a more balanced dataset that improves learning and facilitates fairer predictions. For instance, in telematics, certain infrequent driving behaviors can be artificially generated, producing a more balanced dataset for model training and leading to advancements in driver safety and vehicle performance. 
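As a concrete illustration of this rebalancing idea, the sketch below oversamples an underrepresented "harsh_braking" class by resampling its rows and adding small Gaussian jitter to the numeric columns. It is a simplified, hypothetical example (column names, labels, and values are invented); in practice a dedicated technique such as SMOTE would usually be preferred.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Hypothetical, imbalanced telematics sample: few harsh-braking events.
df = pd.DataFrame({
    "avg_speed_kmh": rng.normal(45, 10, size=1_000),
    "decel_ms2": rng.normal(2.0, 0.5, size=1_000),
    "event": ["normal"] * 950 + ["harsh_braking"] * 50,
})

minority = df[df["event"] == "harsh_braking"]
n_needed = (df["event"] == "normal").sum() - len(minority)

# Resample minority rows with replacement and jitter the numeric columns slightly.
extra = minority.sample(n=n_needed, replace=True, random_state=0).copy()
for col in ["avg_speed_kmh", "decel_ms2"]:
    extra[col] += rng.normal(0, 0.05 * df[col].std(), size=len(extra))

balanced = pd.concat([df, extra], ignore_index=True)
print(balanced["event"].value_counts())
```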
Building upon this notion, not just the creation, but also the augmentation of synthetic data using large language models (LLMs) could emerge as one of the most crucial use cases. When a sample of original data is available, it is feasible to map its distributions onto the synthetic data, which enhances the authenticity and richness of the synthetic dataset. This specific application, which encompasses both creation and augmentation of synthetic data with LLMs, represents a crucial frontier in machine learning. Given its immense potential to reshape our approach to data imbalances, it justifiably necessitates extensive, focused research. ## 5 The Inherent Challenges of Utilizing Synthetic Data ### Limitations in the Complexity and Variability of Synthetic Data While synthetic data provides a multitude of possible benefits, it is not without its inherent challenges. A primary concern is the potential lack of full complexity and variability that characterizes real-world data. Being algorithmically generated, synthetic data might fail to fully encapsulate the intricate details, correlations, and unpredictability found in its real-world counterparts. This shortfall may result in models that perform well with synthetic data but do not generalize effectively to real-world scenarios. To illustrate this point, consider telematics. A synthetic dataset might simulate a range of driving conditions, but it might not account for subtle, influential factors such as driver fatigue, distractions, or even the nuanced effects of billboard advertisements along a route. These omissions could significantly impact the performance of predictive models when applied to real-world data. ### The Risk of Inherited Biases in Synthetic Data Another inherent challenge of synthetic data is its potential to propagate the biases embedded in the model used for its generation. If the model creating the synthetic data contains biases, whether from the original training data or algorithmic bias, these biases can be transferred to the synthetic data, leading to skewed results and potentially biased conclusions. For instance, suppose a model generating synthetic telematics data was primarily trained on urban driving conditions. In that case, it might inadvertently over-represent these conditions in the synthetic data, leading to an under-representation of rural or off-road driving scenarios. Such biases could potentially skew the conclusions drawn from the analysis or predictions based on this data, thereby affecting the validity of decisions made using these insights. ### Overfitting Risk Associated with Synthetic Data The exclusive use of synthetic data for model training can lead to overfitting--a situation where a model exhibits high performance on the training data but struggles to generalize to new, unseen data. This issue can arise if the synthetic data does not adequately represent the variability inherent in real-world scenarios. In the domain of telematics, consider a model trained on synthetic data to predict engine failure based on various parameters, such as driving style, vehicle load, and environmental conditions. If the synthetic data does not capture the full spectrum of real-world variations and complexities, the model might become excessively specialized to the synthetic dataset, leading to overfitting. 
Consequently, the model's predictive performance may decline when applied to real-world data ### Addressing the Challenges of Synthetic Data The objections raised towards synthetic data, although reasonable, are not insurmountable. With strategic advancements and diligent application, these concerns can be reduced. The challenge of fully capturing the complexity and variability of real-world data can be approached by refining the data generation algorithms and including a wider set of influencing factors and conditions. In the context of telematics, this could mean extending the generation process to factor in additional elements such as driver fatigue or distractions. While this introduces an added layer of complexity, it holds the potential for enhancing the practicality and precision of synthetic data. The potential risk of inherited biases in synthetic data can be mitigated through balanced and careful training of the generative models. This involves incorporating diverse and representative datasets during the training phase to prevent over- or under-representation of specific conditions or classes. Continuous monitoring and correction of any observable bias in the generated synthetic data through iterative improvements in the data generation process is also essential. As for the risk of overfitting associated with synthetic data, a mixed approach utilizing both real and synthetic data for model training could be adopted. This hybrid strategy could leverage the benefits of synthetic data while preserving the capacity to generalize to real-world conditions. Techniques such as regularization, cross-validation, and ensemble methods can also be deployed to counter overfitting. For instance, in the telematics example, a blend of real-world data and synthetic data encompassing various driving conditions could lead to a more robust and generalizable model. To summarize, while synthetic data indeed presents challenges, these are not insurmountable barriers. It's essential to underscore that while all these concerns are valid and warrant attention, the scope of this paper is primarily to initiate the conversation around generating a telematics dataset using ChatGPT. With careful application, continuous improvement, and strategic advancements, synthetic data can prove an invaluable tool in data science, particularly in data-sensitive fields, and further research in this area is merited ## 6 An Examination of ChatGPT's Role in Synthetic Dataset Generation ### Leveraging ChatGPT in Language Modelling for Synthetic Data Generation ChatGPT, a product of OpenAI, represents an advanced language model utilizing machine learning to generate text reminiscent of human discourse. Its potential extends beyond mere text generation, enabling the creation of structured data in response to specific prompts. Leveraging the capabilities of ChatGPT, this paper proposes a practical demonstration involving the generation of a synthetic dataset. The focus will be on crafting a rudimentary telematics dataset. This dataset, while simplified, is intended to serve as an educational resource, encapsulating key data points inherent in telematics analytics. This study emphasizes the creation of a synthetic dataset with composition limited to randomly generated values. While this approach may not replicate the direct correspondence found in real-world data, it serves as a highly conducive environment for exploratory research, analytical processing, and in-depth learning. 
In fact, the apparent simplicity of this randomly generated data allows for clearer understanding and manipulation, providing an ideal introductory tool in the realm of telematics data. Throughout the process of generating and exploring this synthetic dataset, it is critical to bear in mind the primary objective is not an exact emulation of real-world telematics data. Instead, the focus lies in providing an accessible gateway for individuals aiming to develop a foundational understanding of such complex data. Thus, the value of this synthetic dataset lies not in its replication of real-world data, but in its ability to facilitate learning, discovery, and comprehension of telematics data structure and interpretation. ### Methodological Considerations: Utilizing ChatGPT for Synthetic Dataset Creation In the generation of synthetic datasets using ChatGPT, the initial stage involves the delineation of the data's anticipated format and structure. When contemplating a scenario involving tabular traffic pattern data, for instance, the structure might comprise elements such as date, time, location, traffic volume, and weather conditions. Following this, the process segues into the formulation of a sequence of prompts which serve to guide ChatGPT about the character of the data intended for generation. Armed with these well-structured prompts, ChatGPT stands primed to assist effectively in the orchestration of a synthetic telematics dataset. ### Extending Synthetic Data Applications Beyond Demonstrative Purposes Engaging with the process of synthetic data creation entails acknowledging its extensive potential that extends beyond mere research and demonstration. In scenarios where synthetic data are expected to integrate into production or operational environments, the process is not restricted to the act of data generation alone. A vital part of the process is the incorporation of a robust validation and testing phase, essential for validating the quality of the synthetic data and evaluating its suitability for its intended function. This crucial phase could involve statistical comparisons with real data and the employment of synthetic data within various models. Though this paper primarily concentrates on delving into potential applications, rather than preparing for real-world operational use, the importance of a comprehensive validation process should not be diminished for those intending to broaden their exploration of synthetic data. This pivotal step facilitates the unlocking of a synthetic dataset's full potential, thus charting the path towards robust, privacy-conscious, and holistic data solutions. ## 7 An Exploration of Synthetic Telematics Data Generation using ChatGPT ### Introduction to Telematics Telematics, the synergistic blend of telecommunications and informatics, plays a pivotal role in today's data-driven era, specifically within the realms of vehicular operation and urban mobility. At its core, telematics is concerned with the collection, processing, and transmission of information pertaining to remote entities, predominantly vehicles, facilitated by telecommunication devices. Consequently, the breadth of telematics data encapsulates numerous informational parameters, including a vehicle's precise location, speed, idle time, instances of harsh acceleration or abrupt braking, fuel consumption, along with other intricate diagnostic data. This wealth of information is typically harvested via a combination of GPS technology, onboard diagnostics, and sensory devices. 
The richness and scope of telematics data make it a compelling field for the application of synthetic data generation. ### The Influence of Telematics Data Telematics data holds the power to deliver transformative insights across diverse sectors, with particular prominence in urban planning and traffic management. A comprehensive understanding of traffic patterns, derived from this data, is an indispensable asset for managing road infrastructure planning and traffic congestion. This knowledge facilitates the pinpointing of peak congestion times and zones, the formulation of strategies for optimizing traffic flow, the planning of enhanced public transportation routes, and the overall improvement of the commuter experience. With the growing ubiquity of Internet of Things (IoT) devices and the enhancement of connectivity, telematics data has emerged as a critical component in smart city initiatives. Real-time traffic data can be harnessed to steer emergency services towards quicker response times, or to inform eco-routing strategies aimed at curtailing carbon emissions. Therefore, the potential and impact of telematics data extend far beyond its conventional applications, leading the charge towards a smarter and more sustainable urban environment. ### Telematics Data in Research and Innovation Beyond immediate operational applications, telematics data also has immense potential for research and innovation. Researchers can leverage this data to predict future traffic trends, improve urban planning, and facilitate the development of autonomous vehicles. Given the growing concern for privacy and data security, synthetic telematics data, mirroring real-world data without compromising privacy, has become a significant research area. Synthetic datasets can aid researchers in bypassing privacy-related obstacles and uncovering new insights in urban mobility. ### Generation of Synthetic Telematics Data using ChatGPT The process of synthesizing a telematics dataset with ChatGPT involves prompt experimentation and iterative refinement, with gradual increases in prompt complexity. The initial step starts with a simple open-ended prompt such as, "Can you create a telematics dataset for me?" This step provides an insight into ChatGPT's interpretation and structuring of telematics data. This initial, basic prompt serves as our first insight into how ChatGPT understands and structures telematics data. It forms the foundation upon which we build, enhancing our prompts iteratively for better and more specific outputs. The beauty of this approach lies in its progressive nature, reminiscent of a teaching or training process where we start with the basics and gradually introduce more complex elements. From our first prompt, "Can you create a telematics dataset for me?", we glean several key observations. Firstly, ChatGPT demonstrates an understanding of what telematics data is, signifying that it is capable of generating data relevant to our use case. However, the output is quite short, consisting of only five rows, which indicates that we may need to provide clearer instructions about the size of the dataset we desire. Moreover, while the model generates telematics data, the output format is not readily usable. The data is not in a format, such as a CSV or Pandas' DataFrame, that is amenable to further analysis or processing. 
These crucial insights inform us that while ChatGPT understands the concept of a 'telematics dataset', it requires more specific instructions to generate the type and format of data we're targeting. Consequently, these revelations guide us as we craft more complex prompts in our subsequent iterations.

Figure 1: Initial output from ChatGPT in response to the open-ended prompt: 'Can you create a telematics dataset for me?' - illustrating the AI's interpretation and structuring of telematics data.

Moving forward in our data generation journey, our second prompt brings more specificity to the table, directly addressing some limitations we noticed with our initial, more open-ended prompt. Our refined prompt reads, "Can you create a telematics dataset for me that has 100,000 rows and is exportable into CSV format or a Pandas DataFrame?" This expanded prompt offers a few key enhancements:

1. **Dataset Size:** Explicitly requesting 100,000 rows sets a clear expectation for the scale of the dataset, influencing potential data generation methods and potential optimizations for handling larger datasets.
2. **Format:** The prompt's specification for an exportable CSV format or a Pandas DataFrame offers clear guidance on the dataset's structure.

Figure 2: ChatGPT's response to the refined prompt: 'Can you create a telematics dataset for me that has 100,000 rows and is exportable into CSV format or a Pandas DataFrame?' - showcasing the impact of increased specificity on synthetic data generation.

The second attempt underscores the value of providing explicit instructions in the prompts. Precise requirements can minimize the likelihood of needing multiple iterations and can enhance the chances of producing a synthetic dataset aligned with research needs from the outset. Although the second prompt exhibits marked improvement over the initial attempt, the refinement process is iterative, offering continual opportunities for enhancement. The objective is not to produce a flawless synthetic telematics dataset at the first instance, but to understand the capabilities and limitations of the ChatGPT model more deeply, and to learn how to steer it towards producing the needed data. With this insight, the focus shifts to the creation of the third prompt. Building upon the improvements achieved so far, the objective becomes to strike an optimal balance between specificity and data diversity. This approach aims to more effectively guide ChatGPT in the generation of a synthetic dataset that is both usable and better aligned with the requirements. Venturing further into the exploration of synthetic dataset creation, the third prompt incorporates an additional layer of complexity: context. This step involves asking ChatGPT to envision a real-life scenario, providing a more tangible backdrop for the data generation process. The updated prompt is: "Imagine you are a city planner in Columbus, Ohio. You have been tasked with studying the traffic patterns around the city. The goal is to help people who have never worked with telematics data have a synthetic dataset they could practice on. Can you create a telematics dataset that has 100,000 rows and is exportable into CSV format or a Pandas DataFrame? I would like it to have the following columns: ['driver id', 'timestamp of trip start time',
'timestamp of trip end time', 'longitude of trip start', 'latitude of trip start', 'longitude of the trip end', 'latitude of the trip end', 'average speed', 'day of the week','vehicle type', 'heading', 'road type', 'weather conditions'], as well as any other columns you think would be useful." Introducing not only a contextual backdrop to guide data generation, but also explicitly requesting specific columns previously unmentioned, adds further depth to the synthetic data generation. This third prompt places ChatGPT in the shoes of a city planner in Columbus, Ohio, intending to generate a synthetic telematics dataset. Simultaneously, it encourages the model to draw from its vast training data and suggest additional variables that might enhance analysis, by inviting it to introduce any other columns it deems useful. This example underscores the iterative nature of refining AI requests. By providing a context, explicitly defining the size and format of the dataset, and leaving room for the model's creativity, it edges closer to a synthetic dataset that meets research and learning requirements. The outcome and insights from this third endeavor provide the next topic of exploration. ### Dealing with Challenges and Debugging Diving deeper into the complexity of prompts naturally results in more elaborate outputs, and consequently, the likelihood of encountering issues elevates. Nonetheless, it's essential to view these problems not as setbacks but integral parts of the process. They present opportunities to better understand the workings of ChatGPT and, importantly, how to leverage its abilities to address the issues that surface. During the third iteration, the first coding issue emerged. The code produced by ChatGPT did not execute as anticipated and returned an error. This occurrence is not uncommon when working with AI models such as ChatGPT. By escalating the complexity of the task, the likelihood of encountering an issue proportionately increases. However, this is precisely where the advantage of using AI becomes apparent. Instead of manually debugging the issue, ChatGPT was utilized to identify the problem and generate a solution. Simply pasting the error back into ChatGPT elucidated the nature of the error and yielded rectified code that executed without glitches. This real-world scenario illustrates the adaptability and power of ChatGPT. Its capability to understand and correct its own errors further emphasizes its potential in generating increasingly complex synthetic datasets. ## 8 A Methodical Approach to Evaluating the Quality of a Synthetic Dataset Evaluating the quality of a synthetic dataset calls for a meticulous, systematic approach, especially when no real-world comparison dataset is available. Using the created synthetic telematics dataset as an example, the following stages are suggested to effectively assess its quality. ### Diversity Examination The dataset should exhibit a reasonable range across all variables. For example, a dataset in which all trips are of the same length, or where a single type of weather condition prevails, may not provide insightful results. A prompt for generating Python code to examine the data distribution might be as follows: "Show how to generate summary statistics for each column in a Pandas DataFrame for understanding the data distributions within a telematics dataset." Keep in mind that the name of the CSV may require modification to align with the one used during dataset creation. 
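To make these steps tangible, here is a minimal, self-contained sketch of the kind of script such prompting might converge on. It fabricates a simplified Columbus-flavoured telematics table (a reduced subset of the columns requested in the third prompt, with hypothetical names and value ranges), exports it to CSV, and applies the diversity check suggested by the prompt above together with the coherence check discussed in the next subsection. This is not ChatGPT's actual output; it only illustrates the shape of such a script.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
n_rows = 100_000

# Random trip start times over one year and gamma-distributed trip durations (minutes).
start = pd.Timestamp("2023-01-01") + pd.to_timedelta(rng.integers(0, 365 * 24 * 3600, n_rows), unit="s")
duration_min = rng.gamma(shape=2.0, scale=15.0, size=n_rows)

df = pd.DataFrame({
    "driver_id": rng.integers(1, 5_000, n_rows),
    "trip_start": start,
    "trip_end": start + pd.to_timedelta(duration_min, unit="m"),
    "start_lon": rng.uniform(-83.2, -82.8, n_rows),   # rough Columbus, OH bounds (assumed)
    "start_lat": rng.uniform(39.85, 40.15, n_rows),
    "avg_speed_kmh": rng.normal(45, 12, n_rows).clip(5, 110),
    "day_of_week": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], n_rows),
    "vehicle_type": rng.choice(["sedan", "suv", "truck", "van"], n_rows),
    "weather": rng.choice(["clear", "rain", "snow", "fog"], n_rows, p=[0.6, 0.25, 0.1, 0.05]),
})
df.to_csv("synthetic_telematics.csv", index=False)

# Diversity check: summary statistics for every column (cf. the prompt above).
print(df.describe(include="all").transpose())

# Coherence check (described below): trip start must precede trip end.
assert (df["trip_end"] > df["trip_start"]).all(), "found trips ending before they start"
```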
Figure 3: Snapshot of the synthetic telematics dataset as viewed in a Pandas DataFrame.

### Relevance Evaluation

Data should align with the real-world context it is intended to reflect. Therefore, it becomes necessary to ensure that vehicle speeds fall within city-appropriate limits and that weather conditions are congruent with the climate typical to Columbus, Ohio. A Python code prompt for checking such relevance could be: "Provide a method to visualize the distribution of average speeds in a synthetic telematics dataset. Could you generate a code snippet for a histogram?" The output following this relevance examination is provided in the subsequent section.

### Coherence Verification

Synthetic data should adhere to certain logical principles. For instance, start times for trips must invariably precede the corresponding end times, and vehicular headings should remain plausible within the urban framework. A prompt to generate Python code for verifying this consistency could be phrased as follows: "What is the procedure to scrutinize timestamp data within a synthetic telematics dataset? Could a Python code snippet be crafted to ascertain if trip initiation and termination times align logically?"

### Optimizing Synthetic Dataset Assessment

These meticulous steps facilitate a thorough assessment of the synthetic dataset's applicability and credibility, even when a real-world comparative dataset is not available. It's crucial to emphasize that the process of generating a synthetic dataset is iterative. Each evaluation serves to enhance understanding of the dataset's strong points and areas requiring improvement, thereby fostering the generation of progressively superior synthetic datasets.

Figure 4: Output of ChatGPT when queried for generating a Python code snippet to visualize the distribution of average speeds in the synthetic telematics dataset.

Figure 5: Histogram of average vehicle speeds obtained from the synthetic telematics dataset. This visualization is the result of running the Python code provided by ChatGPT, demonstrating the application of ChatGPT's generated code in analyzing synthetic data.

## 9 Conclusion

The process of synthetic dataset creation, as outlined in this paper, involves multiple iterations and is heavily reliant on providing precise and comprehensive prompts to guide the AI model, in this instance, ChatGPT. The capacity to generate synthetic data presents transformative opportunities across various sectors, including telematics, where privacy concerns and data scarcity can obstruct progress and innovation. This exploratory procedure highlighted the advantages of synthetic dataset creation, with emphasis on their potential to preserve data privacy, augment real-world data, and provide controlled environments for experimental purposes. Simultaneously, light was shed on potential difficulties, such as ensuring the quality and representativeness of synthetic data and the necessity for meticulous, iterative prompt development. A practical demonstration navigated the generation of a synthetic telematics dataset for Columbus, Ohio. Beginning with a basic prompt, the complexity was gradually increased, with each stage providing learning opportunities for refining the prompts to achieve the desired output. This step-by-step approach led to the creation of a dataset with 100,000 rows, demonstrating ChatGPT's capability to handle complex tasks.
An assessment of the synthetic dataset's quality was also conducted, focusing on the uniqueness, diversity, and relevance of the data points. Descriptive statistics and visualizations were utilized to explore data characteristics and the interrelations among variables. Without a real-world comparative dataset, the importance of applying multiple testing techniques and verifying the logical coherence and relevance of the synthetic data was acknowledged. In conclusion, synthetic datasets generated by models like ChatGPT indeed hold significant potential. However, it is important to remain objective, acknowledging that they are not perfect replacements for real-world data and have inherent limitations. Nonetheless, their potential applications in sectors such as telematics are wide-ranging, providing new avenues for research and experimentation. With careful crafting of prompts and stringent evaluation of outputs, AI-generated synthetic data can become a powerful instrument in a data scientist's toolkit.
2301.09927
Predissociation dynamics of negative ion resonances of H$_2$ near 12 eV and 14.5 eV using velocity slice imaging technique
Dissociative electron attachment (DEA) is an important tool for investigating negative ion resonances. We have studied the negative ion resonances of H2 at 10 eV and 14 eV using the improved velocity slice imaging technique. We obtained modulations in the kinetic energy spectrum of H$^-$ ions obtained at 12 eV and 14.5 eV electron energy, consistent with the earlier reported vibrational state contributions from the higher lying bound resonances. We show that structures obtained at 12 eV are due to predissociation of the $C^2{\Sigma}_g^+$ resonance consistent with the current understanding. However, based on our angular distribution measurements, we propose that the structures obtained at 14.4 eV are due to predissociation of bound resonance of $^2{\Sigma}_g^+$ symmetry as against ${\delta}_g$ that was proposed earlier. We also report that the bound $^2{\Sigma}_g^+$ resonance contributes to the observed inversion symmetry breaking near 14 eV.
Akshay Kumar, Suvasis Swain, Jankee Upadhyay, Yogesh Upalekar, Rajesh Arya, Vaibhav S. Prabhudesai
2023-01-24T11:24:41Z
http://arxiv.org/abs/2301.09927v1
Predissociation dynamics of negative ion resonances of H\({}_{2}\) near 12 eV and 14.5 eV using velocity slice imaging technique ###### Abstract Dissociative electron attachment (DEA) is an important tool for investigating negative ion resonances. We have studied the negative ion resonances of H\({}_{2}\) at 10 eV and 14 eV using the improved velocity slice imaging technique. We obtained modulations in the kinetic energy spectrum of H\({}^{-}\) ions obtained at 12 eV and 14.5 eV electron energy, consistent with the earlier reported vibrational state contributions from the higher lying bound resonances. We show that structures obtained at 12 eV are due to predissociation of the \(C^{2}\Sigma_{g}^{+}\) resonance consistent with the current understanding. However, based on our angular distribution measurements, we propose that the structures obtained at 14.4 eV are due to predissociation of bound resonance of \({}^{2}\Sigma_{g}^{+}\) symmetry as against \(\Delta_{g}\) that was proposed earlier. We also report that the bound \({}^{2}\Sigma_{g}^{+}\) resonance contributes to the observed inversion symmetry breaking near 14 eV. ## 1 Introduction Dissociative electron attachment (DEA) is a useful probe to study negative ion states. In DEA, the electron is resonantly attached to a molecule to form temporary negative ion resonances (NIRs), which can dissociate into an anion and one or more neutral fragments. Although electron collisions have been studied for almost a century, many new facets of molecular NIRs are unraveling due to the invention of advanced experimental methods like momentum imaging [1]. For example, only recently, Krishnakumar _et al_. have provided reliable values of the experimentally measured absolute cross-section of DEA to the simplest molecules, H\({}_{2}\) and D\({}_{2}\), using the momentum imaging technique [2]. Here, velocity slice imaging (VSI), one of such techniques adopted for these measurements, helped to eliminate the contribution from the electronically excited metastable neutrals and ultraviolet light while ensuring the detection of all the ions. These cross-sections have provided a benchmark to test the theoretical tools invented [3, 4]. From a theoretical aspect, barring a few exceptions, molecular NIRs still need to be successfully modeled. Even for H\({}_{2}\), NIRs are not entirely understood. The observation of quantum coherence in the DEA process in H\({}_{2}\) and D\({}_{2}\) at 14 eV is another example of the new facets of NIRs unraveled by the VSI technique [5]. These results highlighted the role of the \({}^{2}\Sigma_{u}^{+}\) state at these energies, which has not been identified by any theoretical calculations. On the other hand, Swain _et al._ have shown that from 10.5 eV onwards, the B\({}^{2}\Sigma_{g}^{+}\) negative ion resonance state of H\({}_{2}\) lies below the parent b\({}^{3}\Sigma_{u}^{+}\) state [6], which is in contrast with the latest theoretical calculations [7]. H\({}_{2}\) is the simplest and ideal molecular system to benchmark the theoretical models that describe excited NIRs and their dissociation. Many studies have been carried out on this system in the last few decades [8, 9, 10]. DEA to H\({}_{2}\) is of fundamental importance. It leads to the formation of hydride ion that plays a significant role in the chemistry of the interstellar medium [11, 12, 13] and fusion plasmas [14, 15, 16]. The H\({}^{-}\) signal from H\({}_{2}\) in a low energy regime (\(<\)17 eV) comes from three resonant processes [2, 17]. 
The first resonant process occurs around 4 eV via the lowest attractive ground state X\({}^{2}\Sigma_{u}^{+}\) dissociating into H(\({}^{2}\)S) and H\({}^{-}\)(\({}^{1}\)S). This is a threshold process that occurs from the dissociation limit of the ground anion state (3.75 eV). The second resonant process occurs in the 8 to 13 eV energy range and is associated with the repulsive B\({}^{2}\Sigma_{g}^{+}\) resonant state and leads to H(\({}^{2}\)S) and H\({}^{-}\)(\({}^{1}\)S) products with high kinetic energy. The third resonant process around 14 eV, which was earlier believed to be associated with only \({}^{2}\Sigma_{g}^{+}\) resonance [18], is found to be associated with the coherent superposition of two resonant states of \({}^{2}\Sigma_{g}^{+}\) and \({}^{2}\Sigma_{u}^{+}\) symmetry, which causes forward-backward asymmetry in the angular distribution [5]. The 14 eV peak leads to H\({}^{-}\)(1s\({}^{2}\)) and H(n = 2) dissociation products. As the dissociation limit for this process is 13.95 eV, the H\({}^{-}\) is formed with low kinetic energies, similar to that of the 4 eV channel. The second B\({}^{2}\Sigma_{g}^{+}\) resonance was studied earlier. In the high electron energy resolution experiment, Dowel and Sharp [19] observed structures in H\({}^{-}\) signal intensity as a function of electron energy in the 11.3 eV-13.3 eV range. Tronc _et al._[20] have also reproduced these structures. These structures, periodic in electron energy, were in good agreement with the positions of the vibrational levels of resonance series "A" (as per the notation of Schulz _et al_. [21]) observed in the electron ejection channel [22, 23, 24]. This agreement suggests that structures are due to \(\mathrm{C}^{2}\Sigma_{g}^{+}\big{(}1s\sigma_{g}\big{)}^{1}(2p\sigma_{u})^{2}\) resonant state. The structures can be understood due to the predissociation of the vibrational levels of attractive \(\mathrm{C}^{2}\Sigma_{g}^{+}\) via the repulsive B\({}^{2}\Sigma_{g}^{+}\) state. Swain _et al_. [6] have observed the effect of \(\mathrm{C}^{2}\Sigma_{g}^{+}\) resonance in the angular distribution of H\({}^{-}\) from H\({}_{2}\) in the 8-13 eV region. However, due to the poor energy resolution of the system, they were unable to get direct evidence of these structures in the ion yield spectrum. For the 14 eV peak in the DEA yield, it has been shown that the electron attachment to the ground state (\(X^{1}\Sigma_{g}^{+}\)) of H\({}_{2}\) leads to the formation of a coherent superposition of \({}^{2}\Sigma_{g}^{+}\) and a \({}^{2}\Sigma_{u}^{+}\) resonant states [5]. It involves the simultaneous transfer of \(s\) and \(p\) partial waves from the attaching free electron breaking the inversion symmetry in the system. This symmetry breaking leads to forward-backward asymmetry in angular distribution. However, in the earlier measurements by Tronc _et al_. [25] with higher electron energy resolution, modulations similar to the 10 eV peak were observed in the H\({}^{-}\) ion-yield curve as a function of electron energy [25]. In electron transmission experiments, structures at these energies were identified as "\(band\)_f_" and were observed by Weingartshofer [26], Golden [27], and Sanchez and Schulz [24]. These authors assigned the \({}^{2}\Sigma_{g}^{+}\) symmetry for the '\(f\)' band. However, Tronc \(et\)\(al.\)[25] have assigned \(\Delta_{g}\) symmetry for corresponding resonance and proposed that these structures are due to the interaction of \(\Sigma_{g}\) and \(\Delta_{g}\) states through rotations. 
In this context, it is worthwhile to investigate the features observed at 14 eV using the improved VSI technique, which can show the contribution from the vibrational states of the bound resonance. The vibrational structures observed in both the 10 eV and 14 eV DEA peaks were obtained using a high-resolution electron beam in the DEA cross-section measurements [20, 25]. So far, they have not been reported in the momentum imaging measurements due to the limited electron energy and ion momentum imaging resolution of the apparatus used. In this work, we report these vibrational structures with the improved resolution of the momentum imaging setup for both the 10 eV and 14 eV peaks in DEA measurements. Furthermore, using this improved spectrometer, we have obtained details of the symmetry of the involved resonances.

## 2 Experimental setup

A magnetically collimated electron beam generated by thermionic emission from the heated tungsten filament was crossed with the effusive molecular beam produced using a capillary array. The electron gun was operated in the pulsed mode (width 100 ns and repetition rate 3000 Hz). The electrons were collected by the Faraday cup situated co-axially at the other end of the interaction zone of the VSI spectrometer. The electron beam was collimated using a magnetic field of 50 Gauss generated using a pair of coils mounted in the Helmholtz geometry outside the vacuum chamber. The interaction volume, spanned by the overlap of the electron and the molecular beams, was situated at the center of the interaction region of the VSI spectrometer. The interaction region of the spectrometer was flanked by the pusher and puller electrodes separated by 20 mm. The puller electrode has a central aperture of 30 mm lined with a molybdenum wire mesh of 64% transmission. The negative ions formed from the electron-molecule interaction were extracted into the lens region using a delayed pulsed extraction field. A square voltage pulse of -60 V amplitude and 1 \(\upmu\)s duration was used as the extraction pulse on the pusher electrode, which was delayed by 90 ns with respect to the electron pulse. The energy resolution of the electron gun was about 1 eV. The chamber was pumped by oil-free pumps to a base vacuum of \(1\times 10^{-8}\) Torr. During the experiments, the base pressure of H\({}_{2}\) gas was kept at \(3\times 10^{-6}\) Torr, and the electron current was 0.36 nA. The energy calibration of the electron beam was carried out by observing the 14 eV peak of H\({}^{-}\) from H\({}_{2}\). The generated ions were velocity focused via a 4-lens assembly [28]. This 4-lens assembly allowed the momentum images to be zoomed for low-energy ions to improve the imaging resolution. The ions were detected by a 2D position-sensitive detector (PSD) mounted at the end of the flight tube. The detector was made of two 75 mm diameter active area microchannel plates (MCP) mounted in the Chevron configuration, followed by a phosphor screen. The images formed on the phosphor screen were recorded by a charge-coupled device (CCD) camera. The VSIs obtained were then analyzed after adding several such slices in the offline analysis. The detector was kept active only when the central slice of the Newton sphere arrived at it. The appropriate delay of the central slice was obtained by shifting the detector activation window with respect to the pusher pulse and by obtaining the image with the maximum radius.
The biasing of the detector was obtained using the Behlke switch having a pulse duration of 10 ns, synchronized with the electron pulse with a suitable delay with respect to the pusher pulse. The Behlke switch-based pulse generator was specifically developed for the experimental setup. The requirement was to generate a high voltage (HV) pulse of 2.5 kV with a pulse duration of 10 ns and a switching time of better than 2 ns at an adjustable repetition rate of more than 1 kHz. There are several techniques for achieving HV fast switching [29-31] based on switches realized by using transistors, MOSFETs, Insulated Gate Bipolar Transistors (IGBTs), etc. In the present work, the pulse generator has been developed using a Behlke solid-state HV switch (HTS-50-08-UF). These solid-state switches are specially designed and developed to generate HV pulses of a short duration of 10 ns and a constant fast rise time better than 2 ns. These switches, being semiconductor devices, have a very low turn-on jitter of the order of 100 ps and a longer lifetime. These switches can provide galvanic isolation of more than 10 kV and thus can be floated at the required high potential and used as high-side switches for positive as well as negative voltages. The block diagram of the pulse generator is shown in Fig. 1. It has been realized by charging a 440 pF/6 kV capacitor to the desired voltage and discharging into a load resistance of matched impedance using HV switching in synchronization with an external trigger signal. The discharging current generates an HV pulse of half the applied biasing voltage with a duration of 10 ns, fixed for the selected Behlke switch. The pulse amplitude can be varied by varying the applied biasing voltage.

Fig. 1: Block diagram of the high-voltage pulse generator unit.

High-speed switching often leads to various difficulties including self-oscillation, self-re-triggering, ringing, etc. In order to overcome these difficulties, a printed circuit board (PCB) layout has been designed by minimizing the value of parasitic reactive components, matching the load with the output impedance, and implementing the star grounding scheme. EMI shielding of the pulser circuit has been achieved by a specifically designed and fabricated enclosure of size 220 mm X 110 mm X 185 mm using 3 mm MS Ni-Chromium plated sheet. For the 10 eV resonance, the threshold for H\({}^{-}\) formation is 3.75 eV. This implies that at 12 eV electron energy, the KE of the fragments would be about 4 eV. Due to their low mass and high kinetic energy, momentum imaging of H\({}^{-}\) from H\({}_{2}\) at 10 eV is particularly challenging. The magnetic field applied for electron collimation also adversely affects the H\({}^{-}\) imaging. Due to the Lorentz force, one side of the Newton sphere moves close to the edge of the lens electrode's aperture, which results in distortion of the momentum images. However, due to the cylindrical symmetry around the electron beam, we can use only the other half of the image that passes close to the spectrometer axis to obtain the characteristic dissociation dynamics. In addition, the presence of static gas background produces an extended source of ions that causes distortions in the images [4]. To remove these distorting features, static gas background subtraction was done by diverting the gas flow through another entrance and keeping the base pressure the same. The energy resolution of the momentum image is directly related to the annular width of the obtained image.
The annular width depends upon the velocity focusing conditions, time slicing width, and the electron's energy resolution width. To determine the focussing conditions, the potentials on various electrodes were estimated using SIMION simulations. The electrode voltages were further fine-tuned in the experiment to obtain the best achievable velocity-focusing condition. The small slicing width can reduce the contribution of the non-central part of the Newton sphere and thus can help in improving the energy resolution of the image. Poor electron energy resolution has an advantage for H\({}_{2}\). As a diatomic molecule, on dissociation from a particular resonance, excess energy above the threshold of the process would appear as the fragment's kinetic energy (KE). We can access the different parts of the ion yield curve over the energy spread in the electron beam by fixing the mean electron's energy at a particular value. We used SIMION to obtain the expected energy resolution of the momentum image for a specific set of the initial kinetic energies and with the other conditions matching the experimental scheme. The effect of the thermal motion of particles from an effusive jet having an aspect ratio of 10 at room temperature, 1 cm from the end of the capillary array, and dissociating in random directions in the interaction region, was estimated. It was found that the thermal motion was adding to the energy width by about 25 meV. This spread of energy was incorporated into the SIMION simulations. The ions were flown, and the Newton sphere was time sliced in a separate analysis. The annular width of the obtained image is used to determine the corresponding momentum resolution of the imaging condition. Due to the presence of the magnetic field, the image loses the cylindrical symmetry about the spectrometer axis. As a result, energy resolution was found to be angle-dependent. This effect is more pronounced for high KE ions. For 4 eV, H\({}^{-}\) ions energy resolution is found to be 80 meV near 30\({}^{\lx@math@degree}\)-60\({}^{\lx@math@degree}\) and around 90 meV around 40\({}^{\lx@math@degree}\)-70\({}^{\lx@math@degree}\) with respect to incident electron direction for 10ns time gating and about 300 meV for 80 ns time gating. To compare our data with that from Tronc _et al_. [20], we multiplied their reported cross-section data with the electron gun profile of our experiment and convoluted the result with an obtained energy resolution of the imaging spectrometer. Fig. 2 shows the importance of 10 ns time gating compared to 80 ns time gating. Despite poor electron energy resolution, we could observe the effect of the vibrational structures in KE distribution in the 10 ns slicing due to good imaging resolution. The VSI condition was modified to map the low-energy H\({}^{-}\) ions obtained at 14.4 eV with appropriate magnification. Under that lensing condition, SIMION simulations were carried out to determine the imaging resolution of the spectrometer. After a similar analysis as done for 4 eV H\({}^{-}\) ions, we found energy resolution for 0.3 eV H\({}^{-}\) ions to be around 60 meV in all angular ranges. To obtain the expected KE distribution, we multiplied the observed ion yield data of Tronc _et al_. [25] with the electron gun profile and convoluted the result with the 60 meV energy resolution of the imaging spectrometer. The obtained spectrum is shown in Fig. 2(c). 
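The comparison curves described above amount to smoothing a digitized ion-yield curve, first weighted by the electron-gun energy profile and then convoluted with the spectrometer's imaging resolution. The sketch below shows this numerically with Gaussian kernels; the FWHM values follow the text (1 eV gun profile, about 90 meV imaging resolution at 10 ns gating), but the cross-section array itself is a placeholder, not the digitized data of ref. [20] or [25].

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def fwhm_to_sigma(fwhm):
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# Electron-energy grid (eV) and a placeholder ion-yield curve standing in for digitized data.
energy = np.arange(10.0, 14.0, 0.01)
ion_yield = (np.exp(-0.5 * ((energy - 11.9) / 0.05) ** 2)
             + 0.5 * np.exp(-0.5 * ((energy - 11.3) / 0.05) ** 2))

step = energy[1] - energy[0]

# Electron-gun profile of ~1 eV FWHM centred at 12 eV, applied as a multiplicative weight.
gun_profile = np.exp(-0.5 * ((energy - 12.0) / fwhm_to_sigma(1.0)) ** 2)
weighted = ion_yield * gun_profile

# Imaging resolution of ~90 meV FWHM, applied as a convolution (sigma given in samples).
expected_curve = gaussian_filter1d(weighted, sigma=fwhm_to_sigma(0.09) / step)

# 'expected_curve' plays the role of the solid red curves of Fig. 2, up to the threshold
# shift that converts electron energy into fragment kinetic energy.
```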
For a blob, counts near 0 eV are artificially enhanced due to the geometry factor arising from the finite width of the slice, which gives an intrinsic bias towards the low-energy ions. The simulated KE distribution obtained after multiplying the data from Tronc _et al_. near 14 eV [25] with the electron gun profile needed to be multiplied by an additional weight factor that depends on the overall spread of the time of flight peak against the width of the slicing. After multiplying by the weight factor and convoluting it with the 60 meV imaging resolution, the modified KE distribution is shown in Fig. 2(d).

Fig. 2: Expected KE distribution (solid red line) of H\({}^{-}\) from H\({}_{2}\) around 10 eV for different time slicing conditions obtained after multiplying the ion yield data (solid blue line) from ref (20) with the electron gun profile centered around 12 eV and convoluting the product with (a) 300 meV imaging resolution obtained for 80 ns time gating and (b) 90 meV imaging resolution obtained for 10 ns time gating. Expected H\({}^{-}\) KE distribution (solid red line) from H\({}_{2}\) around 14 eV electron energy range obtained after multiplying the ion yield data (solid blue line) from ref (25) with the electron gun profile of 1 eV FWHM centered around 14.4 eV and convoluting the product with (c) 60 meV imaging resolution, and (d) after taking into account the weight factor (see the text).

## 3 Results and Discussion

For a homonuclear diatomic molecule, due to the presence of inversion symmetry in the system, electron capture proceeds through the transfer of either odd or even partial waves [32]. Under the axial recoil approximation, the angular preference during electron capture will be mapped in the angular distribution of dissociation products. The general form of the angular distribution [33] of the molecule is given by

\[I(k,\theta,\varphi)=\left|\sum_{l=m}^{\infty}A_{lm}(k)Y_{lm}(\theta,\varphi)\right|^{2} \tag{1}\]

where \(A_{lm}\) is the transition amplitude between the target state and the resonant state, \(l\) is the orbital angular momentum of the incident electron, and \(m\) is the difference between the electronic axial angular momenta of the target state and the resonant state. \(Y_{lm}(\theta,\varphi)\) are the corresponding spherical harmonics involved, the angle \(\theta\) represents the angle of ejection of the anion fragment with respect to the incident electron beam, and \(\varphi\) is the corresponding azimuthal angle.

### Predissociation near 12 eV

The experimentally obtained momentum image for the H\({}^{-}\) ions from H\({}_{2}\) at 12 eV after background subtraction is shown in Fig. 3(a). At 12 eV, the blob seen in the center of the image is due to the long energy tail of the electron beam, which produces negative ions from the 14 eV resonance. The observed shift of the blob away from the center is due to the effect of the magnetic field used for electron beam collimation [6]. This is consistent with the simulations carried out with the charged particle trajectory simulation program SIMION. The thin ring obtained is from the 4 eV ions formed from the DEA via the B\({}^{2}\Sigma_{g}^{+}\) resonance. The effect of thin slicing of 10 ns can be seen in the annular width of the ring as compared to the earlier 80 ns slicing [6]. Near 12 eV, the transition occurs from the X\({}^{1}\Sigma_{g}^{+}\) state of H\({}_{2}\) to the repulsive B\({}^{2}\Sigma_{g}^{+}\) and bound C\({}^{2}\Sigma_{g}^{+}\) states of H\({}_{2}^{-}\) [19].
According to the selection rule of the \(g\to g\) transition, only even partial waves are allowed. At low energy, the lower allowed partial waves have a dominant contribution to the capture. Therefore, at 12 eV, the contribution from only the \(s\) and \(d\) partial waves of the electron would suffice to describe the angular distribution. A similar analysis will also hold for the upper bound C\({}^{2}\Sigma_{g}^{+}\) resonance. The angular distribution from the corresponding transition can be expressed as

\[I(k,\theta)=\left|A_{00}(k)Y_{00}(\theta,\varphi)+A_{20}(k)Y_{20}(\theta,\varphi)e^{-i\delta}\right|^{2} \tag{2}\]

which on solving gives

\[I(k,\theta)=\frac{A_{00}^{2}(k)}{4\pi}+5\frac{A_{20}^{2}(k)}{16\pi}(3\cos^{2}\theta-1)^{2}+\sqrt{5}\frac{A_{00}(k)A_{20}(k)}{4\pi}(3\cos^{2}\theta-1)\cos(\delta) \tag{3}\]

Here \(A_{00}\) and \(A_{20}\) are the transition amplitudes corresponding to the attachment of the \(s\) and \(d\) waves of the incident electron, respectively, and \(\delta\) is the relative phase between the two partial waves. Using equation 3, the angular distribution obtained for the momentum image (Fig. 3(a)) is fitted. The corresponding fitted curve is shown by the solid red line in Fig. 3(b). Using the fitted parameters, the ratio of the transition amplitudes (\(A_{00}/A_{20}\)) is calculated and found to be \(1.9\pm 0.1\), which is consistent with earlier reported results [6]. The increased \(s/d\) ratio around 12 eV is explained by the additional contribution from the resonance of identical symmetry (C\({}^{2}\Sigma_{g}^{+}\)) [6].

Fig. 3: (a) Background subtracted momentum image of H\({}^{-}\) from H\({}_{2}\) obtained at 12 eV electron energy. The direction of the electron beam is from top to bottom, as indicated by the black arrow. (b) Angular distribution of the ions in the KE range of 3-5 eV; the corresponding fit of the angular distribution is also shown (solid red line). The angular distribution is normalized at \(90^{\circ}\).

To see the effects of the C\({}^{2}\Sigma_{g}^{+}\) resonance on the ion yield spectrum, the KE distribution of H\({}^{-}\) ions was plotted in the region where the contribution due to the \(d\) wave is minimum, i.e., around \(55^{\circ}\). The obtained KE distribution for the 12 eV images in the \(40^{\circ}-70^{\circ}\) angular region is shown in Fig. 4(a). The corresponding expected KE distribution from Fig. 2(b) is also shown as the solid red line. The presence of vibrational structure can be seen as shoulders at 3.8 eV and 4.2 eV, which correspond to peaks coming at 11.3 eV and 11.9 eV, respectively, in the ion yield curve reported by Tronc _et al_. [20]. Similar structures can be seen in the KE distribution plotted in the \(30^{\circ}-60^{\circ}\) angular region, for which the best energy resolution is expected (80 meV), as shown in Fig. 4(b).

Fig. 4: KE distribution of H\({}^{-}\) from H\({}_{2}\) obtained at 12 eV electron energy in the angular range of (a) \(40^{\circ}-70^{\circ}\) and (b) \(30^{\circ}-60^{\circ}\) using time slicing of 10 ns. The effect of thin-slicing can be seen in the vibrational structures in the ion yield spectrum. The ion yield curve obtained after multiplying the data from Tronc _et al_. [20] with the electron gun profile and convoluting it with the imaging resolution (see the text) is also shown as a solid red line. (c) KE distribution of O\({}^{-}\) from O\({}_{2}\) obtained around 6.5 eV electron energy around \(90^{\circ}\) using time slicing of 10 ns and the same spectrometer operating condition.
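As an illustration of how the \(s/d\) amplitude ratio can be extracted from such data, the sketch below fits equation (3) to a normalized angular distribution with `scipy.optimize.curve_fit`. The data array here is a synthetic placeholder, not the measured distribution of Fig. 3(b); it only shows the mechanics of the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def I_sd(theta, a00, a20, delta):
    """Equation (3): interference of s- and d-wave capture for a g -> g transition."""
    p2 = 3.0 * np.cos(theta) ** 2 - 1.0
    return (a00**2 / (4 * np.pi)
            + 5 * a20**2 / (16 * np.pi) * p2**2
            + np.sqrt(5) * a00 * a20 / (4 * np.pi) * p2 * np.cos(delta))

# Placeholder angular distribution (e.g. from the 3-5 eV KE band), normalized at 90 degrees.
theta = np.deg2rad(np.arange(20, 161, 10))
counts = np.array([1.9, 1.6, 1.2, 0.95, 0.9, 0.95, 1.0, 1.05, 1.0,
                   0.95, 0.9, 0.95, 1.2, 1.6, 1.9])
counts /= counts[np.argmin(np.abs(theta - np.pi / 2))]

popt, pcov = curve_fit(I_sd, theta, counts, p0=[2.0, 1.0, np.pi])
a00, a20, delta = popt
print(f"A00/A20 = {a00 / a20:.2f}, delta = {delta:.2f} rad")
```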
The counts before 3.5 eV were generated due to the remaining background contribution. To verify that these structures are not from any experimental artifact, we have also obtained the VSI image of O\({}^{-}\) from O\({}_{2}\) using 10 ns slicing. For O\({}_{2}\), we do not expect any structures in the kinetic energy spectrum. As shown in Fig. 4(c), the obtained KE spectrum of O\({}^{-}\) from O\({}_{2}\) is free from any structures and thus verifies that the structures observed in H\({}_{2}\) are not imaging artifacts.

### Predissociation near 14 eV

After obtaining the earlier reported vibrational structure in the KE distribution at 12 eV, we carried out similar measurements for the 14 eV peak. Since the KE of H\({}^{-}\) at 14.4 eV is \(<\) 0.7 eV, the corresponding VSI image is a blob, as shown in Fig. 5(a). The energy calibration is carried out from the 15 eV image. The corresponding KE distribution is shown in Fig. 5(b), along with the expected curve from Fig. 2(d). Earlier, the 14 eV resonance was believed to be associated with only \({}^{2}\Sigma_{g}^{+}\), but recently it was shown that the 14 eV resonance is associated with both \({}^{2}\Sigma_{g}^{+}\) and \({}^{2}\Sigma_{u}^{+}\) resonances. Thus, the structures observed around 14 eV can in principle come from the interaction of both resonances with another high-lying bound resonance. To check the involvement of the resonances, the KE distribution of H\({}^{-}\) for the 14.4 eV electron energy image was obtained around the \(90^{\circ}\) angular region with respect to the incoming electron beam direction and is plotted in Fig. 5(c). Since the contribution of \({}^{2}\Sigma_{u}^{+}\) is minimum around \(90^{\circ}\), the presence of vibrational structures around \(90^{\circ}\) shows that they are arising due to the interaction of \({}^{2}\Sigma_{g}^{+}\) with another bound resonance.

Figure 5: (a) Background subtracted momentum image of H\({}^{-}\) from H\({}_{2}\) obtained at 14.4 eV electron energy using the 10 ns time slice. The direction of the electron beam is from top to bottom. (b) Corresponding KE distribution obtained in the angular range of \(75^{\circ}-105^{\circ}\) (blue circles) and \(40^{\circ}-70^{\circ}\) (violet circles). The solid red line shows the expected KE distribution shown in Fig. 2(d). (c) The comparison of the angular distribution obtained from the image and normalized at \(90^{\circ}\) in the KE range of 0-0.15 eV (red circles), where we expect no vibrational structures, with the KE ranges of 0.15-0.3 eV (blue circles) and 0.3-0.45 eV (black circles), where we expect vibrational structures. The solid lines show the corresponding fits.

Tronc _et al._ [25] have also observed the structures around \(90^{\circ}\), and it was proposed that these structures are due to the interaction of the \({}^{2}\Sigma_{g}^{+}\) with the \({}^{2}\Delta_{g}\) state through rotational coupling. To determine the symmetry of the bound resonance, we have plotted the angular distribution in different KE ranges. The KE distribution has no structures for 0-0.15 eV, as shown in Fig. 5(b). The angular distribution in this region would be due to the capture of only the \(s\) and \(p\) partial waves of a free electron, coming from the transitions \(X^{1}\Sigma_{g}^{+}\rightarrow{}^{2}\Sigma_{g}^{+}\) and \(X^{1}\Sigma_{g}^{+}\rightarrow{}^{2}\Sigma_{u}^{+}\), respectively.
The expected angular distribution can be expressed [34] as

\[I(k,\theta)=\left|A_{00}(k)Y_{00}(\theta,\varphi)+A_{10}(k)Y_{10}(\theta,\varphi)e^{-i\delta}\right|^{2} \tag{4}\]

\[I(k,\theta)=A_{00}^{2}(k)+3A_{10}^{2}(k)\cos^{2}\theta+2\sqrt{3}A_{00}(k)A_{10}(k)\cos\theta\,\cos\delta \tag{5}\]

where \(A_{00}\) and \(A_{10}\) are the transition amplitudes corresponding to the attachment of the \(s\) wave and \(p\) wave of the incident electron, respectively, and \(\delta\) is the sum of the relative phase gained during the dissociation along the two paths and the initial relative phase between the \(s\) and \(p\) waves. The presence of the \(\cos\theta\) term induces a forward-backward asymmetry in the angular distribution.

Electron capture from the \(X^{1}\Sigma_{g}^{+}\) state of H\({}_{2}\) to a \(\Delta_{g}\) resonant state requires the capture of the \(d\) wave of a free electron. However, the angular distributions shown in Fig. 5(c) obtained for the 0.15-0.3 eV and 0.3-0.45 eV KE ranges are similar to that of the 0-0.15 eV KE range and could be fitted to equation (5). The fitted parameters obtained for the angular distributions in the different KE ranges are shown in Table 1. This indicates that there is no involvement of the \(d\) wave in the angular distribution; hence, electron capture is not taking place to a \(\Delta_{g}\) state. Moreover, the contribution of the \(d\) partial wave is minimum around 55\({}^{\circ}\) if the bound resonance is of \(\Delta_{g}\) symmetry; hence, we should not expect any structures in the KE distribution around 55\({}^{\circ}\). On the contrary, we have observed structures in the KE distribution around 55\({}^{\circ}\), in the form of shoulders in the 0.15-0.3 eV and 0.3-0.45 eV KE ranges, similar to the structures observed around 90\({}^{\circ}\), as shown in Fig. 5(b). The corresponding KE distribution is also compared with the simulated distribution (Fig. 1(d)), shown as the solid red line in Fig. 5(b). This eliminates the possibility of the higher-lying bound resonance being of \(\Delta_{g}\) symmetry.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**KE range (eV)** & \(A_{00}\) & \(A_{10}\) & \(\delta(rad)\) \\ \hline
0 – 0.15 & 0.94 \(\pm\) 0.07 & 0.42 \(\pm\) 0.08 & 4.34 \(\pm\) 0.10 \\ \hline
0.15 – 0.3 & 0.94 \(\pm\) 0.04 & 0.49 \(\pm\) 0.05 & 4.24 \(\pm\) 0.06 \\ \hline
0.3 – 0.45 & 0.93 \(\pm\) 0.08 & 0.54 \(\pm\) 0.06 & 4.36 \(\pm\) 0.06 \\ \hline
\end{tabular}
\end{table}
Table 1: Transition amplitudes and relative phase from the angular distributions of H\({}^{-}\) from H\({}_{2}\) at 14.5 eV electron energy in different KE ranges.

Sanchez and Schulz have also observed 14 eV structures in the electron scattering experiment and assigned \(\Sigma_{g}\) symmetry to the corresponding resonance. From these observations, we conclude that the vibrational structures observed in the ion yield of the 14 eV resonance arise from the interaction of the \({}^{2}\Sigma_{g}^{+}\) resonance with another high-lying bound resonance of \({}^{2}\Sigma_{g}^{+}\) symmetry. Using the energy values at which Tronc _et al_. [25] obtained the structures in the 14 eV region and comparing them with the vibrational states [35] of H\({}_{2}\), we propose that the high-lying bound \({}^{2}\Sigma_{g}^{+}\) resonance originates from the \(D^{\prime\,1}\Pi_{u}\) state of H\({}_{2}\), with dissociation limit H + H(n = 4), as the parent state [36].
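For completeness, the step from Eq. (4) to Eq. (5) follows from the explicit forms of the spherical harmonics, \(Y_{00}=1/\sqrt{4\pi}\) and \(Y_{10}=\sqrt{3/4\pi}\,\cos\theta\): expanding the squared modulus gives

\[\left|A_{00}Y_{00}+A_{10}Y_{10}e^{-i\delta}\right|^{2}=A_{00}^{2}Y_{00}^{2}+A_{10}^{2}Y_{10}^{2}+2A_{00}A_{10}Y_{00}Y_{10}\cos\delta=\frac{1}{4\pi}\left[A_{00}^{2}+3A_{10}^{2}\cos^{2}\theta+2\sqrt{3}A_{00}A_{10}\cos\theta\,\cos\delta\right],\]

and the overall factor of \(1/4\pi\) is immaterial once the distribution is normalized at \(90^{\circ}\), which recovers Eq. (5). The same expansion with \(Y_{20}\) in place of \(Y_{10}\) yields Eq. (3).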
To observe the effect of the bound \({}^{2}\Sigma_{g}^{+}\) resonance that causes the structures in the 14 eV region on the symmetry breaking, we have determined the asymmetry parameter [34] of the velocity slice image obtained at 14.4 eV in different KE ranges, as shown in Table 2. From earlier calculations [5], it was shown that if we consider only the \({}^{2}\Sigma_{g}^{+}\) and \({}^{2}\Sigma_{u}^{+}\) dissociating resonances, the energy-integrated asymmetry parameter decreases with electron energy beyond 14 eV. On the other hand, if the bound \({}^{2}\Sigma_{g}^{+}\) resonance contributes incoherently to the DEA process, the asymmetry parameter should decrease in the 0.15-0.45 eV KE range where there is a contribution from the bound \({}^{2}\Sigma_{g}^{+}\). But from Table 2, it can be seen that the involvement of the bound \({}^{2}\Sigma_{g}^{+}\) causes an increase in asymmetry. This shows that the high-lying \({}^{2}\Sigma_{g}^{+}\) resonance participates coherently in the symmetry-breaking process observed near 14 eV. It is also important to note that the next dissociation limit available for the DEA process, namely H(n=3) + H\({}^{-}\)(\({}^{1}\)S), is at 15.86 eV [37], which is just about at the edge of the energy spread of the electron beam at 14.4 eV. However, the measured cross-section at this energy is very small [2]. Consequently, this channel will have a negligible contribution to the signal in the KE range of 0-0.15 eV and will not influence the inference.

\begin{table}
\begin{tabular}{|c|c|} \hline
**KE range (eV)** & **Experimental values of \(\eta\)** \\ \hline
0 – 0.15 & -0.31 \(\pm\) 0.02 \\ \hline
0.15 – 0.3 & -0.38 \(\pm\) 0.02 \\ \hline
0.3 – 0.45 & -0.35 \(\pm\) 0.02 \\ \hline
\end{tabular}
\end{table}
Table 2: Comparison of the measured asymmetry parameter (\(\eta\)) in different KE ranges of H\({}^{-}\) from H\({}_{2}\) at 14.4 eV electron energy.

## 4 Conclusion

Using a VSI spectrometer with improved energy resolution due to 10 ns time slicing, we have observed vibrational structures at 12 eV and 14.4 eV electron energy in the kinetic energy distribution of H\({}^{-}\) ions from H\({}_{2}\). These structures are consistent with the earlier reports by Tronc _et al_. [20; 25]. At 12 eV, the structures are due to predissociation of the bound C\({}^{2}\Sigma_{g}^{+}\) resonance state via the repulsive B resonance. The main contribution to the repulsive resonance is from the transfer of the \(s\) and \(d\) partial waves to the target molecule. The bound resonance contributes predominantly via \(s\) wave capture. At 14.4 eV, we identify the structures in the KE distribution of the ions as due to the predissociation of a \({}^{2}\Sigma_{g}^{+}\) bound resonance, in contrast to the earlier proposed \({}^{2}\Delta_{g}\) resonance state. We infer this from the angular distributions obtained in various KE ranges and from the KE distributions obtained in various angular ranges, which rule out the contribution of the \(d\) partial wave. We propose that this bound resonance of \({}^{2}\Sigma_{g}^{+}\) symmetry may have the \(D^{\prime\,1}\Pi_{u}\) state of neutral H\({}_{2}\) as the parent state. We also propose that this upper bound resonance contributes to the breaking of the inversion symmetry, as its contribution should be added coherently to the resultant transition.

## Acknowledgement

SS, AK, and VSP acknowledge the financial support from the Department of Atomic Energy, India, under Project Identification No. RTI4002. All authors acknowledge Dr. S. V. Nakhe for support and Mr.
Sudhir Kumar for his help in the fabrication of the pulse generator.
2304.02074
Introduction to Pylog
PyLog is a minimal experimental proof assistant based on linearised natural deduction for intuitionistic and classical first-order logic extended with a comprehension operator. PyLog is interesting as a tool to be used in conjunction with other more complex proof assistants and formal mathematics projects (such as Coq and Coq-based projects). Proof assistants based on dependent type theory are at once very different and profoundly connected to the one employed by Pylog via the Curry-Howard correspondence. The Tactic system of Coq presents us with a top-down approach to proofs (finding a term inhabiting a given type via backtracking the rules, typability and type-inference being automated) whilst the classical approach of Pylog follows how mathematical proofs are usually written. Pylog should be further developed along the lines of Coq in particular through the introduction of many "micro-automatisations" and a nice IDE.
Clarence Lewis Protin
2023-04-04T18:55:02Z
http://arxiv.org/abs/2304.02074v2
# Introduction to PyLog

###### Abstract

PyLog is a minimal experimental proof assistant based on linearised natural deduction for intuitionistic and classical first-order logic extended with a comprehension operator. PyLog is interesting as a tool to be used in conjunction with other more complex proof assistants and formal mathematics projects (such as Coq and Coq-based projects). Proof assistants based on dependent type theory are at once very different and profoundly connected to the one employed by PyLog via the Curry-Howard correspondence. The Tactic system of Coq presents us with a top-down approach to proofs (finding a term inhabiting a given type via backtracking the rules, typability and type-inference being automated) whilst the classical approach of PyLog follows how mathematical proofs are usually written. PyLog should be further developed along the lines of Coq, in particular through the introduction of many "micro-automatisations" and a nice IDE.

## Introduction

As Voevodsky pointed out in a public lecture in Princeton, it is highly desirable to obtain a foundations of mathematics that will allow automatic verification of proofs. Standard approaches employ intuitionistic higher-order logic or various powerful dependent type theories that have the added bonus of the constructive computational information furnished by the Curry-Howard isomorphism. Notable examples are formal mathematics projects based on Coq and Agda and those based on Homotopy Type Theory. PyLog is a computationally and philosophically alternative approach which aims at fulfilling a number of desiderata:

1. The user environment must be easy, simple, intuitive and attractive to use for the logician or mathematician and be transparent to the programmer so as to easily facilitate access to data structures and algorithms for future development and applications.
2. The process of writing proofs (and the checking algorithm) should be agreeable and resemble structurally actual mathematical practice - or at least the template laid down by the Principia Mathematica (1910).
3. It should be first-order (with a weak "parametric" second-order extension) with all its type-free simplicity and versatility.
4. It should be easy to combine different formalised theories and to organise theorems into theories.
5. Formalised proofs can be easily checked either by a human or a machine.
6. The difficulty of formalising and checking a given theory should not exceed the mathematical difficulty of the theory involved.
7. Classical logic is to be seen as an extension of intuitionistic logic and the user is free to use the classical negation rule or not.

The key ingredients that I propose are:

* A linearised natural deduction for the predicate calculus with equality (and some weak second-order extension) extended with a Kelley-Morse style extension operator.
* It can easily formalise Kelley-Morse set theory but is not restricted to it.

### The Logic of PyLog

PyLog is based on the natural deduction presentation of first-order predicate logic with equality, endowed with rules for a class-forming operator \(\{x:P(x)\}\) and a primitive binary predicate \(\in\). PyLog also includes second-order variables allowing us to instantiate logical validities. The language of PyLog consists of finite sets of constants, (first-order) variables, second-order variables, function symbols of arities \(n>0\), predicate symbols of arities \(n>0\), and the special symbol \(\bot\).
Terms and formulas are defined by mutual recursion:

* A constant \(c\) is a term.
* A variable \(x\) is a term.
* If \(t_{1},...,t_{n}\) are terms and \(f\) is an \(n\)-ary function symbol then \(f(t_{1},...,t_{n})\) is a term.
* If \(A\) is a formula and \(x\) is a variable then \(\{x:A\}\) is a term (called an _extension_).
* If \(P\) is an \(n\)-ary predicate symbol and \(t_{1},...,t_{n}\) are terms then \(P(t_{1},...,t_{n})\) is a formula.
* If \(t\) and \(s\) are terms then \(t=s\) is a formula (this is a particular case of the last condition).
* If \(\mathfrak{A}\) is a second-order variable then it is a formula.
* If \(A\) and \(B\) are formulas then \(A\lor B\), \(A\ \&\ B\), \(A\to B\) are formulas.
* If \(x\) is a variable and \(A\) is a formula then \(\forall x.A\) and \(\exists x.A\) are formulas.
* \(\bot\) is a formula.

We define the set \(FV(e)\) of _free variables_ of an expression \(e\) (term or formula) as follows:

* \(FV(c)=\emptyset\)
* \(FV(x)=\{x\}\)
* \(FV(f(t_{1},...,t_{n}))=\bigcup_{i=1,...,n}FV(t_{i})\)
* \(FV(P(t_{1},...,t_{n}))=\bigcup_{i=1,...,n}FV(t_{i})\)
* \(FV(\bot)=\emptyset\)
* \(FV(\mathfrak{A})=\emptyset\)
* \(FV(A\lor B)\), \(FV(A\ \&\ B)\) and \(FV(A\to B)\) are equal to \(FV(A)\cup FV(B)\)
* \(FV(\forall x.A)\), \(FV(\exists x.A)\) and \(FV(\{x:A\})\) are equal to \(FV(A)\setminus\{x\}\)

As usual we consider expressions _modulo_ the renaming of quantified variables or variables within the scope of an extension: in any subexpression of the form \(\forall x.A\), \(\exists x.A\) or \(\{x:A\}\) we may rename \(x\) and all free occurrences of \(x\) in \(A\) to a fresh variable \(y\) as long as \(y\) does not occur within the scope of some quantifier \(\forall y\), \(\exists y\) or extension \(\{y:...\}\). When we write \(A[t/x]\) we assume that the bound variables of \(A\) have been renamed so as to be distinct from \(FV(t)\) (this is a slightly stronger condition than we actually need).

In PyLog proofs are always in the context of a _proof environment_. This consists of:

* A list of formulas called _axioms_
* A list of formulas called _assumed theorems_
* A list of _defining equations_ for constants or function symbols, of the form \(c=A\) or \(f(x_{1},...,x_{n})=A\)
* A list of _predicate definitions_ consisting of triples \((P,(x_{1},...,x_{n}),A)\) defining \(n\)-ary predicate symbols \(P\). Here \(FV(A)\subseteq\{x_{1},...,x_{n}\}\). Triples are also denoted by \(P(x_{1},...,x_{n})\equiv A\).

All the lists above may be empty. In PyLog the default proof environment consists of a single predicate definition for \(Set(x)\), defined as \(\exists y.x\in y\). The language is endowed further with the primitive binary predicate \(\in\). There are no other functions, predicates or constants.

The proof system of PyLog is based on a linearised variant of natural deduction with a conservative second-order extension. We first present the system in the standard form. We assume the reader is familiar with proof trees and the concept of dependency (the first chapters of [3] are sufficient). The proof system of PyLog consists of the following.
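The mutual recursion above maps directly onto a small abstract syntax tree. The sketch below is not PyLog's implementation (its own parser and classes appear only later, in fragments); it is a minimal illustration, with hypothetical class names, of how \(FV\) can be computed by structural recursion. Extensions are treated exactly like quantifiers here, mirroring the clause \(FV(\{x:A\})=FV(A)\setminus\{x\}\).

```python
from dataclasses import dataclass
from typing import List, Union

# A minimal, hypothetical AST mirroring the grammar in the text
# (illustration only -- these are not PyLog's internal classes).

@dataclass
class Var:
    name: str

@dataclass
class Const:
    name: str

@dataclass
class App:                      # f(t1,...,tn) or P(t1,...,tn)
    head: str
    args: List["Expr"]

@dataclass
class Binder:                   # forall x.A, exists x.A, or the extension {x: A}
    kind: str                   # "forall" | "exists" | "ext"
    var: str
    body: "Expr"

@dataclass
class BinOp:                    # A & B, A v B, A -> B
    op: str
    left: "Expr"
    right: "Expr"

@dataclass
class Bottom:
    pass

Expr = Union[Var, Const, App, Binder, BinOp, Bottom]

def FV(e: Expr) -> set:
    """Free variables, following the recursive clauses given above."""
    if isinstance(e, Var):
        return {e.name}
    if isinstance(e, (Const, Bottom)):
        return set()
    if isinstance(e, App):
        out = set()
        for t in e.args:
            out |= FV(t)
        return out
    if isinstance(e, BinOp):
        return FV(e.left) | FV(e.right)
    if isinstance(e, Binder):
        return FV(e.body) - {e.var}
    raise TypeError(f"unexpected expression: {e!r}")

# FV({x : x ∈ y}) = {y}
print(FV(Binder("ext", "x", App("Elem", [Var("x"), Var("y")]))))   # -> {'y'}
```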
We have _purely logical rules_, which are the rules for _minimal predicate calculus_ plus the intuitionistic and classical negation rules:

\[\frac{A\qquad B}{A\ \&\ B}\,\mbox{AndInt}\qquad\frac{A\ \&\ B}{A}\,\mbox{AndElimL}\qquad\frac{A\ \&\ B}{B}\,\mbox{AndElimR}\]

\[\frac{\begin{array}{c}[A]\\ \vdots\\ B\end{array}}{A\to B}\,\mbox{ImpInt}\qquad\frac{A\qquad A\to B}{B}\,\mbox{ImpElim}\]

\[\frac{A}{B\lor A}\,\mbox{OrIntL}\qquad\frac{A}{A\lor B}\,\mbox{OrIntR}\qquad\frac{A\lor B\qquad\begin{array}{c}[A]\\ \vdots\\ C\end{array}\qquad\begin{array}{c}[B]\\ \vdots\\ C\end{array}}{C}\,\mbox{OrElim}\]

\[\frac{A}{\forall x.A[x/y]}\,\mbox{ForallInt}_{y}\qquad\frac{\forall x.A}{A[t/x]}\,\mbox{ForallElim}\]

\[\frac{A[t/x]}{\exists x.A}\,\mbox{ExistsInt}\qquad\frac{\exists x.A\qquad\begin{array}{c}[A[y/x]]\\ \vdots\\ C\end{array}}{C}\,\mbox{ExistsElim}\]

\[\frac{\bot}{A}\,\mbox{Abs}_{i}\qquad\frac{\begin{array}{c}[\sim A]\\ \vdots\\ \bot\end{array}}{A}\,\mbox{Abs}_{c}\]

The proviso for ForallInt is that \(y\) cannot occur in any assumption on which \(A\) depends, and the proviso for ExistsElim is that \(y\) does not occur in \(\exists x.A\), in \(C\), or in any hypothesis on which \(C\) depends other than \([A[y/x]]\). \(\sim A\) is syntactic sugar for \(A\to\bot\).

We have the _class rules_:

\[\frac{t\in\{x:A\}}{Set(t)\ \&\ A[t/x]}\ \mbox{ClassElim}\qquad\frac{Set(t)\ \&\ A[t/x]}{t\in\{x:A\}}\,\mbox{ClassInt}_{x}\]

These rules express the _classification axiom scheme_ of Kelley-Morse set theory such as formulated in the appendix of [4]. We also have the _equality rules_\({}^{1}\):

Footnote 1: this is inspired by the treatment in [5]

\[\frac{s=t}{t=s}\ \mbox{Symmetry}\qquad\frac{A\qquad t=s}{A^{\prime}}\ \mbox{EqualitySub}\]

where in EqualitySub \(A^{\prime}\) is \(A\) with a specified number of occurrences of \(t\) replaced by \(s\). EqualitySub is of fundamental importance in using defined constants and function symbols of the proof environment. We then have our _second-order rule_:

\[\frac{A}{A^{\prime}}\ \mbox{PolySub}_{\mathfrak{A}}\]

where \(A^{\prime}\) results from \(A\) by substituting all occurrences of \(\mathfrak{A}\) in \(A\) by a formula \(B\). The proviso is that \(\mathfrak{A}\) does not occur in any hypothesis on which \(A\) depends and that no free variable in \(B\) becomes bound after the substitution.

**Remark 0.1**: This rule is to be understood as a combination of an invisible second-order generalisation of the variable \(\mathfrak{A}\) followed by an instantiation by \(B\).

The final set of rules concerns how information in the proof environment is introduced into the proof:

\[\frac{}{A}\ \mbox{AxInt}_{n}\qquad\frac{}{A}\,\mbox{TheoremInt}_{n}\qquad\frac{}{t=s}\ \mbox{DefEqInt}_{n}\qquad\frac{A}{A^{\prime}}\,\mbox{DefExp}\qquad\frac{A}{A^{\prime}}\,\mbox{DefSub}\]

The first three rules simply add the \(n\)th formula in the lists of axioms, assumed theorems and defining equations respectively. DefExp does the following. Assume we have a definition \(P(x_{1},...,x_{n})\equiv B\).
Then DefExp replaces specified occurrences of subformulas of the form \(P(t_{1},...,t_{n})\) by \(B[t_{1}/x_{1},...,t_{n}/x_{n}]\) (the resulting expression is denoted by \(A^{\prime}\) in the rule). DefSub does the inverse of this. For chosen \(t_{1},...,t_{n}\) we must specify the occurrences of expressions of the form \(B[t_{1}/x_{1},...,t_{n}/x_{n}]\) in \(A\) which we wish to "collapse" into \(P(t_{1},...,t_{n})\)\({}^{2}\). When we wish to use defined functions or constants we first introduce the defining equalities into our proof by means of DefEqInt\({}_{n}\) and then make use of EqualitySub.

The above are the core rules of PyLog. We introduce the usual abbreviation \(A\leftrightarrow B\) and so finally have two rules to toggle this notation:

\[\frac{A\to B\ \ \&\ \ B\to A}{A\leftrightarrow B}\,\mbox{EquivConst}\qquad\frac{A\leftrightarrow B}{A\to B\ \ \&\ \ B\to A}\,\mbox{EquivExp}\]

**Remark 0.2**: Certain combinations of rules occur frequently and it is convenient to have derived rules (or shortcuts) such as

\[\frac{A\to B\qquad B\to A}{A\leftrightarrow B}\,\mbox{EquivJoin}\]

for

\[\frac{\dfrac{A\to B\qquad B\to A}{A\to B\ \ \&\ \ B\to A}\,\mbox{AndInt}}{A\leftrightarrow B}\,\mbox{EquivConst}\]

and two other obvious shortcuts, EquivRight and EquivLeft. Also the important derived rule

\[\frac{A}{A[t/y]}\,\mbox{FreeSub}_{y,t}\]

for

\[\frac{\dfrac{A}{\forall x.A[x/y]}\,\mbox{ForallInt}_{y}}{A[t/y]}\,\mbox{ForallElim}\]

### Linearised Natural Deduction

A linear proof in PyLog (for a given proof environment) is a list of _proof elements_. Each proof element \(p\) is a triple \((A,par,dis)\) where \(A\) is a formula and \(par\) and \(dis\) are lists of integers. If \(p\) occurs in position \(n\) (we say that \(n\) is \(p\)'s number) then the elements of \(par\) (parents) must be strictly less than \(n\) and those of \(dis\) (discharges) strictly larger than \(n\). Rules are applied by adding a new proof element to the end of the list and possibly updating previous proof entries. Given a linear proof and an element \(p\) we can recursively backtrack the parents to obtain a _dependency tree_ of proof elements. The dependencies of \(p\) are obtained by taking the set of leaves of this tree and removing those proof elements whose \(dis\) contains a number which occurs in the dependency tree. It is also easy to see how, given a proof tree, we can obtain a linear proof.

In PyLog rules have the general format

\[(Name,Parents,Parameters)\]

where Name is the name of the rule, Parents is the list of the numbers of the previous elements to which the rule is applied, and Parameters can contain formulas, terms, variables, position lists, etc. In PyLog rules are entered as Python functions. The arguments will specify all the required information in Parents and Parameters. The PyLog command Qed(ForNum) checks if the formula has discharged all its assumptions.
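As a small illustration of the correspondence just described (this example is ours, not taken from the PyLog distribution), here is a derivation of \(A\ \&\ B\to B\ \&\ A\) in tree form,

\[\frac{\dfrac{\dfrac{[A\ \&\ B]}{B}\,\mbox{AndElimR}\qquad\dfrac{[A\ \&\ B]}{A}\,\mbox{AndElimL}}{B\ \&\ A}\,\mbox{AndInt}}{A\ \&\ B\to B\ \&\ A}\,\mbox{ImpInt}\]

and the same derivation written as a linear proof, i.e. as a list of \((formula,par,dis)\) triples:

```python
# A & B -> B & A as a linear proof: (formula, parents, discharges) triples,
# numbered from 0 as in PyLog's ShowProof output (illustrative layout only).
proof = [
    ("A & B",              [],     [4]),   # 0. Hyp, discharged by line 4
    ("B",                  [0],    []),    # 1. AndElimR 0
    ("A",                  [0],    []),    # 2. AndElimL 0
    ("B & A",              [1, 2], []),    # 3. AndInt 1 2
    ("(A & B) -> (B & A)", [3],    []),    # 4. ImpInt 3 0
]
```

The hypothesis at position 0 records in its \(dis\) list that it is discharged by the ImpInt at position 4, which is exactly the bookkeeping described above.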
It is helpful to look at a snippet in the definition of some classes:

class ProofElement:
    def __init__(self,name,dependencies,parameters,discharging,formula):
        self.name = name
        self.dependencies = dependencies
        self.parameters = parameters
        self.discharging = discharging
        self.formula = formula
        self.dischargedby = []
        self.pos = 0
        self.qed = False
        self.comment = ""

class ProofEnvironment:
    def __init__(self,proof,name):
        self.proof = proof
        self.name = name
        self.definitions = {}
        self.definitionequations = []
        self.axioms = []
        self.theorems = []
        self.log = []

    def CheckRange(self, dependencies):
        for dep in dependencies:
            if dep > len(self.proof):
                return False
        return True

    def GetTree(self,proofelement):
        out = [proofelement.pos-1]
        for dep in proofelement.dependencies:
            out = out + self.GetTree(self.proof[dep])
        return out

    def GetHyp(self,proofelement):
        if proofelement.name == "Hyp":
            return [proofelement.pos-1]
        out = []
        for dep in proofelement.dependencies:
            out = out + self.GetHyp(self.proof[dep])
        return out

    def CheckDischargedBy(self, hyp, proofelem):
        if len(self.proof[hyp].dischargedby) == 0:
            return False
        for h in self.proof[hyp].dischargedby:
            if h-1 in self.GetTree(self.proof[proofelem]):
                return True
        return False

    def GetHypDep(self,proofelement):
        aux = []
        for h in self.GetHyp(proofelement):
            if len(Intersect([x-1 for x in self.proof[h].dischargedby],
                             self.GetTree(proofelement))) == 0:
                aux.append(h)
        return aux

    (...)

List of Core PyLog Rules

Logical Rules
========================
AndInt(ForNum, ForNum)
AndElimL(ForNum)
AndElimR(ForNum)
ImpInt(ForNum, DisNum)
ImpElim(ForNum, ImpNum)
OrIntL(ForNum, formula)
OrIntR(ForNum, formula)
OrElim(OrForNum, LeftHypNum, LeftConNum, RightHypNum, RightConNum)
ForallInt(ForNum, VarName, newVarName)
ForallElim(ForNum, term)
ExistsInt(ForNum, term, newVarName, PositionList)
ExistsElim(ExistsForNum, InstForNum, ConForNum, instVariable)
AbsI(BotForNum)
AbsC(NegForNum, BotConNum)

Class Rules
================
ClassElim(MemForNum)
ClassInt(ForNum, newVarName)

Equality Rules
================
Identity(term)
Symmetry(EqForNum)
EqualitySub(ForNum, EqForNum, PositionList)

Second-Order Rule
========================
PolySub(ForNum, SecondOrderVarName, formula)

Proof Environment Rules
========================
AxInt(number)
TheoremInt(number)
DefEqInt(number)
DefExp(ForNum, predicateName, PositionList)
DefSub(ForNum, predicateName, ArgList, PositionList)

Other Rules
========================
Qed(ForNum)
EquivConst(ForNum)
EquivExp(ForNum)
EquivLeft(ForNum)
EquivRight(ForNum)
FreeSub(ForNum, VarName, Term)

### Pylog Commands

We have a list of commands for setting up the proof environment, that is, for introducing the axioms, assumed theorems, defined constants and symbols and defined predicates that will be used in the proof.

Hyp(Formula)
NewAx(Formula)
AddPredicate(PredicateName, Arity, PrefixBool)
NewDef(PredicateName, ArgList, Formula)
AddConstants(NameList)
AddFunction(FunctionName, Arity, PrefixBool)
NewDefEq(EquationFormula)
AddTheorem(Formula)

Hyp introduces a formula as a hypothesis. When defining functions with NewDefEq() we must first use AddFunction(), specifying the name, arity and whether the function is to be displayed with prefix or infix notation (for binary functions). The argument for NewDefEq must not have the exterior parenthesis. For instance

NewDefEq("rus = extension z. neg Elem(z,z)")

For constants we use AddConstant(). We also must take care that we have enough variables via the AddVariables(VarList) function.
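As a quick illustration of these setup commands, a session might look roughly as follows. This is a hypothetical sketch based only on the signatures and input syntax described above; the predicate name, its defining formula and the displayed output are assumptions, not taken from the PyLog documentation.

>>> AddPredicate("Sub", 2, True)
True
>>> NewDef("Sub", ["x","y"], "forall z. (Elem(z,x) -> Elem(z,y))")
True
>>> Hyp("Sub(x,y)")
0. Sub(x,y) Hyp
True
>>> DefExp(0,"Sub",[0])
0. Sub(x,y) Hyp
1. ∀z.((z ε x) -> (z ε y)) DefExp 0
True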
Then we have a list of commands which displays information about the current proof and proof environment. ShowDefinitions() displays the defined predicates. ShowDefEquations() displays the defined constants and functions. ShowAxioms() displays the axioms. ShowTheorems() displays the assumed theorems which may be used in the proof (it is not advisable to alter this list during the proof). ShowProof() displays the current state of the proof and ShowLog() shows the list of previous succesful rule commands which constitute the proof. We also have a command Undo() which deletes the last element of the proof. Hypotheses(n) shows the hypotheses which formula \(n\) depends on. If a theorem has already been saved you can view the conclusion with ViewTheorem(Name) or add it directly to the proof environment with the LoadTheorem(Name) command - _provided that the required environment has been previously loaded_. ### Using Pylog In PyLog a _theory_ is a directory whose files are _theorems_. A theorem consists of both a proof environment and a proof in this environment (either complete or incomplete). All theorems in a theory should ideally have the same proof environment. The theorem to be proved ideally should occur at the end of the proof and have been tested with the command Qed(Number). There should also be an "empty" theorem which is to be seen as the proof environment that must be loaded in order to start writing a new theorem. The command Load(Name) loads a proof environment or theorem and the command Save(Name) will save the current proof environment or theorem. The command ViewTheorem(Name) will not load anything but only display the last line of the proof. The command ViewTheory(DirName) will likewise display all the theorems in the directory. To use Pylog Python 3\(\ast\) is required. PyLog runs from a terminal through the Python CLI. Clone the repository3 on GitHub, enter the folder, and enter Footnote 3: [https://github.com/owl77/PyLog](https://github.com/owl77/PyLog) $ python -i proofenvironment.py Welcome to PyLog 1.0 Natural Deduction Proof Assistant and Proof Checker (c) 2020 C. Lewis Protin >>> Our project is to have a complete verified formalisation of all the theorems of Set Theory in the Appendix of [4]. In the PyLog folder we have the saved Kelley-Morse environment. We load this by Load("Kelley-Morse"). When a command is succesful PyLog will return True. We can now examine the axioms and definitions: >>> ShowAxioms() 0. \(\forall\)x.\(\forall\)y.((x = y) <-> \(\forall\)z.((z \(\(\epsilon\) x) <-> (z \(\epsilon\) y))) 1. Set(x) -> \(\exists\)y.(Set(y) & \(\forall\)z.((z \(\(\subset\) x) -> (z \(\epsilon\) y))) 2. (Set(x) & Set(y)) -> Set((x \(\forall\) y)) 3. (Function(f) & Set(domain(f))) -> Set(range(f)) 4. Set(x) -> Set(\(\(\backslash\)x) 5. \(\neg\)(x = 0) -> \(\exists\)y.((y \(\(\epsilon\) x) & ((y \(\(\cap\) x) = 0)) 6. \(\exists\)y.((Set(y) & (0 \(\epsilon\) y)) & \(\forall\)x.((x \(\(\epsilon\) y) -> (suc x \(\epsilon\) y))) 7. \(\exists\)f.(Choice(f) & (domain(f) = (U ~ {0}))) >>> ShowDefEquations() 0. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} 1. (x \(\cap\) y) = {z: ((z \(\(\epsilon\) x) & (z \(\epsilon\) y))} 2. "x = {y: \(\neg\)(y \(\(\epsilon\) x)} 3. (x ~ y) = (x \(\cap\) ~y) 4. 0 = {x: \(\neg\)(x = x)} 5. U = {x: (x = x)} 6. \(\cup\)x = {z: \(\exists\)y.((y \(\(\epsilon\) x) & (z \(\epsilon\) y))} 7. \(\cap\)x = {z: \(\forall\)y.((y \(\(\epsilon\) x) -> (z \(\epsilon\) y))} 8. Px = {y: (y \(\(\subset\) x)} 9. 
{x} = {z: ((z \(\epsilon\) U) -> (z = x))} 10. {x,y} = ({x} \(\cup\) {y}) 11. (x,y) = {x,{x,y}} 12. proj1(x) = \(\cap\)x 13. proj2(x) = (\(\cap\)Ux \(\cup\)(Ux \(\ \cup\)x)) 14. (aob) = {w: \(\exists\)x.\(\exists\)y.\(\exists\)z.((((x,y) \(\epsilon\) a) & ((y,z) \(\epsilon\) b)) & (w = (x,z)))} 15. (r)\({}^{-11}\) = {z: \(\exists\)x.\(\exists\)y.(((x,y) \(\epsilon\) r) & (z = (y,x)))} 16. domain(f) = {x: \(\exists\)y.((x,y) \(\epsilon\) f)} 17. range(f) = {y: \(\exists\)x.((x,y) \(\epsilon\) f)} 18. (f'x) = \(\cap\){y: ((x,y) \(\epsilon\) f)} 19. (x X y) = {z: \(\exists\)a.\(\exists\)b.((z = (a,b)) & ((a \(\epsilon\) x) & (b \(\epsilon\) y)))} 20. func(x,y) = {f: (Function(f) & ((domain(f) = x) & (range(f) = y)))} 21. E = {z: \(\exists\)x.\(\exists\)y.((z = (x,y)) & (x \(\epsilon\) y))} 22. ord = {x: Ordinal(x)} 23. suc x = (x \(\cup\) {x}) 24. (f|x) = (f \(\cap\) (x X U)) 25. \(\omega\) = {x: Integer(x)} >>> ShowDefinitions() Set(x) <-> \(\exists\)y.(x \(\epsilon\) y) (x \(\subset\) y) <-> \(\forall\)z.((z \(\epsilon\) x) -> (z \(\epsilon\) y)) Relation(r) <-> \(\forall\)z.((z \(\epsilon\) r) -> \(\exists\)x.\(\exists\)y.(z = (x,y))) Function(f) <-> (Relation(f) & \(\forall\)x.\(\forall\)y.\(\forall\)z.((((x,y) \(\epsilon\) f) & ((x,z) \(\epsilon\) f)) -> (y = z))) Trans(r) <-> \(\forall\)x.\(\forall\)y.\(\forall\)z.((((x,y) \(\epsilon\) r) & ((y,z) \(\epsilon\) r)) -> ((x,z) \(\epsilon\) r)) Connects(r,x) <-> \(\forall\)y.\(\forall\)z.(((y \(\epsilon\) x) & (z \(\epsilon\) x)) -> ((y = z) v (((y,z) \(\epsilon\) r) v ((z,y) \(\epsilon\) r)))) Asymmetric(r,x) <-> \(\forall\)y.\(\forall\)z.(((y \(\epsilon\) x) & (z \(\epsilon\) x)) -> (((y,z) \(\epsilon\) r) -> \(\cap\)((z,y) \(\epsilon\) r))) First(r,x,z) <-> ((z \(\epsilon\) x) & \(\forall\)y.((y \(\(\epsilon\) x) -> \(\((y,z)\(\epsilon\) r)))) WellOrders(r,x) <-> (Connects(r,x) & \(\forall\)y.(((y \(\subset\) x) & \(\neg\)(y = 0)) -> \(\exists\)z.First(r,y,z))) Section(r,x,y) <-> (((y \(\subset\) x) & WellOrders(r,x)) & \(\forall\)u.\(\forall\)v.((((u \(\epsilon\) x) & ((v \(\epsilon\) y)) & ((u,v) \(\epsilon\) r)) -> (u \(\epsilon\) y))) OrderPreserving(f,r,s) <-> ((Function(f) & (WellOrders(r,domain(f)) & WellOrders(r,range(f)))) & \(\forall\)v.\((((u \(\epsilon\) domain(f)) & (u,v) \(\epsilon\) r)) -> (((f'u),(f'v)) \(\epsilon\) r))) 1-to-1(f) <-> (Function(f) & Function((f)\({}^{-11}\)) Full(x) <-> \(\forall\)y.((y \(\(\epsilon\) x) -> (y \(\subset\) x)) Ordinal(x) <-> (Full(x) & Connects(E,x)) Integer(x) <-> (Ordinal(x) & WellOrders((E)\({}^{-11}\),x)) Choice(f) <-> (Function(f) & \(\forall\)y.((y \(\epsilon\) domain(f)) -> ((f'y) \(\epsilon\) y))) Equi(x,y) <-> \(\exists\)f.(1-to-1(f) & ((domain(f) = x) & (range(f) = y))) Card(x) <-> (Ordinal(x) & \(\forall\)y.(((y \(\epsilon\) x) & (y \(\epsilon\) ord)) -> \(\neg\)Equi(y,x))) TransIn(r,x) <-> \(\forall\)u.\(\forall\)v.\(\forall\)w.(((u \(\epsilon\) x) & ((v \(\epsilon\) x) & (w \(\epsilon\) x))) -> (((u,v) \(\epsilon\) r))) We can also check by ShowProof() that the proof is empty. By default expressions are displayed using pretty printing (Unicode character) which can use infix notation. The pretty printing can be changed via parser.prettypprint[FunctionNameString] = PrettyString. Expressions are entered in a strictly functional way (with the exception of logical connectives, extensions and quantifiers). 
Input and default "pretty" display ================ neg A -A bigunion(x) \(\cup\)x bigintersection \(\cap\)x union(x,y) (x \(\cup\) y) intersection(x,y) (x \(\cap\) y) extension x. A {x: A} forall x. A \(\forall\)x. A exists x. A \(\exists\)x. A Elem(x,y) (x \(\epsilon\) y) app(f,x) (f'x) pair(x,y) {x,y} singleton(x) {x} orderedpair(x,y) (x,y) prod(x,y) (x X y) complement1(x) ^x complement2(x) (x ^ y) parts(x) Px comp(a,b) (aob) inv(r) (r)-11 restrict(f,x) (f|x) int \(\omega\) Note that \(\neg A\) is the pretty print for \(A\rightarrow\bot\). When using the rules of Pylog we must think of \(\neg A\) this way. Conjunction & is usually entered in infix style (\(A\) & \(B\)) but PyLog will create and group parenthesis to the right: thus (\(A\) & \(B\) & \(C\)) is interpreted as (\(A\) & (\(B\) & \(C\))). ### First Proof in Pylog In this section we prove theorem 4 of [4] in the Kelley-Morse proof environment: ((z \(\epsilon\) (x \(\cup\) y)) <-> ((z \(\epsilon\) x) v (z \(\epsilon\) y))) & ((z \(\epsilon\) (x \(\cap\) y)) <-> ((z \(\epsilon\) x) & (z \(\epsilon\) y))) We give here full details of a session in which we prove the first half of the theorem. Welcome to PyLog 1.0 Natural Deduction Proof Assistant and Proof Checker (c) 2020 C. Lewis Protin >>> Load("Kelley-Morse") True >>> ShowProof() >>> Hyp("Elem(z,union(x,y))") 0. z \(\epsilon\) (x \(\cup\) y) Hyp True >>> ShowDefEquations() 0. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} 1. (x \(\cap\) y) = {z: ((z \(\epsilon\) x) & (z \(\epsilon\) y))} 2. "x = {y: \(\neg(y\) \(\epsilon\) x)} 3. (x ^ y) = (x \(\cap\) "y) 4. 0 = {x: -(x = x)} 5. U = {x: (x = x)} 6. \(\cup\)x = {z: \(\exists\)y.((y \(\epsilon\) x) & (z \(\epsilon\) y))} 7. (x = {z: \(\forall\)y.((y \(\epsilon\) x) -> (z \(\epsilon\) y))} 8. Px = {y: (y \(\(\subset\) x)} 9. {x} = {z: ((z \(\epsilon\) U) -> (z = x))} 10. {x,y} = {{x} \(\cup\) {y}} 11. (x,y) = {x,x,y} 12. proj1(x) = \(\cap\)x 13. proj2(x) = (\(\cup\)x \(\cup\) (\(\cup\)\(\cup\)x - \(\cup\)x)) 14. (aob) = {w: \(\exists\)x.\(\exists\)y.\(\exists\)z.((((x,y) \(\epsilon\) a) & ((y,z) \(\epsilon\) b)) & (w = (x,z)))} 15. (r)\({}^{-11}\) = {z: \(\exists\)x.\(\exists\)y.(((x,y) \(\epsilon\) r) & (z = (y,x)))} 16. domain(f) = {x: \(\exists\)y.((x,y) \(\epsilon\) f)} 17. range(f) = {y: \(\exists\)x.((x,y) \(\epsilon\) f)} 18. (f'x) = \(\cap\){y: ((x,y) \(\epsilon\) f)} 19. (x X y) = {z: \(\exists\)a.\(\exists\)b.((z = (a,b)) & ((a \(\epsilon\) x) & (b \(\epsilon\) y)))} 20. func(x,y) = {f: (Function(f) & ((domain(f) = x) & (range(f) = y)))} 21. E = {z: \(\exists\)x.\(\exists\)y.((z = (x,y)) & (x \(\epsilon\) y))} 22. ord = {x: Ordinal(x)} 23. suc x = (x \(\cup\) {x}) 24. (f|x) = (f \(\cap\) (x X U)) 25. \(\omega\) = {x: Integer(x)} >>> DefEqInt(0) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt True >>> EqualitySub(0,1,[0]) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 True >>> ClassElim(2) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 True >>> ImpInt(4,0) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. 
z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 True >>> Qed(5) (...) 5. (z \(\in\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed >>> Hyp("(Elem(z,x) v Elem(z,y))") 0. z \(\in\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\in\) (z \(\(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\in\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp True >>> Hyp("Elem(z,x)") 0. z \(\in\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\in\) (z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\in\) x Hyp True >>> ExistsInt(7,"x","x",[0]) 0. z \(\in\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\in\) {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\in\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\in\) x) v (z \(\epsilon\) y) Hyp 7. z \(\in\) x Hyp 8. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 7 True >>> ShowDefinitions() Set(x) <-> \(\exists\)y.(x \(\in\) y) (x \(\subset\) y) <-> \(\forall\)z.((z \(\epsilon\) x) -> (z \(\epsilon\) y)) Relation(r) <-> \(\forall\)z.((z \(\epsilon\) r) -> \(\exists\)x.\(\exists\)y.(z = (x,y))) Function(f) <-> (Relation(f) & \(\forall\)x.\(\forall\)y.\(\forall\)z.((((x,y) \(\epsilon\) f) & ((x,z) \(\epsilon\) f)) -> (y = z))) Trans(r) <-> \(\forall\)x.\(\forall\)y.\(\forall\)z.((((x,y) \(\epsilon\) r) & ((y,z) \(\epsilon\) r)) -> ((x,z) \(\epsilon\) r)) Connects(r,x) <-> \(\forall\)y.\(\forall\)z.((((y \(\epsilon\) x) & (z \(\epsilon\) x)) -> ((y = z) v (((y,z) \(\epsilon\) r) v ((z,y) \(\epsilon\) r)))) Asymmetric(r,x) <-> \(\forall\)y.\(\forall\)z.(((y \(\epsilon\) x) & (z \(\epsilon\) x)) -> (((y,z) \(\epsilon\) r) -> -((z,y) \(\epsilon\) r))) First(r,x,z) <-> ((z \(\epsilon\) x) & \(\forall\)y.((y \(\(\epsilon\) x) -> -((y,z) \(\epsilon\) r))) WellOrders(r,x) <-> (Connects(r,x) & \(\forall\)y.(((y \(\subset\) x) & \(\neg\)(y = 0)) -> \(\exists\)z.First(r,y,z))) Section(r,x,y) <-> (((y \(\subset\) x) & WellOrders(r,x)) & \(\forall\)u.\(\forall\)v.(((((u \(\epsilon\) x) & (\(\epsilon\) y)) & ((u,v) \(\epsilon\) r)) -> (u \(\epsilon\) y))) OrderPreserving(f,r,s) <-> ((Function(f) & (WellOrders(r,domain(f)) & WellOrders(r,range(f)))) & \(\forall\)u.\(\forall\)v.((((u \(\epsilon\) domain(f)) & (v \(\epsilon\) domain(f))) & ((u,v) \(\epsilon\) r)) 1-to-1(f) <-> (Function(f) & Function(f)"1) Full(x) <-> \(\forall\)y.((y \(\epsilon\) x) -> (y \(\subset\) x)) Ordinal(x) <-> (Full(x) & Connects(E,x)) Integer(x) <-> (Ordinal(x) & WellOrders((E)"1,x)) Choice(f) <-> (Function(f) & Vy.((y \(\epsilon\) domain(f)) -> ((f'y) \(\epsilon\) y))) Equi(x,y) <-> \(\exists\)f.(1-to-1(f) & ((domain(f) = x) & (range(f) = y))) Card(x) <-> (Ordinal(x) & Vy.(((y \(\epsilon\) x) & (y \(\epsilon\) ord)) -> -Equi(y,x))) 
TransIn(r,x) <-> \(\forall\)u.vv.w.(((u \(\epsilon\) x) & ((v \(\epsilon\) x) & (w \(\epsilon\) x))) -> (((u,v) \(\epsilon\) r) & ((v,w) \(\epsilon\) r)) -> ((u,w) \(\epsilon\) r))) >>> DefSub(8,"Set","z",[0]) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))) DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 True >>> Hyp("Elem(z,y)") 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 10 True >>> DefSub(11,"Set","z",[0]) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 True >>> OrElim(6,7,9,10,12) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) DefSub 8 14. (z \(\epsilon\) x) (z \(\epsilon\) y) ImpInt 4 Qed 15. (z \(\epsilon\) x) v (z \(\epsilon\) y) ImpInt 4 Qed 16. (z \(\epsilon\) x) v (z \(\epsilon\) y) Imp 17. z \(\epsilon\) x Hyp 18. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 19. Set(z) DefSub 8 20. z \(\epsilon\) y Hyp 21. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 22. Set(z) DefSub 11 23. Set(z) OrElim 6 7 9 10 12 24. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 True >>> ClassInt(14,"z") 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. 
(z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 True >>> ClassInt(14,"z") 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 True >>> ClassInt(14,"z") 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 True >>> ClassInt(14,"z") 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 True >>> ClassInt(14,"z") 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) DefSub 14 14. Set(z) DefSub 15 15. (z \(\epsilon\) x) DefSub 16 16. (z \(\epsilon\) x) DefSub 17 17. z \(\epsilon\) x Hyp 18. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 19. Set(z) DefSub 19 2. 
Set(z) DefSub 19 20. Set(z) DefSub 19 21. Set(z) DefSub 22 2. Set(z) DefSub 23 23. Set(z) DefSub 24 24. (z \(\epsilon\) x) v (z \(\epsilon\) y) ImpInt 4 Qed 25. (z \(\epsilon\) x) DefSub 25 26. Set(z) DefSub 27 27. Set(z) DefSub 28 28. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 2 29. Set(z) DefSub 29 30. z \(\epsilon\) x Hyp 31. Set(z) DefSub 32 33. Set(z 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 15. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} ClassInt 14 True >>>> Symmetry(1) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 15. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} ClassInt 14 16. {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} = (x \(\cup\) y) Symmetry 1 True >>>> EqualitySub(15,16,[0]) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x. (z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 15. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} ClassInt 14 16. {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} = (x \(\cup\) y) Symmetry 1 17. z \(\epsilon\) (x \(\cup\) y) EqualitySub 15 16 True >>>> ImpInt(17,6) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 15. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} ClassInt 14 16. {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} = (x \(\cup\) y) Symmetry 1 17. 
z \(\epsilon\) (x \(\cup\) y) EqualitySub 15 16 18. ((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y)) ImpInt 17 True >>> Qed(18) True >>>> ShowProof() (...) 18. ((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y)) ImpInt 17 Qed >> AndInt(5,18) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 15. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} ClassInt 14 16. {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} = (x \(\cup\) y) Symmetry 1 17. z \(\epsilon\) (x \(\cup\) y) EqualitySub 15 16 18. ((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y)) ImpInt 17 Qed 19. ((z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y))) & (((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\(\epsilon\) (x \(\cup\) y))) AndInt 5 18 True >>> EquivConst(19) 0. z \(\epsilon\) (x \(\cup\) y) Hyp 1. (x \(\cup\) y) = {z: ((z \(\(\epsilon\) x) v (z \(\epsilon\) y))} DefEqInt 2. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} EqualitySub 0 1 3. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ClassElim 2 4. (z \(\epsilon\) x) v (z \(\epsilon\) y) AndElimR 3 5. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ImpInt 4 Qed 6. (z \(\epsilon\) x) v (z \(\epsilon\) y) Hyp 7. z \(\epsilon\) x Hyp 8. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 7 9. Set(z) DefSub 8 10. z \(\epsilon\) y Hyp 11. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 10 12. Set(z) DefSub 11 13. Set(z) OrElim 6 7 9 10 12 14. Set(z) & ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndInt 13 6 15. z \(\epsilon\) {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} ClassInt 14 16. {z: ((z \(\epsilon\) x) v (z \(\epsilon\) y))} = (x \(\cup\) y) Symmetry 1 17. z \(\epsilon\) (x \(\cup\) y) EqualitySub 15 16 18. ((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y)) ImpInt 17 Qed 19. ((z \(\epsilon\) (x \(\cup\) y)) -> (z \(\epsilon\) x v (z \(\epsilon\) y)) & (((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y))) AndInt 5 18 20. (z \(\epsilon\) (x \(\cup\) y)) <-> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EquivConst True >> Qed(20) (...) 20. (z \(\epsilon\) (x \(\cup\) y)) <-> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) EquivConst Qed >> Save("Th4") True The proof is fully codified by the log (and the proof environment): >> ShowLog() 0. Hyp("Elem(z, union(x,y))") 1. DefEqInt(0) 2. EqualitySub(0,1,[0]) 3. ClassElim(2) 4. AndElimR(3) 5. ImpInt(4,0) 6. Hyp("(Elem(z,x) v Elem(z,y))") 7. Hyp("Elem(z,x)") 8. ExistsInt(7,"x","x",[0]) 9. DefSub(8,"Set","["z"],[0]) 10. Hyp("Elem(z,y)") 11. ExistsInt(10,"y","y",[0]) 12. DefSub(11,"Set","["z"],[0]) 13. OrElim(6,7,9,10,12) 14. AndInt(13,6) 15. ClassInt(14,"z") 16. Symmetry(1) 17. EqualitySub(15,16,[0]) 18. ImpInt(17,6) 19. AndInt(5,18) 20. 
EquivConst(19) The command GenerateProof() deletes the proof and then generates it again from the Proof.log field and runs Qed() on the last line. This allows any proof log to be formally checked for a given environement. The log contains all the information needed to generate the proof. The command UsedTheorems() gives the list of other theorems used in the proof. Similarly the second half of the conjunction is proven: 21. z \(\epsilon\) (x \(\cap\) y) Hyp 22. (x \(\cap\) y) = {z: ((z \(\epsilon\) x) & (z \(\epsilon\) y))} DefEqInt 23. z \(\epsilon\) {z: ((z \(\epsilon\) x) & (z \(\epsilon\) y))} EqualitySub 21 22 24. Set(z) & ((z \(\epsilon\) x) & (z \(\epsilon\) y)) ClassElim 23 25. (z \(\epsilon\) x) & (z \(\epsilon\) y) AndElimR 24 26. (z \(\epsilon\) (x \(\cap\) y)) -> ((z \(\epsilon\) x) & (z \(\epsilon\) y)) ImpInt 25 Qed 27. (z \(\epsilon\) x) & (z \(\epsilon\) y) Hyp 28. z \(\epsilon\) x AndElimL 27 29. \(\exists\)x.(z \(\epsilon\) x) ExistsInt 28 30. Set(z) DefSub 29 31. Set(z) & ((z \(\epsilon\) x) & (z \(\epsilon\) y)) AndInt 30 27 32. z \(\epsilon\) {z: ((z \(\epsilon\) x) & (z \(\epsilon\) y))} ClassInt 31 33. {z: ((z \(\epsilon\) x) & (z \(\epsilon\) y))} ClassElim 32 34. z \(\epsilon\) (x \(\cap\) y) EqualitySub 32 33 35. ((z \(\epsilon\) x) & (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cap\) y)) ImpInt 34 Qed 36. ((z \(\epsilon\) (x \(\cap\) y)) -> ((z \(\epsilon\) x) & (z \(\epsilon\) y))) & (((z \(\epsilon\) x) & (z \(\epsilon\) y))) -> (z \(\epsilon\) (x \(\cap\) y))) AndInt 26 35 37. (z \(\epsilon\) (x \(\cap\) y)) <-> ((z \(\epsilon\) x) & (z \(\epsilon\) y)) EquivConst Qed 38. ((z \(\epsilon\) (x \(\cup\) y)) <-> ((z \(\epsilon\) x) v (z \(\epsilon\) y))) & ((z \(\epsilon\) (x \(\cap\) y)) <-> ((z \(\epsilon\) x) & (z \(\epsilon\) y))) AndInt 20 37 with log 21. Hyp("Elem(z, intersection(x,y))") 22. DefEqInt(1) 23. EqualitySub(21,22,[0]) 24. ClassElim(23) 25. AndElimR(24) 26. ImpInt(25,21) 27. Hyp("Elem(z,x) & Elem(z,y))") 28. AndElimL(27) 29. ExistsInt(28,"x","x",[0]) 30. DefSub(29,"Set",["z"],[0]) 31. AndInt(30,27) 32. ClassInt(31,"z") 33. Symmetry(22) 34. EqualitySub(32,33,[0]) 35. ImpInt(34,27) 36. AndInt(26,35) 37. EquivConst(36) 38. AndInt(20,37) This theorem comes in the main directory of the PyLog repository and can be loaded via Load("Th4"). We also show the proof of the first half of the conjunction of theorem 5 of [4] which illustrates how we use previous theorems and apply an axiom: 0. z \(\epsilon\) (x \(\cup\) x) Hyp 1. ((z \(\epsilon\) (x \(\cup\) y)) <-> ((z \(\epsilon\) x) v (z \(\epsilon\) y))) & ((z \(\epsilon\) (x \(\cap\) y)) <-> ((z \(\epsilon\) x) & (z \(\epsilon\) y))) Theo 2. (z \(\epsilon\) (x \(\cup\) y)) <-> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndElimL 1 3. ((z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y))) & (((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y)) Equiv] 4. (z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) AndElimL 3 5. Vy.((z \(\epsilon\) (x \(\cup\) y)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) y)) ForallInt 4 6. (z \(\epsilon\) (x \(\cup\) x)) -> ((z \(\epsilon\) x) v (z \(\epsilon\) x)) ForallElim 5 7. (z \(\epsilon\) x) v (z \(\epsilon\) x) ImpElim 0 6 8. z \(\epsilon\) x Hyp 9. z \(\epsilon\) x Hyp 10. z \(\epsilon\) x OrElim 7 8 8 9 9 11. (z \(\epsilon\) (x \(\cup\) x)) -> (z \(\epsilon\) x ImpInt 10 Qed 12. z \(\epsilon\) x Hyp 13. (z \(\epsilon\) x) v (z \(\epsilon\) x) OrIntL 12 14. 
((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y)) AndElimR 3 15. Vy.(((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> (z \(\epsilon\) (x \(\cup\) y))) ForallInt 14 16. ((z \(\epsilon\) x) v (z \(\epsilon\) x)) -> (z \(\epsilon\) (x \(\cup\) x)) ForallElim 15 17. z \(\epsilon\) (x \(\cup\) x) ImpElim 13 16 18. (z \(\epsilon\) x) -> (z \(\epsilon\) (x \(\cup\) x)) ImpInt 17 Qed 19. ((z \(\epsilon\) (x \(\cup\) x)) -> (z \(\epsilon\) x)) & ((z \(\epsilon\) x) -> (z \(\epsilon\) (x \(\cup\) x)) AndInt 11 18 20. (z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) x) EquivConst 21. %z.((z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) x)) ForallInt 20 Qed 22. %x.%y.((x = y) <-> %z.((z \(\epsilon\) x) <-> (z \(\epsilon\) y))) AxInt 23. %y.(((x \(\cup\) x) = y) <-> %z.((z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) y))) ForallElim 22 24. ((x \(\cup\) x) = x) <-> %z.((z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) x)) ForallElim 23 25. (((x \(\cup\) x) = x) -> %z.((z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) x))) & (%z.((z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) x)) 26. %z.((z \(\epsilon\) (x \(\cup\) x)) <-> (z \(\epsilon\) x)) -> ((x \(\cup\) x) = x) AndElimR 25 27. (x \(\cup\) x) = x ImpElim 21 26 Qed the log is 0. Hyp("Elem(z,union(x,x))") 1. TheoremInt(1) 2. AndElimL(1) 3. EquivExp(2) 4. AndElimL(3) 5. ForallInt(4,"y","y") 6. ForallElim(5,"x") 7. ImpElim(0,6) 8. Hyp("Elem(z,x)") 9. Hyp("Elem(z,x)") 10. OrElim(7,8,8,9,9) 11. ImpInt(10,0) 12. Hyp("Elem(z,x)") 13. OrIntL(12,"Elem(z,x)") 14. AndElimR(3) 15. ForallInt(14,"y","y") 16. ForallElim(15,"x") 17. ImpElim(13,16) 18. ImpInt(17,12) 19. AndInt(11,18) 20. EquivConst(19) 21. ForallInt(20,"z","z") 22. AxInt(0) 23. ForallElim(22,"union(x,x)") 24. ForallElim(23,"x") 25. EquivExp(24) 26. AndElimR(25) 27. ImpElim(21,26) To prove Th6 and Th7 we could make use of PolySub and previously proven propositional validities. For instance if a logical validity was previously proven 0. A v B Hyp 1. A Hyp 2. B v A OrIntL 1 3. B Hyp 4. B v A OrIntR 3 5. B v A OrElim 0 1 2 3 4 6. (A v B) -> (B v A) ImpInt 5 Qed 0. Hyp("(A v B)") 1. Hyp("A") 2. OrIntL(1,"B") 3. Hyp("B") 4. OrIntR(3,"A") 5. OrElim(0,1,2,3,4) 6. ImpInt(5,0) Suppose we saved this theorem as "Log1". Then we can add this theorem to our environment and instantiate it via PolySub to, for instance: >>> Load("Kelley-Morse") True >>> >>> ViewTheorem("Log1") 'Log1 : (A v B) -> (B v A)' >>> LoadTheorem("Log1") True >>> ShowTheorems() 0. A v -A 1. (A v B) -> (B v A) >>> TheoremInt(1) True >>> ShowProof() 0. (A v B) -> (B v A) TheoremInt >>> PolySub(0,"A","Elem(z,x)") (...) >>> PolySub(1,"B","Elem(z,y)") 0. (A v B) -> (B v A) TheoremInt 1. ((z \(\epsilon\) x) v B) -> (B v (z \(\epsilon\) x)) PolySub 0 2. ((z \(\epsilon\) x) v (z \(\epsilon\) y)) -> ((z \(\epsilon\) y) v (z \(\epsilon\) x)) PolySub 1 >>> ShowLog() 0. TheoremInt(1) 1. PolySub(0,"A","Elem(z,x)") 2. PolySub(1,"B","Elem(z,y)") It is important that when adding theorems the theorems were saved with the same environment. An exception occurs for logical validities. If a predicate is part of the language but has not been defined then we can consider it a second-order variable in the classical sense (i.e. having a fixed arity) and use the command PredSub. This can be useful for implementing axiom schemes such as induction. Also it is useful for reusing first-order validities (which is not possible with the polymorphic variables). 
>>> Load("Kelley-Morse") True >>> AddPredicate("Foo",1,True) True >>> Hyp("Foo(x)") 0. Foo(x) Hyp True >>> PredSub(0,"Foo",["x"], "neg Set(x)", [0]) 0. Foo(x) Hyp 1. -Set(x) PredSub 0 True >>> PredSub(1,"Set",["x"], "neg Set(x)", [0]) Predicate is defined. To test a theory, a collection of theorems, one can run the CheckTheory function, For instance, for the theorems up to theorem 55 in Kelley-Morse set theory: >>> CheckTheory(["Th4","Th5","Th6","Th7","Th8", "Th11","Th12","Th14","Th16","Th17","Th19","Th20", "Th21","Th24","Th26","Th27","Th28","Th29","Th30", "Th31","Th32","Th33","Th34","Th35","Th37","Th38", "Th39","Th41","Th42","Th43","Th44","Th46","Th47","Th49","Th50","Th53","Th54","Th55","" This will generate each proof again from the log. If the final line has Qed then the check is succesful. ### Proof of Law of Excluded Middle We present here a proof in PyLog (using AbsC )of the useful classical validity \(Av\neg A\). To do this we use three validities: >>> ShowTheorems() 0. D <-> \(\neg\neg\)D 1. (A -> B) -> (-B -> \(\neg\)A) 2. \(\neg\)(A v B) <-> (-A & \(\neg\)B) The proofs of these theorems is an easy exercise. Only theorem 0 uses AbsC. Here is the proof saved under the name "ExcludedMiddle". >>> ShowProof() 0. \(\neg\)(A v B) <-> (\(\neg\)A & \(\neg\)B) TheoremInt 1. \(\neg\)(A v \(\neg\)A) <-> (\(\neg\)A & \(\neg\neg\)A) PolySub 0 2. (\(\neg\)(A v \(\neg\)A) -> (\(\neg\)A & \(\neg\neg\)A)) & ((\(\neg\)A & \(\neg\neg\)A) -> \(\(\neg\)(A v \(\neg\)A)) EquivExp 3. D <-> \(\neg\neg\)D TheoremInt 4. (D -> \(\neg\)D) & (\(\neg\)D -> D) EquivExp 5. \(\neg\)A & \(\neg\neg\)A Hyp 6. \(\neg\)A AndElimL 5 7. \(\neg\neg\)A AndElimR 5 8. \(\neg\)D -> D AndElimR 4 9. \(\neg\)A -> A PolySub 8 10. A ImpElim 7 9 11. \(\neg\)A & A AndInt 6 10 12. (\(\neg\)A & \(\neg\neg\)A) -> (\(\neg\)A & A) ImpInt 11 Qed 13. \(\neg\)(A v \(\neg\)A) -> (\(\neg\)A & \(\neg\neg\)A) AndElimL 2 14. \(\neg\)(A v \(\neg\)A) Hyp 15. \(\neg\)A & \(\neg\neg\)A ImpElim 14 13 16. \(\neg\)A & A ImpElim 15 12 17. \(\neg\)(A v \(\neg\)A) -> (\(\neg\)A & A) ImpInt 16 Qed 18. (A -> B) -> (\(\neg\)B -> \(\neg\)A) TheoremInt 19. (\(\neg\)(A v \(\neg\)A) -> B) -> (\(\neg\)B -> \(\neg\)(A v \(\neg\)A)) PolySub 18 20. (\(\neg\)(A v \(\neg\)A) -> (\(\neg\)A & A)) -> (\(\neg\)(\(\neg\)A & A) -> \(\neg\neg\)(A v \(\neg\)A)) PolySub 19 21. \(\neg\)(A v \(\neg\)A) ImpLim 17 20 22. \(\neg\)(A v \(\neg\)A) -> (A v \(\neg\)A) PolySub 8 23. \(\neg\)(\(\neg\)A & A) Hyp 24. \(\neg\)(A v \(\neg\)A) ImpElim 23 21 25. A v \(\neg\)A ImpElim 24 22 26. \(\neg\)(\(\neg\)A & A) -> (A v \(\neg\)A) ImpInt 25 Qed 27. \(\neg\)A & A Hyp 28. \(\neg\)A AndElimL 27 29. A AndElimR 27 30. \(\mid\)_ ImpElim 29 28 31. \(\neg\)(\(\neg\)A & A) ImpInt 30 Qed 32. A v \(\neg\)A ImpElim 31 26 Qed >>> ShowLog() 0. TheoremInt(2) 1. PolySub(0,"B","neg A") 2. EquivExp(1) 3. TheoremInt(0) 4. EquivExp(3) 5. Hyp("(neg A & neg neg A)") 6. AndElimL(5) 7. AndElimR(5) 8. AndElimR(4) 9. PolySub(8,"D","A") 10. ImpElim(7,9) 11. AndInt(6,10) 12. ImpInt(11,5) 13. AndElimL(2) 14. Hyp("neg (A v neg A)") 15. ImpElim(14,13) 16. ImpElim(15,12) 17. ImpInt(16,14) 18. TheoremInt(1) 19. PolySub(18,"A","neg (A v neg A)") 20. PolySub(19,"B","(neg A & A)") 21. ImpElim(17,20) 22. PolySub(8,"D","(A v neg A)") 23. Hyp("neg (neg A & A)") 24. ImpElim(23,21) 25. ImpElim(24,22) 26. ImpInt(25,23) 27. Hyp("(neg A & A)") 28. AndElimL(27) 29. AndElimR(27) 30. ImpElim(29,28) 31. ImpInt(30,27) 32. 
ImpElim(31,26) ### The Gentzen Automatic Theorem Prover Included within PyLog is a simple algorithm for finding proofs of propositional validities in the intuitionistic propositional calculus. Smullyan [6] has remarked that tableaux for classical propositional calculus correspond to proofs in the sequent calculus "turned upside-down". This is a paradigm of automatic proof generation. The _cut-free_ sequent calculus for the intuitionistic propositional calculus (IPC), more specifically the Gentzen system G3i restricted to IPC (wherein the usual structural rules are "absorbed"), is a striking example of this correspondence. Indeed, it is well known that G3i restricted to propositional formulas can be "inverted" to furnish a decision procedure for IPC [5, 4.2.6]. For propositional G3i we have the rules:

\[\frac{}{P,\Gamma\Rightarrow P}\,\mathrm{Ax}\qquad\frac{}{\bot,\Gamma\Rightarrow A}\,\mathrm{L}\bot\]

\[\frac{A,B,\Gamma\Rightarrow C}{A\wedge B,\Gamma\Rightarrow C}\,\mathrm{L}\wedge\qquad\frac{A,\Gamma\Rightarrow C\quad B,\Gamma\Rightarrow C}{A\vee B,\Gamma\Rightarrow C}\,\mathrm{L}\vee\qquad\frac{A\rightarrow B,\Gamma\Rightarrow A\quad B,\Gamma\Rightarrow C}{A\rightarrow B,\Gamma\Rightarrow C}\,\mathrm{L}\rightarrow\]

\[\frac{\Gamma\Rightarrow A\quad\Gamma\Rightarrow B}{\Gamma\Rightarrow A\wedge B}\,\mathrm{R}\wedge\qquad\frac{\Gamma\Rightarrow A}{\Gamma\Rightarrow A\vee B}\,\mathrm{R}\vee_{1}\qquad\frac{\Gamma\Rightarrow B}{\Gamma\Rightarrow A\vee B}\,\mathrm{R}\vee_{2}\qquad\frac{A,\Gamma\Rightarrow B}{\Gamma\Rightarrow A\rightarrow B}\,\mathrm{R}\rightarrow\]

Our prover manipulates lists of sequents. A sequent-list object consists of the current list of sequents together with the history \(h\) of the rules that have been applied to it and the accumulated set \(m\) of all the new sequents that have been generated by these rules (this is to avoid cycles). A rule \(r\) is an application which takes a sequent list \(S\) and yields a new sequent list \(S^{\prime}\). The transformation of the list of sequents is effected by choosing one specific sequent \(s_{k}\), eliminating it and possibly generating new sequents, which are incorporated into the previous list with \(s_{k}\) removed. The rule is added to \(h\) and the new sequents (if any) are added to \(m\). To prove a formula \(f\) we start with a sequent-list object \(((\Rightarrow f),(\emptyset,()))\), that is, having a single sequent. We want to find a sequence of rules \(r_{1},...,r_{n}\) such that, when applied in order, they yield a sequent-list object of the form \(((),(m,h))\). The best way to understand our inversion of propositional G3i is to examine the source code which is found in the file gentzen.py in the PyLog main folder. Consider the Sequent and SequentList classes:

    class Sequent:
        def __init__(self, head, body):
            self.head = head
            self.body = body
        (...)

    class SequentList:
        def __init__(self, formstring):
            form = astop.NegationExpand(parser.Formula(tokenizer.Tokenize(formstring)))
            seq = Sequent(form, [])
            self.sequentlist = [seq]
            self.memory = []
            self.proof = []
        (...)
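Before turning to the rules, it may help to see how a single sequent is represented. The snippet below is only a sketch, assuming gentzen.py has been loaded interactively so that parser, tokenizer and the classes above are in scope, and that formula strings follow the same syntax as in the Auto() example further down; it is not taken from the repository.

```python
# Hypothetical interactive sketch: build the sequent  (A & B) => A  by hand,
# mirroring what SequentList.__init__ does for the initial sequent  => f.
succedent = parser.Formula(tokenizer.Tokenize("A"))          # formula on the right (head)
antecedent = parser.Formula(tokenizer.Tokenize("(A & B)"))   # one formula of the context (body)
seq = Sequent(succedent, [antecedent])                       # head: a formula, body: a list of formulas
```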
Here is an example of a rule: def rand(seq,n): if n < len(seq.sequentlist) and seq.sequentlist[n].head.name=="constructor": if seq.sequentlist[n].head.operator.name=="&": aux1 = seq.sequentlist[:n] aux2 = seq.sequentlist[n+1:] aux3 = seq.sequentlist[n].body left = seq.sequentlist[n].head.left right = seq.sequentlist[n].head.right new1 = Sequent(left,aux3) new2 = Sequent(right, aux3) if not SeqInc(new1, seq.memory) and not SeqInc(new2,seq.memory): seq.sequentlist = aux1 + [new1,new2] + aux2 seq.memory.append(new1) seq.memory.append(new2) seq.proof.append("rand(" + str(n) +")") return seq return False There are nine rules rand, ror1,ror2, rimp, land, lor, limp, labs, ax. This genzen.py program runs on the same syntactic engine as PyLog and formulas are entered the same way. Note that it can only find proofs of _intuitionistic_ propositional validities. The algorithm used to find a proof is a simple brute-force search which avoids cycles. It can take some time. $ python3.9 -i gentzen.py >>> Auto("( neg (A v B) -> (neg A & neg B))") >>> Prove() 1. rimp(0) 2. rand(0) 3. rimp(0) 4. imp(0,1) 5. ror1(0) 6. ax(0,0) A, -\((\)A v B) => A 7. ax(0,1) A, _|_ => _|_ 8. rimp(0) 9. limp(0,1) 10. ror2(0) 11. ax(0,0) B, -\((\)A v B) => B 12. ax(0,1) B, _|_ => _|_ 0. B, -\((\)A v B) => B Ax0 1. B, _|_ => _|_ Ax1 2. B, -\((\)A v B) => A v B Ror2 0 3. B, -\((\)A v B) => _|_ Limp1 2 1 4. A, -\((\)A v B) => A Ax0 5. A, _|_ => _|_ Ax1 6. A, -\((\)A v B) => A v B Ror1 4 7. A, -\((\)A v B) => _|_ Limp1 6 5 8. -\((\)A v B) => -B Rimp 3 9. -\((\)A v B) => -\(\Lambda\) Rimp 7 10. -\((\)A v B) => -\(\Lambda\) & -\(\beta\) Rand 9 8 11. => -\((\)A v B) -> (-\(\Lambda\) & -\(\beta\)) Rimp 10 As seen above there is also an algorithm which reconstructs a linear version of the full sequent proof tree from the proof. Looking at this linear sequent proof it is very easy to obtain the corresponding linear natural deduction proof. This will be implemented in future versions of Pylog. After the proof sequence is obtained can also be tested manually by applying the rules in order to the initial sequent list object (which is the object State). >>> State.display() 0. -> -\((\)A v B) -> (-A & -\(\beta\)) >>> rimp(State,0) <_main__SequentList object at 0x109a69070> >>> State.display() 0. -\((\)A v B) => -\(\Lambda\) & -\(\beta\) >>> rand(State,0) <_main__SequentList object at 0x109a69070> >>> State.display() 0. -\((\)A v B) => -\(\Lambda\) 1. -\((\)A v B) => -\(\beta\) >>> rimp(State,0) <_main__SequentList object at 0x109a69070> >>>>State.display() 0.A, \(\(\A\)vB)=>_|_ 1. \(\neg\(\A\)vB)=>_|_ >>>>limp(State,0,1) <_main_.SequentListobjectat0x109a69070> >>>State.display() 0.A, \(\neg\(\A\)vB)=>AvB 1.A, _|_=>_|_ 2. \(\neg\(\A\)vB)=>\(\neg\)B and so forth. We will in the future implement a further algorithm that transforms the linear sequent proof tree into a Pylog linear natural deduction proof. ### Formalising Category Theory >>>>ShowAxioms() 0. ([f:A \(\rightarrow\)B|C] & [g:B \(\rightarrow\)D|C]) -> [(gof):A \(\rightarrow\)D|C] 1.A:C -> [id(A):A \(\rightarrow\)A|C] 2. (A:C & [f:A \(\rightarrow\)B|C]) -> (((id(A)of) = f) & ((foid(A)) = f)) 3. ([f:A \(\rightarrow\)B|C] & ([g:B \(\rightarrow\)D|C] & [h:D \(\rightarrow\)E|C])) -> ((ho(gof)) = ((hog)of)) 4.A:C ->Cat(C) 5. [f:A \(\rightarrow\)B|C] -> (A:C & B:C) 6. (Cat(A) & Cat(B)) <->Cat(func(A,B)) 7.F:func(A,B) ->FA:B 8. (F:func(A,B) & [f:a \(\rightarrow\)b|A]) -> [Ff:Fa \(\rightarrow\)Fb|B] 9. (F:func(A,B) & a:A) -> (Fid(A) = id(FA)) 10. 
(F:func(A,B) & ([f:a \(\rightarrow\)b|C] & [g:b \(\rightarrow\)c|C])) -> (F(gof) = (FgoFf)) 11. ([e:F \(\rightarrow\)G|func(A,B)] & a:A) -> [nat(e,a):Fa \(\rightarrow\)Ga|B] 12. ([e:F \(\rightarrow\)G|func(A,B)] & [f:a \(\rightarrow\)b|A]) -> ((Gfonat(e,a)) = (nat(e,b)off)) >>>>ShowDefinitions() Terminal(A,C) <-> (A:C & VB.(B:C -> \(\exists^{\dagger}\)f.[f:B \(\rightarrow\)A|C])) Isomorphism(f,A,B,C) <-> ([f:A \(\rightarrow\)B|C] & \(\exists\)g.([g:B \(\rightarrow\)A|C] & (((gof) = id(A)) & ((fog) Iso(A,B,C) <-> \(\exists\)f.Isomorphism(f,A,B,C) >>>>ShowProof() 0. Terminal(T,C) Hyp 1. Terminal(S,C) Hyp 2.T:C & VB.(B:C -> \(\exists^{\dagger}\)f.[f:B \(\rightarrow\)T|C]) DefExp 0 3.S:C & VB.(B:C -> \(\exists^{\dagger}\)f.[f:B \(\rightarrow\)S|C]) DefExp 1 4.T:C AndElimL 2 5.S:C AndElimL 3 6.VB.(B:C -> \(\exists^{\dagger}\)f.[f:B \(\rightarrow\)T|C]) AndElimR 2 7.VB.(B:C -> \(\exists^{\dagger}\)f.[f:B \(\rightarrow\)S|C]) AndElimR 3 8.T:C -> \(\exists^{\dagger}\)f.[f:T \(\rightarrow\)T|C] ForallElim 6 9.S:C -> \(\exists^{\dagger}\)f.[f:S \(\rightarrow\)S|C] ForallElim 7 10.A:C -> [id(A):A \(\rightarrow\)A|C] AxInt 11.\(\forall\)A.(A:C -> [id(A):A \(\rightarrow\)A|C]) ForallInt 10 12.T:C -> [id(T):T \(\rightarrow\)T|C] ForallElim 11 13. \(\forall\)A.(A:C -> [id(A): A - A|C]) ForallInt 10 14. S:C -> [id(S): S - S|C] ForallElim 13 15. [id(T): T - T|C] ImpElim 4 12 16. [id(S): S - S|C] ImpElim 5 14 17. \(\exists^{1}\)f.[f: T - T|C] ImpElim 4 8 18. \(\exists^{1}\)f.[f: S - S|C] ImpElim 5 9 19. \(\exists\)f.([f: T - T|C] & \(\forall\)l.([l: T - T|C] -> (l = f))) UniqueElim 17 20. \(\exists\)f.([f: S - S|C] & \(\forall\)m.([m: S - S|C] -> (m = f))) UniqueElim 18 21. [j: T - T|C] & \(\forall\)l.([l: T - T|C] -> (l = j)) Hyp 22. [k: S - S|C] & \(\forall\)m.([m: S - S|C] -> (m = k)) Hyp 23. \(\forall\)l.([l: T - T|C] -> (l = j)) AndElimR 21 24. \(\forall\)m.([m: S - S|C] -> (m = k)) AndElimR 22 25. [j: T - T|C] AndElimL 21 26. [k: S - S|C] AndElimL 22 27. S:C -> \(\exists^{1}\)f.[f: S - T|C] ForallElim 6 28. T:C -> \(\exists^{1}\)f.[f: T - S|C] ForallElim 7 29. \(\exists^{1}\)f.[f: S - T|C] ImpElim 5 27 30. \(\exists^{1}\)f.[f: T - S|C] ImpElim 4 28 31. \(\exists\)f.([f: S - T|C] & \(\forall\)r.([r: S - T|C] -> (r = f))) UniqueElim 29 32. \(\exists\)f.([f: T - S|C] & \(\forall\)s.([s: T - S|C] -> (s = f))) UniqueElim 30 33. [u: S - T|C] & \(\forall\)r.([r: S - T|C] -> (r = u)) Hyp 34. [v: T - S|C] & \(\forall\)s.([s: T - S|C] -> (s = v)) Hyp 35. [u: S - T|C] AndElimL 33 36. [v: T - S|C] AndElimL 34 37. ([f: A - B|C] & [g: B - D|C]) -> [(gof): A - D|C] AxInt 38. \(\forall\)A.(([f: A - B|C] & [g: B - D|C]) -> [(gof): A - D|C]) ForallInt 37 39. ([f: T - B|C] & [g: B - D|C]) -> [(gof): T - D|C] ForallElim 38 40. \(\forall\)B.(([f: T - B|C] & [g: B - D|C]) -> [(gof): T - D|C]) ForallInt 39 41. ([f: T - S|C] & [g: S - D|C]) -> [(gof): T - D|C] ForallElim 40 42. \(\forall\)f.(([f: T - S|C] & [g: S - D|C]) -> [(gof): T - D|C]) ForallInt 41 43. ([v: T - S|C] & [g: S - D|C]) -> [(gov): T - D|C] ForallElim 42 44. \(\forall\)g.(([v: T - S|C] & [g: S - D|C]) -> [(gov): T - D|C]) ForallInt 43 45. ([v: T - S|C] & [u: S - D|C]) -> [(uov): T - D|C] ForallElim 44 46. \(\forall\)D.(([v: T - S|C] & [u: S - D|C]) -> [(uov): T - D|C]) ForallInt 45 47. ([v: T - S|C] & [u: S - T|C]) -> [(uov): T - T|C] ForallElim 46 48. [v: T - S|C] & [u: S - T|C] AndInt 36 35 49. [(uov): T - T|C] ImpElim 48 47 50. \(\forall\)A.(([f: A - B|C] & [g: B - D|C]) -> [(gof): A - D|C]) ForallInt 37 51. 
([f: S - B|C] & [g: B - D|C]) -> [(gof): S - D|C] ForallElim 50 52. \(\forall\)B.(([f: S - B|C] & [g: B - D|C]) -> [(gof): S - D|C]) ForallInt 51 53. ([f: S - T|C] & [g: T - D|C]) -> [(gof): S - D|C] ForallElim 52 54. \(\forall\)D.([f: S - T|C] & [g: T - D|C]) -> [(gof): S - D|C]) ForallInt 53 55. ([f: S - T|C] & [g: T - S|C]) -> [(gof): S - S|C] ForallElim 54 56. \(\forall\)f.([f: S - T|C] & [g: T - S|C]) -> [(gof): S - S|C]) ForallInt 55 57. ([u: S - T|C] & [g: T - S|C]) -> [(gou): S - S|C] ForallElim 56 58. \(\forall\)g.([u: S - T|C] & [g: T - S|C]) -> [(gou): S - S|C]) ForallInt 57 59. ([u: S - T|C] & [v: T - S|C]) -> [(vou): S - S|C] ForallElim 58 60. [u: S - T|C] & [v: T - S|C] AndInt 35 36 61. [(vou): S - S|C] ImpElim 60 59 62. [id(T): T - T|C] -> (id(T) = j) ForallElim 23 63. id(T) = j ImpElim 15 62 64. [(uov): T \(\rightarrow\) T[C]] -> ((uov) = j) ForallElim 23 65. (uov) = j ImpElim 49 64 66. [id(S): S \(\rightarrow\) S[C] -> (id(S) = k) ForallElim 24 67. id(S) = k ImpElim 16 66 68. [(vou): S \(\rightarrow\) S[C] -> ((vou) = k) ForallElim 24 69. (vou) = k ImpElim 61 68 70. j = id(T) Symmetry 63 71. k = id(S) Symmetry 67 72. (uov) = id(T) EqualitySub 65 70 73. (vou) = id(S) EqualitySub 69 71 74. ((uov) = id(T)) & ((vou) = id(S)) AndInt 72 73 75. [u: S \(\rightarrow\) T[C] & (((uov) = id(T)) & ((vou) = id(S))) AndInt 35 74 76. \(\exists\)h.([h: S \(\rightarrow\) T[C] & (((hov) = id(T)) & ((voh) = id(S)))) ExistsInt 75 77. [v: T \(\rightarrow\) S[C] & \(\exists\)h.([h: S \(\rightarrow\) T[C] & ((hov) = id(T)) & ((voh) = id(S)))) AndInt 36 78. Isomorphism(v,T,S,C) DefSub 77 79. \(\exists\)f.Isomorphism(f,T,S,C) ExistsInt 78 80. Iso(T,S,C) DefSub 79 81. Iso(T,S,C) ExistsElim 19 21 80 82. Iso(T,S,C) ExistsElim 20 22 81 83. Iso(T,S,C) ExistsElim 31 33 82 84. Iso(T,S,C) ExistsElim 32 34 83 85. Terminal(S,C) -> Iso(T,S,C) ImpInt 84 86. Terminal(T,C) -> (Terminal(S,C) -> Iso(T,S,C)) ImpInt 85 Qed ### Principle of Transcendental Analogy It is cumbersome to work with formal proofs involving quotients of algebraic structures. We consider a class \(U\) of all the different models (or representatives of isomorphism classes) of a given structure. And we fix an axiomatic system for operations \(f_{1},...,f_{n}\) which act upon the whole \(\bigcup U\) and by restrictions on all the elements \(M\) of \(U\) where it corresponds to the various operations of these structures. It is like Weil's foundations for Algebraic Geometry in which all fields of rational functions are seen as subfields of one large field. Quotients (for instance for groups or rings) are defined not in terms of structures but of epimorphisms \(f:A\to B\) and a given substructure (normal subgroup, ideal) of the domain \(A\). ### Second-Order Arithmetic The project is to formalise Stephen G. Simpson's book Subsystems of Second-Order Arithmetic. The axioms of Z2 are interpreted as follows: 0. Nat(x) v Set(x) 1. Nat(x) -> \(\neg\)Set(x) 2. Nat(0) & Nat(1) 3. Nat(n) -> \(\neg\)((n + 1) = 0) 4. (Nat(n) & Nat(m)) -> (((m + 1) = (n + 1)) -> (m = n)) 5. Nat(m) -> ((m + 0) = m) 6. (Nat(n) & Nat(m)) -> ((m + (n + 1)) = ((m + n) + 1)) 7. Nat(m) -> ((m * 0) = 0) 8. (Nat(n) & Nat(m)) -> ((m * (n + 1)) = ((m * n) + m)) 9. Nat(m) -> -(m < 0) 10. (Nat(n) & Nat(m)) -> ((m < (n + 1)) <-> ((m < n) v (m = n))) 11. (x \(\epsilon\) X) -> (Nat(x) & Set(X)) 12. 
((0 \(\epsilon\) X) & \(\forall\)n.((n \(\epsilon\) X) -> ((n + 1) \(\epsilon\) X))) -> \(\forall\)n.(n \(\epsilon\) X)

\[\frac{n\ \epsilon\ \{x:\mathrm{form}(x)\}}{\mathrm{Nat}(n)\ \&\ \mathrm{form}(n)}\qquad\qquad\frac{\mathrm{Nat}(n)\ \&\ \mathrm{form}(n)}{n\ \epsilon\ \{x:\mathrm{form}(x)\}}\]

The comprehension scheme is converted into the two rules above, which are similar to ClassElim and ClassInt used to formalise Kelley-Morse set theory. Example of a simple proof: 0. 1 = 0 Hyp 1. Nat(0) & Nat(1) AxInt 2. 0 = 0 Identity 3. Nat(0) AndElimL 1 4. Nat(n) -> \(\neg\)((n + 1) = 0) AxInt 5. \(\forall\)n.(Nat(n) -> \(\neg\)((n + 1) = 0)) ForallInt 4 6. Nat(0) -> \(\neg\)((0 + 1) = 0) ForallElim 5 7. \(\neg\)((0 + 1) = 0) ImpElim 3 6 8. \(\neg\)((0 + 0) = 0) EqualitySub 7 0 9. Nat(m) -> ((m + 0) = m) AxInt 10. \(\forall\)m.(Nat(m) -> ((m + 0) = m)) ForallInt 9 11. Nat(0) -> ((0 + 0) = 0) ForallElim 10 12. (0 + 0) = 0 ImpElim 3 11 13. \(\neg\)(0 = 0) EqualitySub 8 12 14. _|_ ImpElim 2 13 15. \(\neg\)(1 = 0) ImpInt 14 Qed A tentative command log for this proof is sketched below, after the next subsection. ### Euclidean Geometry For a formalisation of the first theorem of Euclid in PyLog see [2].
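In the same spirit as the earlier theorems, the displayed Z2 proof can be paired with a command log. The reconstruction below is only a tentative sketch: the axiom indices passed to AxInt follow the numbering of the Z2 axioms listed above, while the formula-string syntax for equalities and the exact signatures of Identity and EqualitySub in this environment are assumptions that would need to be checked against the PyLog Z2 files.

```
0. Hyp("(1 = 0)")          # equality syntax assumed
1. AxInt(2)                # Nat(0) & Nat(1)
2. Identity("0")           # 0 = 0; signature of Identity assumed
3. AndElimL(1)
4. AxInt(3)                # Nat(n) -> neg ((n + 1) = 0)
5. ForallInt(4,"n","n")
6. ForallElim(5,"0")
7. ImpElim(3,6)
8. EqualitySub(7,0,[0])
9. AxInt(5)                # Nat(m) -> ((m + 0) = m)
10. ForallInt(9,"m","m")
11. ForallElim(10,"0")
12. ImpElim(3,11)
13. EqualitySub(8,12,[0])
14. ImpElim(2,13)
15. ImpInt(14,0)
```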
2305.15621
Matrix Estimation for Offline Reinforcement Learning with Low-Rank Structure
We consider offline Reinforcement Learning (RL), where the agent does not interact with the environment and must rely on offline data collected using a behavior policy. Previous works provide policy evaluation guarantees when the target policy to be evaluated is covered by the behavior policy, that is, state-action pairs visited by the target policy must also be visited by the behavior policy. We show that when the MDP has a latent low-rank structure, this coverage condition can be relaxed. Building on the connection to weighted matrix completion with non-uniform observations, we propose an offline policy evaluation algorithm that leverages the low-rank structure to estimate the values of uncovered state-action pairs. Our algorithm does not require a known feature representation, and our finite-sample error bound involves a novel discrepancy measure quantifying the discrepancy between the behavior and target policies in the spectral space. We provide concrete examples where our algorithm achieves accurate estimation while existing coverage conditions are not satisfied. Building on the above evaluation algorithm, we further design an offline policy optimization algorithm and provide non-asymptotic performance guarantees.
Xumei Xi, Christina Lee Yu, Yudong Chen
2023-05-24T23:49:06Z
http://arxiv.org/abs/2305.15621v1
# Matrix Estimation for Offline Reinforcement Learning with Low-Rank Structure ###### Abstract We consider offline Reinforcement Learning (RL), where the agent does not interact with the environment and must rely on offline data collected using a behavior policy. Previous works provide policy evaluation guarantees when the target policy to be evaluated is covered by the behavior policy, that is, state-action pairs visited by the target policy must also be visited by the behavior policy. We show that when the MDP has a latent low-rank structure, this coverage condition can be relaxed. Building on the connection to weighted matrix completion with non-uniform observations, we propose an offline policy evaluation algorithm that leverages the low-rank structure to estimate the values of uncovered state-action pairs. Our algorithm does not require a known feature representation, and our finite-sample error bound involves a novel discrepancy measure quantifying the discrepancy between the behavior and target policies in the spectral space. We provide concrete examples where our algorithm achieves accurate estimation while existing coverage conditions are not satisfied. Building on the above evaluation algorithm, we further design an offline policy optimization algorithm and provide non-asymptotic performance guarantees. ## 1 Introduction Reinforcement Learning (RL) has achieved significant empirical success in the online setting, where the agent continuously interacts with the environment to collect data and improve its performance. However, online exploration is costly and risky in many applications, such as healthcare [5] and autonomous driving [20], in which case it is preferable to learn from a pre-collected observational dataset from doctors or human drivers using their own policies. Due to lack of on-policy interaction with the environment, offline RL faces the fundamental challenge of distribution shift [9]. A standard approach for handling distribution shift is importance sampling [16, 15]. More sophisticated approaches have been proposed to alleviate the high variance of importance sampling [3, 26]. Recent works [22, 12, 28] consider estimating the state marginal importance ratio, a more tractable problem. Existing work on offline RL requires the dataset to have sufficient coverage. A standard measure for coverage is the concentrability coefficient [24]: \(C^{\pi}=\max_{s,a}\frac{d^{\pi}(s,a)}{\rho(s,a)}\), which is the ratio between the state-action occupancy measure of a policy \(\pi\) of interest and the (empirical) occupancy measure \(\rho\) of the behavior policy generating the offline dataset. However, this can be restrictive as the support of \(\rho\) must contain that of \(d^{\pi}\) in order for \(C^{\pi}\) to be finite. Earlier work such as the Fitted Q-iteration (FQI) algorithm [11] requires full coverage, i.e. \(C^{\pi}<\infty\) for all policies \(\pi\). More recent works [24, 17, 10] requires a more relaxed, partial coverage condition \(C^{\pi^{*}}<\infty\) with \(\pi^{*}\) being optimal policy. Partial coverage is still a fairly strong requirement: the behavior policy must visit every state the optimal policy would visit, and take every action the optimal policy would take. In this paper, we seek to relax the coverage condition for offline policy evaluation in settings where the Markov decision process (MDP) has a latent low-rank structure. 
Similarly to [19, 18], we view the \(Q\) function as a matrix and exploit its low-rank structure to infer the entries that were not observed in the offline data. Unlike typical results from the low-rank matrix completion literature, our setting requires completing the matrix under non-uniform sampling, as in [4, 8]; moreover, the error is evaluated under a different distribution or weighted norm, leading to the fundamental challenge of distribution shift. By leveraging techniques from weighted and non-uniform matrix completion, we develop a new offline policy evaluation algorithm, which alternates between Q iteration and matrix estimation. For both the infinite and finite sample settings, we show that the evaluation error can be bounded in terms of a novel discrepancy measure between the behavior and target policies. In contrast to the standard concentrability coefficient, our discrepancy measure may remain finite even when the behavior policy does not cover the support of the target policy. We present a concrete example where the concentrability coefficient is infinite but our method achieves a meaningful error bound. Building on the above evaluation algorithm, we further design an offline policy optimization algorithm with provable performance guarantees. There are several challenges that arise when we borrow ideas from low-rank matrix estimation to offline RL. Classical matrix estimation results require two assumptions that are hard to satisfy in MDP. First, independence is assumed between the sampling process and the observation noise. This is clearly not true for MDPs, where observation noise is intertwined with the sampling. For example, if a state-action pair is sampled more frequently, the empirical observations (e.g., transition frequencies) are bound to be less noisy than others sampled less often. Second, sampling in matrix estimation typically requires each entry to have a non-zero probability of being observed. Sampling in MDPs is very different: only entries on the support of the sampling distribution, which is determined by the behavior policy, can be sampled; those off the support have a zero observation probability. We note that various recent works attempt to relax the aforementioned assumptions to make matrix estimation more applicable to real-world sequential decision-making problems. For example, the paper [2] allows for some dependence between the noise and sampling pattern, and the smallest sampling probability can be zero. Their algorithm, which involves finding a maximum biclique, works best with datasets with a specific restrictive structure, which is often not present in offline RL. Our goal in this paper is to derive a performance guarantee for a more general class of sampling patterns. ## 2 Problem Setup ### MDP with Low-Rank Structure Consider an MDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},H,P,r,\mu_{1})\) with finite state space \(\mathcal{S}\), finite action space \(\mathcal{A}\), horizon \(H\), transition kernel \(P=\{P_{t}\}_{t\in[H]}\), bounded reward function \(r=\{r_{t}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\}_{t\in[H]}\), and initial state distribution \(\mu_{1}\in\Delta(\mathcal{S})\). 
Let \(S=|\mathcal{S}|\) and \(A=|\mathcal{A}|.\) For each policy \(\pi=\{\pi_{t}:\mathcal{S}\rightarrow\Delta(\mathcal{A})\}_{t\in[H]}\), the Q function \(Q_{t}^{\pi}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is defined as \(Q_{t}^{\pi}(s,a)=\mathbb{E}_{\pi}[\sum_{i=t}^{H}r_{i}(s_{i},a_{i})|s_{t}=s,a_{ t}=a]\), and the total expected reward is \(J^{\pi}=\mathbb{E}_{\pi}[\sum_{t=1}^{H}r_{t}(s_{t},a_{t})|s_{1}\sim\mu_{1}].\) Let \(d_{t}^{\pi}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) denote the state-action occupancy measure at time \(t\in[H]\) under policy \(\pi\). Given a dataset generated by the behavior policy \(\pi^{\beta}\), our goal is to estimate \(J^{\pi^{\theta}}\) for a target policy \(\pi^{\theta}\). Our blanket assumption is that the MDP has the following low-rank structure, which implies that for any policy \(\pi\), its \(Q\) function (viewed as an \(S\)-by-\(A\) matrix) is at most rank \(d\). **Assumption**.: _For all \(t\), \(r_{t}\in[0,1]^{S\times A}\) has rank at most \(d^{\prime}=\lfloor d/2\rfloor\), and \(P_{t}\) admits the decomposition_ \[P_{t}(s^{\prime}|s,a)=\sum_{i=1}^{d^{\prime}}u_{t,i}(s^{\prime},s)w_{t,i}(a) \quad\text{or}\quad P_{t}(s^{\prime}|s,a)=\sum_{i=1}^{d^{\prime}}u_{t,i}(s)w_ {t,i}(s^{\prime},a),\quad\forall s^{\prime},s,a.\] The above low-rank model is different from the Low-rank MDP model considered in previous works [1, 25]. In Low-rank MDPs, the transition kernel \(P\) is assumed to have a factorization of the form \(P(s^{\prime}|s,a)=\sum_{i=1}^{d^{\prime}}u_{i}(s^{\prime})w_{i}(s,a)\), where the factors \(u_{i}(\cdot)\) and \(w_{i}(\cdot,\cdot)\) are unknown. Closely related is the Linear MDP model [6, 27], where the feature maps \(w_{i}(\cdot,\cdot)\) are known. In these models, the low-rank/linear structures are with respect to the relationship between the originating state-action pair \((s,a)\) and the destination state \(s^{\prime}\); they do _not_ imply that \(Q\) function is low-rank when viewed as a matrix. In contrast, our model stipulates that the transition kernel can be factorized either between (i) \(a\) and \((s,s^{\prime})\) or (ii) \(s\) and \((s^{\prime},a)\), both of which imply a low dimensional relationship between the current state \(s\) and the action \(a\) taken at that state, resulting in a low-rank \(Q\) function. A key consequence of the \(Q\) function being low-rank is that we can bypass modeling the environment and directly estimate the \(Q\) function by leveraging its low-rankness, resulting in a model-free method. On the other hand, most works in Low-rank MDPs consider model-based methods. Note that when the transition tensor is fully factorized: \(P_{t}(s^{\prime}|s,a)=\sum_{i=1}^{d^{\prime}}u_{t,i}(s^{\prime})v_{t,i}(s)w_{t,i}(a)\), it satisfies both our assumption and the assumption of Low-rank MDPs. ### Offline Dataset The offline dataset is denoted by \(\mathcal{D}=\{(s_{t}^{k},a_{t}^{k},r_{t}^{k})\}_{t\in[H],k\in[K]}\), which contains \(K\) independent trajectories generated from the behavior policy \(\pi^{\beta}\). We consider two settings: the infinite-sample setting with \(K\rightarrow\infty\) and the finite-sample setting with \(K<\infty\); we describe these two settings in detail below. For simplicity, we assume the immediate rewards are observed without noise, which can be easily relaxed. The uncertainty in the system completely comes from the transition probability. 
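To make the role of the low-rank assumption concrete, the following is a small self-contained numerical sketch (our illustration, not code from the paper) of the fully factorized case mentioned at the end of Section 2.1: a transition kernel built from a few separable terms, together with a rank-1 reward, forces every \(Q_{t}^{\pi}\), viewed as an \(S\times A\) matrix, to have small rank. All sizes and the specific construction are arbitrary choices made for illustration.

```python
# Illustrative sketch: a mixture kernel P(s'|s,a) = lam(s,a) mu1(s') + (1-lam(s,a)) mu2(s')
# with rank-1 lam(s,a) = f(s) g(a) in [0,1]; expanding shows it is a sum of two separable
# terms u_i(s') v_i(s) w_i(a), so backward induction yields a low-rank Q matrix.
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 30, 20, 5                                  # arbitrary sizes

def mixture_kernel():
    mu1 = rng.random(S); mu1 /= mu1.sum()            # two distributions over next states
    mu2 = rng.random(S); mu2 /= mu2.sum()
    lam = np.outer(rng.random(S), rng.random(A))     # rank-1 mixing weights in [0, 1]
    # shape (S', S, A); each (s, a) column sums to 1 over s'
    return lam[None, :, :] * mu1[:, None, None] + (1.0 - lam)[None, :, :] * mu2[:, None, None]

P = [mixture_kernel() for _ in range(H)]
r = [np.outer(rng.random(S), rng.random(A)) for _ in range(H)]   # rank-1 rewards
pi = rng.random((H, S, A)); pi /= pi.sum(axis=2, keepdims=True)  # an arbitrary policy

V = np.zeros(S)
for t in reversed(range(H)):
    Q = r[t] + np.einsum('psa,p->sa', P[t], V)   # Q_t(s,a) = r_t(s,a) + E_{s'}[V_{t+1}(s')]
    V = (pi[t] * Q).sum(axis=1)                  # V_t(s) = sum_a pi_t(a|s) Q_t(s,a)

# Q is now Q_1^pi; only a handful of singular values are (numerically) nonzero.
print(np.round(np.linalg.svd(Q, compute_uv=False)[:8], 6))
```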
In the infinite-sample setting, we have partial but noiseless knowledge of the MDP on the support of the state-action occupancy measure induced by the behavior policy. In other words, for all state-action pairs that can be visited using the behavior policy, namely, \((s,a)\in\text{supp}(d_{t}^{\pi^{\beta}})\), we know the exact values of the transition probability \(P_{t}(\cdot|s,a)\). It is important to note that even in this idealized setting, off-policy evaluation is still non-trivial. When the behavior policy does not have full coverage, i.e., \(\text{supp}(d_{t}^{\pi^{\beta}})\neq\mathcal{S}\times\mathcal{A}\), we do not have any information for the state-action pairs off the support and they must be estimated by leveraging the low-rank structure of the \(Q\) function. The distribution shift that arises in the infinite-sample setting can be attributed to the difference in support, which is precisely reflected in our proposed distribution discrepancy in Definition 1 and the corresponding error bound in Theorem 1. In the finite-sample setting, we have a noisy and unbiased estimate \(\widehat{P}_{t}(\cdot|s,a)\) of the true transition probability \(P_{t}(\cdot|s,a)\) for \((s,a)\in\text{supp}(\widehat{d}_{t}^{\pi^{\beta}})\), where \(\widehat{d}_{t}^{\pi^{\beta}}\) denotes the empirical data distribution of \(K\) independent samples from the true distribution \(d_{t}^{\pi^{\beta}}\). Since different estimates of the probability exhibit different levels of uncertainty, only considering the support is no longer sufficient. In particular, the finite-sample distribution shift depends not only on the difference in support, but also the difference in the specific distributions, which will be reflected in our proposed distribution discrepancy in Definition 2 and the subsequent error bound in Theorem 2. ### Notation and Operator Discrepancy For a matrix \(M\in\mathbb{R}^{n\times m}\), let \(\left\|M\right\|_{*}\) denote its nuclear norm (sum of singular values), \(\left\|M\right\|_{\text{op}}\) its operator norm (maximum singular value), \(\left\|M\right\|_{\infty}=\max_{i,j}\left|M_{ij}\right|\) its entrywise \(\ell_{\infty}\) norm, and \(\text{supp}(M)=\{(i,j):M_{ij}\neq 0\}\) its support. The max norm [21] of \(M\) is defined as \(\left\|M\right\|_{\max}=\min_{U,V:X=UV^{\top}}\left\|U\right\|_{2\rightarrow \infty}\left\|V\right\|_{2\rightarrow\infty}\), where \(\left\|\cdot\right\|_{2\rightarrow\infty}\) denotes the maximum row \(\ell_{2}\) norm of a matrix. Both max norm and nuclear norm can be viewed as convex surrogates of the matrix rank [21]. For a rank-\(d\) matrix \(M\), its max norm can be upper bounded as \[\left\|M\right\|_{\max}\leq\sqrt{d}\left\|M\right\|_{\infty}. \tag{1}\] The nuclear norm and the max norm satisfy: \[\frac{1}{\sqrt{nm}}\left\|M\right\|_{*}\leq\left\|M\right\|_{\max}\leq\left\| M\right\|_{*}. \tag{2}\] The indicator matrix \(\mathds{1}_{M}\in\{0,1\}^{n\times m}\) is a binary matrix encoding the position of the support of \(M\). The entrywise product between two matrices \(M\) and \(M^{\prime}\) is denoted by \(M\circ M^{\prime}\). We propose a novel discrepancy measure defined below, and show that it can replace the role of the concentrability coefficient in our infinite-sample error bound under the low-rank assumption. 
**Definition 1** (Operator discrepancy).: _The operator discrepancy between two probability distributions \(p,q\in\Delta(\mathcal{S}\times\mathcal{A})\) is defined as_ \[\mathrm{Dis}(p,q)\coloneqq\min\Bigl{\{}\left\|g-q\right\|_{\text{op}}:g\in\Delta(\mathcal{S}\times\mathcal{A}),\;\text{supp}(g)\subseteq\text{supp}(p)\Bigr{\}}. \tag{3}\] Note that \(\mathrm{Dis}(p,q)\leq\left\|p-q\right\|_{\text{op}}\) is always finite, and \(\mathrm{Dis}(p,q)=0\) if and only if \(\text{supp}(q)\subseteq\text{supp}(p)\). To provide intuition for \(\mathrm{Dis}(p,q)\), let the minimizer in (3) be \(g^{*}\). By the generalized Hölder inequality, \[\Bigl{|}\mathbb{E}_{(s,a)\sim g^{*}}\bigl{[}M(s,a)\bigr{]}-\mathbb{E}_{(s,a)\sim q}\bigl{[}M(s,a)\bigr{]}\Bigr{|}=\Bigl{|}\langle g^{*},M\rangle-\langle q,M\rangle\Bigr{|}\leq\mathrm{Dis}(p,q)\cdot\|M\|_{*}. \tag{4}\] If the nonzero singular values of \(M\) are of the same scale, then the RHS of (4) is of order \(\operatorname{Dis}(p,q)\cdot\operatorname{rank}(M)\). Therefore, \(\operatorname{Dis}(p,q)\) measures the distribution shift between \(p\) and \(q\) in terms of preserving the expectation of low-rank matrices. Compared to traditional measures such as the concentrability coefficient, the operator discrepancy takes into account the low-rank structure of the model and therefore allows for a less restrictive coverage condition. Note that \(\operatorname{Dis}(p,q)\) only depends on the support of \(p\): if \(\operatorname{supp}(p)=\operatorname{supp}(p^{\prime})\), then \(\operatorname{Dis}(p,q)=\operatorname{Dis}(p^{\prime},q)\) for all \(q\). As mentioned before, the infinite-sample distribution shift depends only on the support of the behavior policy, and the operator discrepancy reflects exactly that. Moreover, thanks to the minimization in the definition (3), \(\operatorname{Dis}(p,q)\) can be significantly smaller than \(\left\lVert p-q\right\rVert_{\operatorname{op}}\). For instance, if \(p\) is the uniform distribution on \(\mathcal{S}\times\mathcal{A}\), then \(g^{*}=q\) and hence \(\operatorname{Dis}(p,q)=0\) for all \(q\). For our finite-sample error bound, we consider a different notion of discrepancy. As we have a limited number of samples from the behavior policy, it is natural that the appropriate notion of discrepancy no longer depends only on the support of the distribution, but rather on how closely the distributions match. In our analysis, the appropriate measure of closeness is given by the operator norm difference, which is also closely tied to Definition 1 when we restrict \(g\) to be equal to \(p\). **Definition 2** (Empirical Operator Discrepancy).: _The empirical operator discrepancy between two probability distributions \(p,q\in\Delta(\mathcal{S}\times\mathcal{A})\) is defined as_ \[\widehat{\operatorname{Dis}}(p,q)\coloneqq\left\lVert p-q\right\rVert_{\operatorname{op}}. \tag{5}\] When infinite samples are given from the behavior policy, the error bound for our proposed off-policy evaluation algorithm will be a function of \(\operatorname{Dis}(d_{t}^{\pi^{\beta}},d_{t}^{\pi^{\theta}})\). The operator discrepancy only depends on the support of the behavior policy and not the exact distribution, which is expected under the infinite-sample setting. As such, the operator discrepancy highlights the inherent error induced by distribution shift. In the finite-sample setting, our error bound depends on the empirical operator discrepancy \(\widehat{\operatorname{Dis}}(d_{t}^{\pi^{\beta}},d_{t}^{\pi^{\theta}})\), for which the exact distribution matters.
This is expected since we are given observations with varying noise levels determined by the empirical data distribution. We remark in passing that the inequality \(\operatorname{Dis}(d_{t}^{\pi^{\beta}},d_{t}^{\pi^{\theta}})\leq\widehat{\operatorname{Dis}}(d_{t}^{\pi^{\beta}},d_{t}^{\pi^{\theta}})\) holds by definition. Also, the above discrepancy metrics share similarity with the parameter \(\Lambda\) in [8], which also measures the difference between two distributions in the operator norm. ## 3 Algorithm In this section, we present our algorithm for offline policy evaluation. The algorithm takes as input an offline dataset \(\mathcal{D}=\{(s_{t}^{k},a_{t}^{k},r_{t}^{k})\}_{t\in[H],k\in[K]}\), which contains \(K\) independent trajectories generated from the behavior policy \(\pi^{\beta}\). The algorithm also takes as input the target policy \(\pi^{\theta}\), the initial state distribution \(\mu_{1}\), weight matrices \((\rho_{t})_{t\in[H]}\), and a matrix estimation algorithm \(\mathtt{ME}(\cdot)\), which will be specified in (7) and (9) for the infinite-sample and finite-sample settings, respectively. The weight matrices \(\{\rho_{t}\}\) are chosen by the user and primarily used as an input to \(\mathtt{ME}(\cdot)\). As a typical choice, in the infinite-sample setting we set \(\rho_{t}\) to be the true state-action occupancy measure \(d_{t}^{\pi^{\beta}}\) of the behavior policy; in the finite-sample setting we set \(\rho_{t}\) to be the empirical measure \(\widehat{d}_{t}^{\pi^{\beta}}\). Our algorithm iterates backwards in the horizon from step \(t=H\) to \(t=1\). For each step \(t\), the algorithm has two parts. First, it applies Q-value iteration to empirically estimate the Q-values for state-action pairs in the support of \(d_{t}^{\pi^{\beta}}\). In particular, the data is used to construct unbiased empirical estimates of the transition kernel and occupancy measure of the behavior policy, denoted by \(\widehat{P}_{t}\) and \(\widehat{d}_{t}^{\pi^{\beta}}\), respectively. Let \(\widehat{B}_{t}^{\pi^{\theta}}\) denote the target policy's empirical Bellman operator, which is given by \[(\widehat{B}_{t}^{\pi^{\theta}}f)(s,a)=r_{t}(s,a)+\sum_{s^{\prime},a^{\prime}}\widehat{P}_{t}(s^{\prime}|s,a)\pi_{t}^{\theta}(a^{\prime}|s^{\prime})f(s^{\prime},a^{\prime}) \tag{6}\] for all \(f:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\). Note that we can evaluate \((\widehat{B}_{t}^{\pi^{\theta}}f)(s,a)\) only over \((s,a)\in\operatorname{supp}(\widehat{d}_{t}^{\pi^{\beta}})\). With the given weight matrix \(\rho_{t}\), which is chosen such that \(\operatorname{supp}(\rho_{t})\subseteq\operatorname{supp}(\widehat{d}_{t}^{\pi^{\beta}})\), the in-support empirical estimate of the Q-value is computed via \[Z_{t}(s,a)\leftarrow(\widehat{B}_{t}^{\pi^{\theta}}\widehat{Q}_{t+1}^{\pi^{\theta}})(s,a),\quad\text{for }(s,a)\in\operatorname{supp}(\rho_{t}).\] Subsequently, to infer the Q-values off support, the algorithm uses the low-rank matrix estimation subroutine \(\mathtt{ME}(\cdot)\), which takes as input the weight matrix \(\rho_{t}\) and the empirical estimates \(Z_{t}\). While \(\mathtt{ME}(\cdot)\) can be any off-the-shelf matrix estimation algorithm, for the purpose of the analysis we will use the max norm minimization method due to its computational tractability and robustness under nonuniform sampling. Specifically, our \(\mathtt{ME}(\cdot)\) subroutines are specified in (7) and (9) in the next section, in which different constraints are used for the infinite-sample and finite-sample settings.
The pseudocode for our algorithm is given below. Our algorithm is computationally efficient and easy to implement. ``` Data: dataset \(\mathcal{D}\), \(\pi^{\theta}\), initial state distribution \(\mu_{1}\), weight matrices \((\rho_{t})_{t\in[H]}\), and \(\mathtt{ME}(\cdot)\). Result: estimator \(\widehat{J}\). 1\(\widehat{Q}_{H+1}^{\pi^{\theta}}(s,a)\gets 0,\quad\forall(s,a)\in\mathcal{S} \times\mathcal{A}\). 2for\(t\) = H, H-1,..., 1do 3 Q iteration: \(Z_{t}(s,a)\leftarrow(\widehat{B}_{t}^{\pi^{\theta}}\widehat{Q}_{t+1}^{\pi^{ \theta}})(s,a)\), for all \((s,a)\in\mathrm{supp}(\rho_{t})\). 4 Matrix estimation: \(\widehat{Q}_{t}^{\pi^{\theta}}\leftarrow\mathtt{ME}\left(\rho_{t},Z_{t}\right)\). 5 end for Output \(\widehat{J}\leftarrow\sum_{s,a}\mu_{1}(s)\pi_{1}^{\theta}(a|s)\widehat{Q}_{1}^{ \pi^{\theta}}(s,a)\). ``` **Algorithm 1**Matrix Completion in Low-Rank Offline RL ## 4 Analysis We present evaluation error bounds under both the _infinite-sample_ setting \(K\rightarrow\infty\) and the _finite-sample_ setting \(K<\infty\). Define the population Bellman operator \(B_{t}^{\pi^{\theta}}\), which is given by equation (6) with \(\widehat{P}_{t}\) replaced by \(P_{t}\). Define the matrix \(Y_{t}\in\mathbb{R}^{S\times A}\) via \(Y_{t}(s,a)=(B_{t}^{\pi^{\theta}}\widehat{Q}_{t+1}^{\pi^{\theta}})(s,a)\), which is the population version of \(Z_{t}\) computed in Algorithm 1. ### Infinite-sample setting In the infinite-sample setting, we have \(\widehat{d}_{t}^{\pi^{\beta}}(s,a)\to d_{t}^{\pi^{\beta}}(s,a)\) and \(\widehat{P}_{t}(s,a)\to P_{t}(s,a)\) for all \((s,a)\in\mathrm{supp}(d_{t}^{\pi^{\beta}})\). Consequently, both \(\widehat{B}_{t}^{\pi^{\theta}}\) and \(Z_{t}\) converge to their population versions \(B_{t}^{\pi^{\theta}}\) and \(Y_{t}\), respectively. Note that the infinite samples does not imply complete knowledge of the MDP. Instead, we only know a subset of transition probabilities on the support of the state-action occupancy measure induced by the behavior policy. The matrix estimation subroutine is given by the following max norm minimization program with \(\rho_{t}=d_{t}^{\pi^{\beta}}\) and \(L_{t}\coloneqq H-t+1\): \[\mathtt{ME}(\rho_{t},Y_{t})=\operatorname*{argmin}_{M\in\mathbb{R }^{S\times A}}\left\|M\right\|_{\max} \tag{7}\] \[\text{s.t. }\mathds{1}_{\rho_{t}}\circ M=\mathds{1}_{\rho_{t}} \circ Y_{t},\quad\left\|M\right\|_{\infty}\leq L_{t}.\] We impose an entrywise equality constraint because the information on the support of \(\rho_{t}\) is assumed to be noiseless and naturally we want the solution to exactly fit those entries. We have the following performance guarantee. The proof of Theorem 1 involves two steps. We first decompose the evaluation error as a summation of the matrix estimation accuracy from future timesteps and then bound the accuracy at each timestep by a standard application of Holder's inequality. The complete proof is deferred to Appendix A.1. **Theorem 1** (Infinite samples).: _In the infinite-sample setting, under Algorithm 1 with \(\rho_{t}=d_{t}^{\pi^{\beta}}\) and \(\mathtt{ME}(\cdot)\) being (7), the output estimator \(\widehat{J}\) satisfies_ \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq 2H\sqrt{dSA}\sum_{t=1}^{H} \mathrm{Dis}(d_{t}^{\pi^{\beta}},d_{t}^{\pi^{\theta}}). \tag{8}\] In the above bound, note that the operator discrepancy only depends on the support of \(d_{t}^{\pi^{\beta}}\), not the specific distribution. 
This makes sense since the information of the data entirely depends on the support of \(d_{t}^{\pi^{\beta}}\), not the specific distribution. Once a state-action pair is supported, we know exactly what \(r_{t}(s,a)\) and \(P_{t}(\cdot|s,a)\) are, and therefore it does not matter what the actual value of \(d_{t}^{\pi^{\beta}}(s,a)\) is. When the support of \(d_{t}^{\pi^{\beta}}\) is \(\mathcal{S}\times\mathcal{A}\) for all \(t\in[H]\), it means the behavior policy is extremely exploratory and covers the whole state-action space. Consequently, we get a zero estimation bound in (8) because we know the MDP exactly. ### Finite-sample setting Next consider the setting with a finite dataset \(\mathcal{D}=\{(s_{t}^{k},a_{t}^{k},r_{t}^{k})\}_{t\in[H],k\in[K]}\). Let \(n_{t}(s,a)\coloneqq\sum_{k\in[K]}\mathds{1}_{(s_{t}^{k},a_{t}^{k})=(s,a)}\) be the visitation count of each state-action pair. Accordingly, the empirical occupancy of \(\pi^{\beta}\) is given by \(\widehat{d_{t}^{\pi^{\beta}}}(s,a)=n_{t}(s,a)/K\). Let \(\rho_{t}=\widehat{d_{t}^{\pi^{\beta}}}\) in Algorithm 1. The matrix estimation subroutine \(\mathtt{ME}(\cdot)\) is given by the following max norm minimization program: \[\mathtt{ME}(\rho_{t},Z_{t})= \operatorname*{argmin}_{M\in\mathbb{R}^{S\times A}}\left\|M\right\| _{\max} \tag{9}\] \[\text{s.t.}\ \left|\left\langle\rho_{t},M-Z_{t}\right\rangle \right|\leq\left|\left\langle\rho_{t},Z_{t}-Y_{t}\right\rangle\right|,\quad \left\|M\right\|_{\infty}\leq L_{t}.\] We state the following guarantee. The proof of Theorem 2 proceeds as follows. We build upon the proof of Theorem 1 to get the first discrepancy term on the RHS of (10). The second error term comes from the finite-sample error in the system and is obtained by applying a generalization error guarantee from [21]. The complete proof is deferred to Appendix A.2. **Theorem 2** (Finite samples).: _Consider the finite-sample setting under Algorithm 1 with \(\rho_{t}=\widehat{d_{t}^{\pi^{\beta}}}\) and \(\mathtt{ME}(\cdot)\) being (9). We assume \(2<K<SA\). There exists an absolute constant \(C>0\) such that with probability at least \(1-\delta\), we have_ \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq 2H\sqrt{dSA}\sum_{t\in[H]}\widehat{\mathrm{Dis}}(d_{t}^{\pi^{ \beta}},d_{t}^{\pi_{\theta}})+CH^{2}\sqrt{\frac{d(S+A)\log(HS/\delta)}{K}}. \tag{10}\] On the RHS of (10), the first term quantifies the population-level distribution shift and the second term takes into account the statistical error. The finite-sample distribution shift is reflected in the term \(\widehat{\mathrm{Dis}}(d_{t}^{\pi^{\beta}},d_{t}^{\pi_{\theta}})\), which is always finite, given any behavior policy \(\pi^{\beta}\) and target policy \(\pi^{\theta}\), in contrast to the concentrability coefficient \(C^{\pi}=\max_{s,a}\frac{d^{\pi}(s,a)}{\rho(s,a)}\). Suppose that there exists some \((s,a)\) such that \(\rho_{t}(s,a)=0\) and \(d_{t}^{\pi^{\theta}}(s,a)>0\). Then, \(C^{\pi^{\theta}}=\infty\) whereas \(\widehat{\mathrm{Dis}}(d_{t}^{\pi^{\beta}},d_{t}^{\pi_{\theta}})\) is finite and meaningful. We will subsequently present examples to showcase the effectiveness of our bound. As a sanity check, let us consider the setting of evaluating the behavior policy, i.e. \(\pi^{\theta}=\pi^{\beta}\). Using our results, we obtain an error bound of \[\left|\widehat{J}-J^{\pi^{\beta}}\right|\lesssim H^{2}\sqrt{\frac{d(S+A)\log( HS/\delta)}{K}},\] with probability at least \(1-\delta\). 
This implies that for evaluating the behavior policy, our method requires a sample complexity of order \(H^{4}d(S+A)\), which matches the standard linear dependence on the dimensions in low-rank matrix estimation. ### Examples We present some concrete examples showcasing the effectiveness of our algorithm. #### 4.3.1 Policies with Disjoint Support under Uniform Transitions Assume \(S=A=n\). Consider the simple setting where the transition is uniform over all state-action pairs. For each \(s\) and \(t\), assume \(\pi_{t}^{\theta}(\cdot|s)\) selects an action uniformly at random amongst a subset of actions \(\mathcal{A}_{t}^{\theta}\subseteq\mathcal{A}\), where \(|\mathcal{A}_{t}^{\theta}|=m\), and the subset \(\mathcal{A}_{t}^{\theta}\) is itself sampled uniformly at random amongst all subsets of size \(m\). We assume \(\pi_{t}^{\beta}\) is generated from the same model independently, i.e. the behavior policy also randomizes uniformly amongst a uniformly selected subset of actions \(\mathcal{A}_{t}^{\beta}\) of size \(m\). Note that the supports of \(d_{t}^{\pi^{\beta}}\) and \(d_{t}^{\pi^{\theta}}\) will be mostly disjoint since \(\left|\mathcal{A}_{t}^{\theta}\cap\mathcal{A}_{t}^{\beta}\right|\) can be very small, making the concentrability coefficient infinite with high probability. Using Theorem 1, we derive the following infinite-sample error bound, the proof of which is deferred to Appendix A.3. **Corollary 1**.: _Under the aforementioned setting, there exists an absolute constant \(C>0\) such that when \(n\geq C\), with probability at least \(1-\frac{1}{n}\), we have_ \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq CH^{2}\sqrt{\frac{d\log(nH)}{m}}. \tag{11}\] If \(m\) satisfies \(m\gtrsim\frac{H^{2}d\log(nH)}{\epsilon^{2}}\) for some \(\epsilon>0\), then we have \(|\widehat{J}-J^{\pi^{\theta}}|\leq\epsilon H\). This implies that even when \(m\) is logarithmic in \(n\), we can still achieve a consistent error bound. For example, suppose \(m=n/2\). In this setting, the behavior and target policies both randomize over half of the actions, but their actions may have little overlap. Our bound gives \(|\widehat{J}-J^{\pi^{\theta}}|\lesssim H^{2}\sqrt{\frac{d\log(nH)}{n}}\), which can be vanishingly small when \(n\) is large. The bound (11) identifies the inherent difficulty of distribution shift, manifested as a quantity proportional to \(H^{2}\sqrt{\frac{d}{m}}\), ignoring the logarithmic factor. When \(d\) and \(H\) are fixed, we have a larger estimation error when \(m\) is small, which is expected since small \(m\) indicates little to no overlap between \(d_{t}^{\pi^{\theta}}\) and \(d_{t}^{\pi^{\beta}}\). For the finite-sample case, we apply Theorem 2 and get the following corollary, the proof of which is deferred to Appendix A.5. **Corollary 2**.: _Under the same setting as in Corollary 1, there exists an absolute constant \(C>0\) such that when \(n\geq C\), with probability at least \(1-\frac{1}{n}\), we have_ \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq CH^{2}\left(\sqrt{\frac{d\log(nH)}{m}}+\sqrt{\frac{dn\log(nH)}{K}}\right). \tag{12}\] Interestingly, the first term on the RHS of (12) will dominate if \(K\gtrsim nm\), i.e. when there is at least a constant number of samples per state-action pair in the support of the behavior policy. This indicates that when \(K\) is sufficiently large or \(m\) is small enough, the population-level distribution shift will become the main source of estimation error.
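To see the contrast between the two coverage measures concretely, the following is a small self-contained numerical sketch (our illustration, not code from the paper) of a single timestep of this example, with uniform state visitation and arbitrary choices of \(n\) and \(m\): the concentrability ratio is typically infinite because the supports barely overlap, while the empirical operator discrepancy of Definition 2 remains small.

```python
# Each policy plays uniformly on its own random subset of m actions (the same subset at
# every state), so the two occupancy matrices have nearly disjoint supports.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 20                            # S = A = n, each policy uses m actions

def occupancy():
    acts = rng.choice(n, size=m, replace=False)
    d = np.zeros((n, n))
    d[:, acts] = 1.0 / (n * m)            # uniform over states x chosen actions
    return d

d_beta, d_theta = occupancy(), occupancy()

if np.any((d_theta > 0) & (d_beta == 0)):
    concentrability = np.inf              # target visits pairs the behavior never visits
else:
    concentrability = (d_theta[d_beta > 0] / d_beta[d_beta > 0]).max()
op_discrepancy = np.linalg.norm(d_beta - d_theta, ord=2)   # spectral norm, as in Definition 2

print(concentrability, op_discrepancy)    # typically: inf vs. a small finite number
```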
#### 4.3.2 Contextual Bandits In this section, we consider specializing our results to the problem of contextual bandits. Specifically, we consider \(H=1\). In this case, the states are the contexts and the agent acts based on the given contexts. Theorem 1 yields the following corollary. For notation simplicity, let \(d^{\pi^{\theta}}\equiv d_{1}^{\pi^{\theta}}\) and \(d^{\pi^{\beta}}\equiv d_{1}^{\pi^{\beta}}\). **Corollary 3** (Infinite samples with \(H=1\)).: _In the infinite-sample setting, under Algorithm 1 with \(\rho=d^{\pi^{\beta}}\) and \(\mathtt{ME}(\cdot)\) being (7), the output estimator \(\widehat{J}\) satisfies_ \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq 2\sqrt{dSA}\,\mathrm{Dis}(d^{\pi^{\beta}},d^{ \pi^{\theta}}). \tag{13}\] The following result deals with finite-sample setting, which is a direct corollary of Theorem 2. **Corollary 4** (Finite samples with \(H=1\)).: _Consider the finite-sample setting under Algorithm 1 with \(\rho=\widehat{d}^{\pi^{\beta}}\) and \(\mathtt{ME}(\cdot)\) being (9). There exists an absolute constant \(C>0\) such that with probability at least \(1-\delta\), we have_ \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq 2\sqrt{dSA}\,\widehat{\mathrm{Dis}}(d^{\pi^{\beta}},d^{\pi^{ \theta}})+C\sqrt{\frac{d(S+A)\log(HS/\delta)}{K}}. \tag{14}\] We now analyze the operator discrepancy between \(d^{\pi^{\beta}}\) and \(d^{\pi^{\theta}}\). Recall that the environment has a fixed initial state distribution, \(\mu\in\Delta(\mathcal{S})\). For all policy \(\pi\), the state-action occupancy measure \(d^{\pi}\) can be written as \(d^{\pi}(s,a)=\mu(s)\pi(a|s).\) If we view \(\mu\in\mathbb{R}^{S}\) as a vector and \(d^{\pi},\pi\in\mathbb{R}^{S\times A}\) as matrices, we can write \(d^{\pi}=(\mu\mathbf{1}^{\top})\circ\pi\), where \(\mathbf{1}\in\mathbb{R}^{S}\) is an all-one vector. Under this notation, we have \[\left\|d^{\pi^{\theta}}-d^{\pi^{\beta}}\right\|_{\mathrm{op}} =\left\|(\mu\mathbf{1}^{\top})\circ(\pi^{\theta}-\pi^{\beta}) \right\|_{\mathrm{op}}\] \[\leq\left\|\mu\right\|_{\infty}\left\|\pi^{\theta}-\pi^{\beta} \right\|_{\mathrm{op}},\] since \(\mu\mathbf{1}^{\top}\) is a rank-\(1\) matrix. Hence, inequality (14) can be written as \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq 2\sqrt{dSA}\left\|\mu\right\|_{\infty}\left\|\pi^{\theta}-\pi^{ \beta}\right\|_{\mathrm{op}}+C\sqrt{\frac{d(S+A)\log(HS/\delta)}{K}}.\] For the infinite-sample case, we consider an arbitrary policy \(\pi\) that satisfies \(\mathrm{supp}(\pi)\subseteq\mathrm{supp}(\pi^{\beta})\). Consequently, we have \(\mathrm{supp}(d^{\pi})\subseteq\mathrm{supp}(d^{\pi^{\beta}})\) and \(\mathrm{Dis}(d^{\pi^{\beta}},d^{\pi^{\theta}})\leq\left\|d^{\pi}-d^{\pi^{ \theta}}\right\|_{\mathrm{op}}\) as a result. With a slight abuse of notation, we denote the operator discrepancy between two policies as \[\mathrm{Dis}(\pi^{\beta},\pi^{\theta})=\min\left\{\left\|\pi-\pi^{\theta} \right\|_{\mathrm{op}}:\text{policy }\pi,\mathrm{supp}(\pi)\subseteq\mathrm{supp}(\pi^{\beta}) \right\}.\] Thus, the infinite-sample bound (13) can be further upper bounded by \[\left|\widehat{J}-J^{\pi^{\theta}}\right|\leq 2\sqrt{dSA}\left\|\mu\right\|_{ \infty}\mathrm{Dis}(\pi^{\beta},\pi^{\theta}).\] Because of the simplicity of contextual bandits, we are able to transform the discrepancy between distributions to the discrepancy between policies which is easier to directly evaluate. 
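The contextual-bandit relation \(d^{\pi}=(\mu\mathbf{1}^{\top})\circ\pi\) and the resulting bound \(\left\|d^{\pi^{\theta}}-d^{\pi^{\beta}}\right\|_{\mathrm{op}}\leq\left\|\mu\right\|_{\infty}\left\|\pi^{\theta}-\pi^{\beta}\right\|_{\mathrm{op}}\) are easy to check numerically; the snippet below is a minimal illustration of ours, with arbitrary sizes, not code from the paper.

```python
# Numerical sanity check of  ||d^theta - d^beta||_op <= ||mu||_inf * ||pi^theta - pi^beta||_op
# for one-step (contextual bandit) policies.
import numpy as np

rng = np.random.default_rng(2)
S, A = 50, 30                                   # arbitrary numbers of contexts and actions

mu = rng.random(S); mu /= mu.sum()              # context (initial state) distribution

def random_policy():
    p = rng.random((S, A))
    return p / p.sum(axis=1, keepdims=True)

pi_theta, pi_beta = random_policy(), random_policy()
d_theta, d_beta = mu[:, None] * pi_theta, mu[:, None] * pi_beta   # d^pi(s,a) = mu(s) pi(a|s)

lhs = np.linalg.norm(d_theta - d_beta, ord=2)
rhs = mu.max() * np.linalg.norm(pi_theta - pi_beta, ord=2)
print(lhs <= rhs + 1e-12, lhs, rhs)             # the inequality always holds
```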
## 5 Constrained Off-policy Improvement In this section, we build on our policy evaluation methods to design an offline policy optimization algorithm. Given a dataset \(\mathcal{D}\) generated by a behavior policy \(\pi^{\beta}\), we use Algorithm 1 to obtain a value estimate \(\widehat{J}^{\pi}\) for each policy \(\pi\). We then optimize over a subclass of policies for which we can guarantee that the above estimate is reliable. Ideally, we could optimize over the following set of candidate policies \(\Pi_{\mathbf{B}}\), for which the empirical operator discrepancy, as defined in (5), between the candidate and behavior policies is bounded above by some parameter \(B_{t}\geq 0\) for all \(t\in[H]\), \[\Pi_{\mathbf{B}}:=\big{\{}\pi:\widehat{\mathrm{Dis}}(d_{t}^{\pi^{\beta}},d_{t}^{\pi})\leq B_{t},\forall t\in[H]\big{\}}.\] Importantly, when \(B_{t}>0\), the set \(\Pi_{\mathbf{B}}\) contains policies with infinite concentrability coefficients, as demonstrated in the example from the last section. With a bigger \(B_{t}\), the set \(\Pi_{\mathbf{B}}\) includes more policies, at the price of weaker evaluation guarantees for these policies. Policy constraint/penalty methods are prevalent in offline learning for addressing distribution shift. Researchers have proposed a variety of measures to enforce the constraint. For instance, the KL-divergence is a popular choice to make sure the learned policy is close to the behavior policy, as seen in [14, 13]. The maximum mean discrepancy (MMD) also proves to be useful in practice [7]. However, both the KL-divergence and MMD are sensitive to support shift, whereas our operator discrepancy imposes a milder condition on the difference between the supports. Determining whether a policy \(\pi\) is in \(\Pi_{\mathbf{B}}\) is non-trivial as computing the empirical operator discrepancy requires knowledge of the transition dynamics. In practice, we can instead optimize over a smaller set of candidate policies \(\widetilde{\Pi}_{\mathbf{B}}\subseteq\Pi_{\mathbf{B}}\) for which membership is feasible to determine; we will subsequently illustrate a construction for \(\widetilde{\Pi}_{\mathbf{B}}\) via an \(\varepsilon\)-net of the set specified in (18). Among all candidate policies in \(\widetilde{\Pi}_{\mathbf{B}}\), we maximize the estimated values obtained by Algorithm 1 to get \[\widehat{\pi}=\operatorname*{argmax}_{\pi\in\widetilde{\Pi}_{\mathbf{B}}}\widehat{J}^{\pi}. \tag{15}\] We present the following guarantee for \(\widehat{\pi}\), the proof of which can be found in Appendix A.9. **Theorem 3**.: _Suppose \(\pi^{\beta}\in\widetilde{\Pi}_{\mathbf{B}}\). We obtain \(\widehat{\pi}\) by solving (15). There exists an absolute constant \(C>0\) such that with probability at least \(1-\delta\), we have_ \[J^{\widehat{\pi}}\geq J^{\pi}-4H\sqrt{dSA}\sum_{t\in[H]}B_{t}-C\sqrt{\frac{d(S+A)\log(|\widetilde{\Pi}_{\mathbf{B}}|HS/\delta)}{K}},\quad\forall\pi\in\widetilde{\Pi}_{\mathbf{B}}. \tag{16}\] The above bound shows that we are able to find a policy \(\widehat{\pi}\) with a nearly optimal value, compared to other policies in \(\widetilde{\Pi}_{\mathbf{B}}\). How close \(\widehat{\pi}\) is to the optimal policy in \(\widetilde{\Pi}_{\mathbf{B}}\) depends on how accurately we can evaluate all policies in \(\widetilde{\Pi}_{\mathbf{B}}\). According to Theorem 2, the estimations are accurate if \(\sum_{t\in[H]}B_{t}\) is small (policies are close to behavior) and \(K\) is large (dataset is large), which is reflected in the bound (16).
As before, the two error terms in (16) quantify the fundamental difficulty of distribution shift and the finite-sample noise, respectively. One way to find such a subset \(\widetilde{\Pi}_{\mathbf{B}}\) is by constraining the policy directly. We do this with the help of the following lemma, which implies that two policies that are close in operator norm produce state-action occupancy measures that are close in empirical operator discrepancy. The proof of Lemma 1 can be found in Appendix A.8. **Lemma 1**.: _For an arbitrary pair of policies \(\pi^{\theta}=\{\pi_{t}^{\theta}\}_{t\in[H]},\pi^{\beta}=\{\pi_{t}^{\beta}\}_{t\in[H]}\), we have_ \[\widehat{\mathrm{Dis}}(d_{t}^{\pi^{\theta}},d_{t}^{\pi^{\beta}})\leq\sum_{i=1}^{t}\left(\sqrt{dS^{2}A}\right)^{t-i}\left\|\pi_{i}^{\theta}-\pi_{i}^{\beta}\right\|_{\mathrm{op}},\quad\forall t\in[H]. \tag{17}\] Following Lemma 1, we can define \(\widetilde{\Pi}_{\mathbf{B}}\) as a finite subset (e.g., an \(\varepsilon\)-net) of the following set: \[\bigg\{\pi:\left\|\pi_{t}-\pi_{t}^{\beta}\right\|_{\mathrm{op}}\leq B_{t}\left(\sqrt{dS^{2}A}\right)^{t-H},\forall t\in[H]\bigg\}. \tag{18}\] For all \(\pi\in\widetilde{\Pi}_{\mathbf{B}}\), we have \(\widehat{\mathrm{Dis}}(d_{t}^{\pi},d_{t}^{\pi^{\beta}})\leq B_{t}\) for all \(t\in[H]\), indicating \(\pi\in\Pi_{\mathbf{B}}\). The exponential factor \(\left(\sqrt{dS^{2}A}\right)^{t-H}\) restricts the candidate policies to be exceedingly close to the behavior policy, especially at earlier stages. Intuitively, this makes sense: if the policies at early stages are too different, the resulting deviation is amplified in later steps of the horizon, which manifests as the exponentially growing multiplicative factor in the discrepancy bound. ## 6 Conclusion We propose a novel algorithm for efficient offline evaluation when low-rank structure is present in the MDP. Our algorithm is a combination of Q iteration and low-rank matrix estimation, and it is easy to implement. We show that the proposed operator discrepancy measure better captures the difficulty of policy evaluation in the offline setting compared to the traditional concentrability coefficient. We also combine the evaluation algorithm with policy optimization and provide a performance guarantee. We believe that this work is a first step in exploiting the benefit of low-rank structure in the Q function in offline RL. Future directions of interest include extending our results to the infinite-horizon setting with stationary policies, and understanding lower bounds for estimation that would provide insight on whether or not our estimation error bounds are optimal. Acknowledgement: C. Yu is partially supported by NSF grants CCF-1948256 and CNS-1955997, AFOSR grant FA9550-23-1-0301, and by an Intel Rising Stars award. Y. Chen is partially supported by NSF grants CCF-1704828 and CCF-2047910.
2307.02524
Large Deviations Beyond the Kibble-Zurek Mechanism
The Kibble-Zurek mechanism (KZM) predicts that the average number of topological defects generated upon crossing a continuous or quantum phase transition obeys a universal scaling law with the quench time. Fluctuations in the defect number near equilibrium are approximately of Gaussian form, in agreement with the central limit theorem. Using large deviations theory, we characterize the universality of fluctuations beyond the KZM and report the exact form of the rate function in the transverse-field quantum Ising model. In addition, we characterize the scaling of large deviations in an arbitrary continuous phase transition, building on recent evidence establishing the universality of the defect number distribution.
Federico Balducci, Mathieu Beau, Jing Yang, Andrea Gambassi, Adolfo del Campo
2023-07-05T18:00:00Z
http://arxiv.org/abs/2307.02524v2
# Large Deviations Beyond the Kibble-Zurek Mechanism ###### Abstract The Kibble-Zurek mechanism (KZM) predicts that the average number of topological defects generated upon crossing a continuous or quantum phase transition obeys a universal scaling law with the quench time. Fluctuations in the defect number near equilibrium are approximately of Gaussian form, in agreement with the central limit theorem. Using large deviations theory, we characterize the universality of fluctuations beyond the KZM and report the exact form of the rate function in the transverse-field quantum Ising model. In addition, we characterize the scaling of large deviations in an arbitrary continuous phase transition, building on recent evidence establishing the universality of the defect number distribution. The Kibble-Zurek mechanism (KZM) is an important paradigm in nonequilibrium statistical physics, describing the dynamics across a continuous phase transition [1; 2; 3]. The divergence of the equilibrium relaxation time in the neighborhood of the critical point makes the critical dynamics necessarily nonadiabatic for large systems and leads to the spontaneous formation of topological defects. Consider a phase transition from a high symmetry phase to a broken symmetry phase, induced by varying a control parameter \(g\) across its critical value \(g_{c}\) in a finite quench time \(\tau_{Q}\). The central prediction of the KZM is that the average defect density, generated during the phase transition, displays a universal power-law dependence as a function of the quench time. The KZM thus makes a quantitative prediction on the breakdown of adiabatic dynamics across a phase transition and holds both in the classical and quantum regimes [1; 2; 3; 4; 5; 6; 7; 8]. Kibble's pioneering work was motivated by cosmological considerations regarding structure formation in the early universe [9]. The prospect of exploring analogous phenomena in condensed matter systems was soon realized [10; 11; 12] and pursued experimentally [13; 14; 15; 3]. The advance of quantum technologies has led to new tests of the KZM using quantum simulators in a variety of platforms, including ultracold gases [16; 17; 18; 19; 20; 21; 22], trapped ions [23; 24; 25; 26; 27], and Rydberg gases [28; 29]. Recently, the KZM has been studied with quantum computing devices, such as quantum annealers [30; 31; 32; 33]. The accumulated body of literature broadly supports the validity of KZM in a wide variety of systems. Experiments probing critical dynamics generally involve an ensemble of single experimental runs or individual realizations in which measurements are performed. As a result, they can access information beyond the average defect density and characterize the ensemble statistics. It is thus natural to ask whether there are universal signatures in the statistical properties of spontaneously generated topological defects [34; 35; 36]. The full counting statistics of defects appears to be universal in classical and quantum systems. Specifically, in classical continuous phase transitions, it has been found that the defect number distribution is binomial with an average density in agreement with the KZM [37; 38; 39; 40]. Exact solutions in quantum integrable systems have shown that the kink number distribution is Poisson-binomial [35; 26], a feature that can hold even when the system is coupled to an environment [32; 33]. These predictions build on the conventional KZM but lie outside its scope, requiring additional assumptions. 
We shall thus refer to them as beyond-KZM physics. The average number of defects is an extensive quantity. By contrast, the defect density is intensive, and its fluctuations near equilibrium are approximately Gaussian, in agreement with the central limit theorem. Large deviations theory (LDT) addresses the probability of nontypical events in which an intensive quantity deviates from its average value. The probability of such large deviations decays exponentially with increasing system size, at a rate controlled by the so-called rate function [41; 42; 43]. LDT provides a building block of statistical mechanics in and out of equilibrium. As such, it is a natural framework to explore beyond-KZM physics. To date, LDT has been used to describe the dynamics of many-body quantum systems in the limit of sudden quenches when \(\tau_{Q}\to 0\), e.g., to characterize the work statistics of a given process [44; 45; 46]. In this Letter, we establish the universality of large deviations beyond the KZM, after crossing a quantum phase transition in a finite time. Specifically, we report the exact rate function of the driven transverse-field quantum Ising model (TFQIM), characterizing the statistics of large fluctuations away from the mean kink density predicted by the conventional KZM. We further generalize these results to characterize the universality of large deviations in an arbitrary continuous phase transition leading to point-like defects. _The transverse-field quantum Ising model._ The TFQIM has been instrumental in generalizing the KZM from the classical to the quantum domain [4; 5; 6; 7; 25], and assessing the universality of beyond-KZM physics, both in theory [35] and experiments [32; 26; 33]. Its Hamiltonian is given by \[H[g(t)]=-J\sum_{l=1}^{N}\left[g(t)\sigma_{l}^{x}+\sigma_{l}^{z}\sigma_{l+1}^{ z}\right], \tag{1}\] where \(J>0\) favors ferromagnetic alignment and \(g(t)\) plays the role of an effective magnetic field. In the fermionic representation, the Ising chain Hamiltonian becomes [47] \[H[g(t)]=2J\sum_{k>0}\psi_{k}^{\dagger}\left[\tau^{z}(g(t)-\cos k)+\tau^{y}\sin k \right]\psi_{k}, \tag{2}\] in terms of the fermionic operators \(\psi_{k}^{\dagger}\equiv(\tilde{c}_{k}^{\dagger},\tilde{c}_{-k})\) in momentum space. Here, \(\tau^{x,y,z}\) are Pauli matrices. We choose to work with periodic boundary conditions so that the momentum is a good quantum number and takes the values \(k=(2n+1)\pi/N\) with \(n=-N/2,\dots,N/2-1\), as discussed, e.g., in Refs. [47; 48]. Momentum conservation restricts the formation of defects to kink-antikink pairs. Choosing the total number of spins \(N\) to be even proves convenient since the number of kink pairs is then restricted to outcomes in the set \(\{0,1,2,\dots,N/2\}\). Given Eq. (2), the dynamics of the TFQIM can be reduced to that of an ensemble of non-interacting two-level systems [7]. Consider a quench, in a finite time \(\tau_{Q}\), from the paramagnetic to the ferromagnetic phase \[g(t)=g_{c}\left(1-\frac{t}{\tau_{Q}}\right), \tag{3}\] where \(g(0)=g_{c}=1\) is the critical value of \(g\), and we let \(t\) run from \(-3\tau_{Q}\) to \(\tau_{Q}\). We will refer to \(\tau_{Q}\) as the quench time. We choose \(g(\tau_{Q})=0\) for simplicity since the final Hamiltonian contains only the ferromagnetic term and commutes with the kink-pair number operator \(K_{N}\equiv\frac{1}{4}\sum_{l=1}^{N}\left(1-\sigma_{l}^{z}\sigma_{l+1}^{z}\right)\). 
This observable counts the number of kink-antikink pairs in a given quantum state and is extensive in the system size \(N\)[7]. The study of its eigenvalue statistics provided the basis of previous studies exploring universality beyond the KZM [32; 33; 35; 33; 26]. We define an intensive kink-pair density operator \[\hat{\rho}_{N}\equiv\frac{K_{N}}{N}=\frac{1}{4N}\sum_{l=1}^{N}\left(1-\sigma_ {l}^{z}\sigma_{l+1}^{z}\right). \tag{4}\] The density of kink pairs, upon completion of the quench in Eq. (3), is given by the expectation value \(\rho_{\text{KZM}}=\langle\hat{\rho}_{N}\rangle\) at the final time \(\tau_{Q}\). It exhibits a power-law scaling in the slow driving limit, i.e., to leading order in a \(1/\tau_{Q}\) expansion [4; 5; 6; 2; 7] \[\rho_{\text{KZM}}=\langle\hat{\rho}_{N}\rangle=\frac{1}{4\pi}\sqrt{\frac{ \hbar}{2J\tau_{Q}}}, \tag{5}\] in agreement with the celebrated, universal KZM power-law scaling \(\rho_{\text{KZM}}\propto\tau_{Q}^{-\frac{\nu}{1+\nu z}}\) for the critical exponents \(\nu=z=1\) of the TFQIM [3]. In any quantum state other than an eigenstate of \(H(g=0)\), the density operator \(\hat{\rho}_{N}\) will exhibit fluctuations of either classical or quantum nature. The probability distribution function \(P(\rho_{N})\), characterizing the eigenvalue statistics of the kink-pair density operator, reads \[P(\rho_{N})=\langle\delta(\hat{\rho}_{N}-\rho_{N})\rangle\,, \tag{6}\] where \(\rho_{N}\) is the random variable associated with the kink-pair-number operator \(\hat{\rho}_{N}\). We aim at uncovering via LDT the universality of large fluctuations of \(P(\rho_{N})\) away from the mean, which the conventional KZM predicts. _Large deviations theory beyond the KZM in the TFQIM._ The central object in LDT is the scaled cumulant generating function, associated with a random variable \(\rho_{N}\), depending on a large parameter \(N\), \[\lambda(\theta)=\lim_{N\to\infty}\frac{1}{N}\ln\left\langle e^{N\theta\hat{ \rho}_{N}}\right\rangle. \tag{7}\] The Gartner-Ellis theorem states that when \(\lambda(\theta)\) exists for all real values of \(\theta\), then the random variable \(\rho_{N}\) satisfies the large deviations principle [41; 42] \[P\big{(}\rho_{N}\in[\rho,\rho+d\rho]\big{)}\approx e^{-NI(\rho)}d\rho, \tag{8}\] with the rate function \(I(\rho)\) given by the Legendre-Fenchel transform \[I(\rho)=\sup_{\theta\in\mathbb{R}}\,\big{[}\theta\rho-\lambda(\theta)\big{]}. \tag{9}\] Deviations from the mean value are thus exponentially suppressed by the rate function \(I(\rho)\) weighted with the system size, and the random variable concentrates around the mean in the thermodynamic limit. Let us consider the application of the Gartner-Ellis theorem to the distribution of kink pairs generated across a quantum phase transition in the TFQIM. In this case, the defect density is a non-negative quantity. As a result, \(I(\rho)\) is divergent for \(\rho<0\), and we focus on the case with \(\rho\geq 0\). We note that in Fourier space, the operator associated with the density of kink pairs at the end of the quench is \[\hat{\rho}_{N}=\frac{1}{N}\sum_{k>0}\gamma_{k}^{\dagger}(\tau_{Q})\gamma_{k}( \tau_{Q}), \tag{10}\] where \(\gamma_{k}(\tau_{Q})\) and \(\gamma_{k}^{\dagger}(\tau_{Q})\) are the fermionic Bogoliubov operators at the end of the quench, and the sum is restricted to \(k>0\) since the number of kink pairs equals the number of right-moving kinks. 
Further, for free fermions (with periodic boundary conditions), the time-dependent density matrix \(\varrho(t)\) retains the tensor product structure during unitary time evolution, i.e., \(\varrho(t)=\bigotimes_{k}\varrho_{k}(t)\). As a result, the moment-generating function admits the explicit form \[\left\langle e^{N\theta\hat{\rho}_{N}}\right\rangle =\prod_{k>0}\text{Tr}\left[\varrho_{k}(\tau_{Q})e^{\theta\gamma_{ k}^{\dagger}(\tau_{Q})\gamma_{k}(\tau_{Q})}\right]\] \[=\prod_{k>0}\left[1+\left(e^{\theta}-1\right)p_{k}\right], \tag{11}\] where \(p_{k}=\langle\gamma_{k}^{\dagger}(\tau_{Q})\gamma_{k}(\tau_{Q})\rangle\in[0,1]\) represents the probability that the mode \(k\) is excited at the end of the protocol. This is the moment-generating function of a Poisson binomial distribution associated with the sum of \(N/2\) independent random Bernoulli variables, each of which has probability \(p_{k}\) for the occupation number to be \(1\), corresponding to the formation of a kink-antikink pair, and probability \((1-p_{k})\) for the occupation number to be \(0\), corresponding to no defect formation [35]. In addition, the value of \(p_{k}\) can be estimated according to the Landau-Zener (LZ) approximation [7], \(p_{k}=\langle\gamma_{k}^{\dagger}(\tau_{Q})\gamma_{k}(\tau_{Q})\rangle\approx \exp(-2\pi J\tau_{Q}k^{2}/\hbar)\) near \(k=0\), dictating an exponential decay with increasing quench time and a Gaussian decay as a function of the wavenumber. This behavior dictates the KZM scaling in a quantum phase transition [6; 7; 49]. The explicit computation of the scaled cumulant generating function, according to Eqs. (7) and (11), in the limit \(N\to\infty\), yields \[\lambda(\theta)=\int_{0}^{\pi}\frac{dk}{2\pi}\,\ln\left[1+\left(e^{\theta}-1 \right)p_{k}\right], \tag{12}\] which is a convergent integral. For slow quenches, using a power-series expansion in \(1/\tau_{Q}\) to leading order, or equivalently extending the upper limit of the integral in Eq. (12) to infinity, one finds \[\lambda(\theta)=-\rho_{\text{KZM}}\text{Li}_{3/2}\left(1-e^{\theta}\right), \tag{13}\] in terms of the polylogarithm function \(\text{Li}_{q}(x)=\sum_{s=1}^{\infty}x^{s}/s^{q}\). The exact expression in the slow-quench limit, Eq. (13), shows that \(\lambda(\theta)\) is differentiable for all values of \(\theta\), ensuring the applicability of the Gartner-Ellis theorem. We verify that for \(\theta=0\), \(\lambda(0)=0\), consistently with its definition. Further, for \(\theta<0\), \(\lambda(\theta)\) quickly approaches the constant value \(\lambda(-\infty)=-\rho_{\text{KZM}}\zeta(3/2)\), where \(\zeta\) is the Riemann zeta function. Indeed, \(\lambda(\theta)\) is approximately constant for \(\theta<0\) and is a monotonic function of \(\theta\). We define a dimensionless density of defects \(\bar{\rho}\equiv\rho/\rho_{\text{KZM}}\) in terms of which \[I(\rho)=\rho_{\text{KZM}}\sup_{\theta\in\mathbb{R}}\left[\theta\bar{\rho}+ \text{Li}_{3/2}\left(1-e^{\theta}\right)\right]. \tag{14}\] As a result, the rate function (14) is universal in the sense that \(\bar{I}(\bar{\rho})=I(\rho)/\rho_{\text{KZM}}\) varies only with \(\bar{\rho}\) and is independent of the quench time \(\tau_{Q}\). This is the central result of our work, which we elaborate and generalize in what follows. Taking the derivative with respect to \(\theta\), one finds at the supremum \(\theta^{*}\) \[\bar{\rho}=-\frac{e^{\theta^{*}}}{e^{\theta^{*}}-1}\text{Li}_{1/2}\left(1-e^{ \theta^{*}}\right). 
\tag{15}\] The function \(\theta^{*}(\bar{\rho})\) and the rate function scaled by the KZM density \(\bar{I}(\bar{\rho})\) are found numerically. The rate function is shown in Fig. 1. As the decay of the probability density function \(P(\rho)\) is dictated by the rate function according to Eq. (8), the minimum of \(\bar{I}(\bar{\rho})\) at \(\bar{\rho}=1\) is associated with the most likely value of \(\hat{\rho}_{N}\), which equals the mean value \(\rho_{\text{KZM}}\) predicted by the KZM. Thus, LDT guarantees that the KZM prediction holds with maximum probability. Figure 1 also shows that only the very large deviations of the defect density \(\rho\) are sensitive to the finite value of \(\tau_{Q}\). In particular, the larger the quench time \(\tau_{Q}\), the closer the scaled rate function \(\bar{I}\) is to the universal analytical prediction obtained using the LZ approximation. Figure 1: Comparison of the scaled rate function \(\bar{I}(\bar{\rho})=I(\rho)/\rho_{\text{KZM}}\) derived analytically with the numerically exact computation for a finite \(\tau_{Q}\) and \(N\). A TFQI chain, initialized in its ground state, is driven by varying \(g(t)\) from \(g(-3\tau_{Q})=4g_{c}\) to time \(\tau_{Q}\), when \(g(\tau_{Q})=0\). The cumulant generating function \(\lambda(\theta)\) is computed in the final nonequilibrium state using Eq. (7) for finite \(N\), from which the scaled rate function \(\bar{I}\) is found with a Legendre-Fenchel transform. As the quench time increases, the agreement between numerics and the analytical solution based on the LZ approximation improves visibly, while the agreement with the central limit theorem (CLT) prediction, obtained by matching the first and second cumulants, does not. The value at the origin \(\bar{I}(0)=\zeta(3/2)\) follows from Eq. (14), while the minimum \(\bar{I}=0\) is attained at \(\bar{\rho}=1\) (diamond). Finite-size analysis reveals the convergence of the numerically-evaluated rate function \(\bar{I}\) to the thermodynamic limit for \(N=1000\), which is used in this figure. _Concentration inequalities._ Let us tackle the problem of large deviations from a complementary angle using concentration inequalities [50]. To bound large deviations, we make use of the Chernoff bound, which reads \(P(\rho_{N}>\rho)\leq\langle e^{N\theta\hat{\rho}_{N}}\rangle e^{-N\theta\rho}\), for all \(\theta>0\). The characteristic function can be written as \[\left\langle e^{N\theta\hat{\rho}_{N}}\right\rangle=\exp\left\{\frac{N}{2\pi}\int_{0}^{\pi}dk\,\ln\left[1+(e^{\theta}-1)p_{k}\right]\right\}\approx\exp\left[-N\rho_{\text{KZM}}\text{Li}_{3/2}\left(1-e^{\theta}\right)\right]. \tag{16}\] We thus find from the Chernoff bound that \[P(\rho_{N}>\rho)\leq\exp\left\{-N\rho_{\text{KZM}}[\theta\bar{\rho}+\text{Li}_{3/2}(1-e^{\theta})]\right\} \tag{17}\] for all \(\theta>0\). Tightening the above inequality by taking the supremum of the exponent, the right tail of the distribution is bounded as \[P(\rho_{N}>\rho)\leq\exp[-NI(\rho)], \tag{18}\] with \(I(\rho)\) given by Eq. (14). Likewise, the left tail is bounded by the same term, \(P(\rho_{N}<\rho)\leq\exp[-NI(\rho)]\). The logarithm of the two-sided Chernoff bound is the rate function. The above results establish the nature of large deviations of kink-antikink pairs formed across the quantum phase transition between the paramagnetic and the ferromagnetic phase of the TFQIM. 
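All of the quantities above are straightforward to evaluate numerically. The sketch below is illustrative only (units with \(\hbar=J=1\); it is not the authors' code): it builds the LZ excitation probabilities \(p_{k}\) for a finite chain, obtains the rate function by a grid Legendre-Fenchel transform of the cumulant generating function, and compares the exact Poisson-binomial right tail of the kink-pair number with the bound \(e^{-NI(\rho)}\).

```python
import numpy as np

N, tau_Q = 400, 10.0                         # chain size and quench time (hbar = J = 1)
k = (2 * np.arange(N // 2) + 1) * np.pi / N  # positive quasimomenta, k = (2n+1)pi/N
p = np.exp(-2 * np.pi * tau_Q * k**2)        # Landau-Zener excitation probabilities

rho_kzm = p.sum() / N                        # mean kink-pair density

def lam(theta):
    """Scaled cumulant generating function, finite-N analogue of Eq. (12)."""
    return np.sum(np.log1p((np.exp(theta) - 1.0) * p)) / N

thetas = np.linspace(-25.0, 25.0, 3001)
lam_vals = np.array([lam(t) for t in thetas])

def rate(rho):
    """Legendre-Fenchel transform, Eq. (9), evaluated on the theta grid."""
    return np.max(thetas * rho - lam_vals)

# Exact Poisson-binomial distribution of the kink-pair number: convolve Bernoulli factors.
dist = np.array([1.0])
for pk in p:
    dist = np.convolve(dist, [1.0 - pk, pk])
right_tail = np.cumsum(dist[::-1])[::-1]     # right_tail[n] = P(K_N >= n)

mean_pairs = N * rho_kzm
for n in (int(2 * mean_pairs), int(3 * mean_pairs)):
    rho = n / N
    print(f"P(K_N >= {n}) = {right_tail[n]:.3e}  vs  exp(-N I) = {np.exp(-N * rate(rho)):.3e}")
```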
These results are generalizable to the family of quasi-free fermion models in which the density of defects is given by the density of quasiparticles. In addition, the universal form of the scaled cumulant generating function and the rate function in the slow quench limit also holds when fast-decaying long-range interactions are considered [47], further confirming their universality under fast-decaying long-range deformations. We next turn our attention to an arbitrary continuous phase transition described by the KZM. _LDT beyond KZM: General scenario._ Consider a scenario of spontaneous symmetry breaking leading to the generation of point-like defects in \(d\) spatial dimensions. The KZM exploits the equilibrium scaling relations for the correlation length \(\xi\) and the relaxation time \(\tau\), i.e., \[\xi=\frac{\xi_{0}}{|\varepsilon|^{\nu}},\qquad\tau=\frac{\tau_{0}}{|\varepsilon|^{z\nu}}, \tag{19}\] where \(\nu\) and \(z\) are critical exponents and \(\xi_{0}\) and \(\tau_{0}\) are microscopic constants. The dimensionless variable \(\varepsilon=(g_{c}-g)/g_{c}\) quantifies the distance to the critical point \(g_{c}\), and vanishes at the phase transition. Linearizing the driving protocol in the neighborhood of \(g_{c}\) as \(g(t)=g_{c}(1-t/\tau_{Q})\), one identifies the quench time \(\tau_{Q}\). The KZM sets the nonequilibrium correlation length to be \(\hat{\xi}=\xi_{0}(\tau_{Q}/\tau_{0})^{\frac{\nu}{1+z\nu}}\) [11; 12]. During the phase transition, the system is partitioned into protodomains of average volume \(\hat{\xi}^{d}\). A defect may be generated with an empirical probability \(p\) at the merging point between adjacent domains. For point-like defects, the number of events for defect formation is estimated as \(\mathcal{N}=V_{d}/(f\hat{\xi}^{d})\), where \(V_{d}\) is the volume in \(d\) spatial dimensions and \(f\) is a fudge factor of order one [51; 52; 37]. As a result, the number of events scales as \(\mathcal{N}=[V_{d}/(f\xi_{0}^{d})](\tau_{0}/\tau_{Q})^{\frac{d\nu}{1+z\nu}}\). Assume that defect formation events at different locations are described by independent and identically distributed discrete random variables \(X_{i}\) with \(i=1,\ldots,\mathcal{N}\) [37; 38; 39; 40], where the outcome \(X_{i}=+1\) corresponds to the formation of a topological defect, and \(X_{i}=0\) to no defect formation. The defect number distribution takes the binomial form \(P(n)={\binom{\mathcal{N}}{n}}p^{n}(1-p)^{\mathcal{N}-n}\). 
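For such binomial counting statistics, the corresponding rate function follows from a textbook application of Cramér's theorem to i.i.d. Bernoulli variables; the expression below is quoted as a standard LDT result for illustration (with \(x=n/\mathcal{N}\) denoting the defect fraction), not as a statement taken from the derivation above: \[\lambda(\theta)=\ln\left(1-p+p\,e^{\theta}\right),\qquad I(x)=\sup_{\theta\in\mathbb{R}}\big[\theta x-\lambda(\theta)\big]=x\ln\frac{x}{p}+(1-x)\ln\frac{1-x}{1-p},\qquad 0\leq x\leq 1.\]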
Numerical studies support this prediction in \(d=1,2\) for varying \(\tau_{Q}\) [40; 37; 38; 39]. _Summary._ We have characterized the universality of large deviations in the number of topological defects generated across a quantum phase transition driven in finite time, and showed that the rate function is proportional to the KZM density of kink pairs. The rate function exhibits a universal power-law scaling with the quench time in which the phase transition is crossed. We have further generalized these findings to account for the dynamics of arbitrary continuous phase transitions described by the KZM. We have thus proved the KZM, showing that the defect density concentrates at the KZM prediction in the thermodynamic limit, and provided a framework to characterize universal deviations in current experiments with moderate system sizes. Our results are of broad interest in nonequilibrium quantum and classical statistical mechanics, connecting large deviations with the breakdown of adiabatic dynamics, and should find broad applications in quantum simulation, quantum annealing, ultracold atom physics, and the study of critical phenomena. _Acknowledgements._ AdC thanks SISSA for its hospitality during the early stages of this work. We thank Federico Roccati for his feedback on the manuscript. 
This project was funded within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 16434093. AG acknowledges financial support from the PNRR MUR project PE0000023-NQST. For open access and in fulfillment of the obligations arising from the grant agreement, the authors have applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission.
2305.03043
Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization
There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Methods based on neural implicit representations, such as signed distance functions (SDF) or neural radiance fields, approach photo-realism, but are difficult to animate and do not generalize well to unseen data. To tackle this problem, we propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing. Trained from a collection of high-quality 3D scans, our face model is parameterized by geometry, expression, and texture latent codes with a learned SDF and explicit UV texture parameterization. Once trained, we can reconstruct an avatar from a single in-the-wild image by leveraging the learned prior to project the image into the latent space of our model. Our implicit morphable face models can be used to render an avatar from novel views, animate facial expressions by modifying expression codes, and edit textures by directly painting on the learned UV-texture maps. We demonstrate quantitatively and qualitatively that our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis
2023-05-04T17:58:40Z
http://arxiv.org/abs/2305.03043v1
# Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization ###### Abstract. There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Methods based on neural implicit representations, such as signed distance functions (SDF) or neural radiance fields, approach photo-realism, but are difficult to animate and do not generalize well to unseen data. To tackle this problem, we propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing. Trained from a collection of high-quality 3D scans, our face model is parameterized by geometry, expression, and texture latent codes with a learned SDF and explicit UV texture parameterization. Once trained, we can reconstruct an avatar from a single in-the-wild image by leveraging the learned prior to project the image into the latent space of our model. Our implicit morphable face models can be used to render an avatar from novel views, animate facial expressions by modifying expression codes, and edit textures by directly painting on the learned UV-texture maps. We demonstrate quantitatively and qualitatively that our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods. + Footnote †: Work done during an internship at NVIDIA. + Footnote †: Work done during an internship at NVIDIA. that are more easily integrated into downstream applications and pipelines. Single-shot personalized avatar creation enables reconstructing face avatars from individual RGB images with greater convenience and flexibility than methods that require more specialized capture setups or procedures. Traditional approaches to animatable 3D avatar creation are often based on 3D Morphable Models (3DMM) (Blanz and Vetter, 1999), which disentangle shape and appearance variation into a low-dimensional face representation. Building on these, more recent approaches often leverage either explicit (textured) template meshes (Danceek et al., 2022; Feng et al., 2021; Grassal et al., 2022; Khakhulin et al., 2022; Li et al., 2017; Tran and Liu, 2019) or neural implicit representations (Mildenhall et al., 2021; Park et al., 2019; Sitzmann et al., 2019). Template-based approaches enable easy asset extraction and intuitive editing, but are often unable to capture high-quality geometry and textures. Emerging implicit face models can achieve greater realism by modeling more complex geometric features such as hair (Cao et al., 2022; Giebenhain et al., 2022; Zheng et al., 2022). However, implicit face representations often compromise on interpretability and are less intuitive to control; the entangled latent spaces learned by these highly parameterized models are difficult to edit. Our approach aims to combine the interpretability and editability advantages of template-based 3DMMs with the quality and topological flexibility of implicit 3D representations. Crucially, we decouple appearance and geometry into two branches of our network architecture. By incorporating a UV parameterization network to learn continuous and consistent texture maps, we can export avatars as textured meshes to support downstream applications such as texture map editing and relighting in a traditional graphics pipeline (See Figure 1). 
On the other hand, by representing geometry with an implicit signed distance field (SDF), our facial shape is less limited by resolution and topology compared to mesh-based approaches. We show that our proposed hybrid representation effectively captures the geometry, appearance, and expression space of faces. We demonstrate that single-shot in-the-wild portrait images can be effectively mapped to avatars based on our proposed representation, and that these avatars improve upon the previous state-of-the-art in photo-realism, geometry, and monocular expression transfer. Moreover, we demonstrate compelling capability for enabling direct texture editing and disentangled attribute editing such as facial geometry and appearance attributes. In summary, contributions of our work include: * We propose a hybrid morphable face model combining the high-quality geometry and flexible topology of implicit representations with the editability of explicit UV texture maps. * We present a single-shot inversion framework to map a single in-the-wild RGB image to our implicit 3D morphable model representation. The inverted avatar supports novel view rendering, non-linear facial reanimation, disentangled shape and appearance control, direct texture map editing, and textured mesh extraction for downstream applications. * We demonstrate state-of-the-art reconstruction accuracy for photo-realistic rendering, geometry, and expression accuracy in the single-view reconstruction setting. ## 2. Related Work ### Mesh-based 3D Morphable Models The seminal work by Blanz and Vetter proposed a linear 3D Morphable Model (3DMM) (Blanz and Vetter, 1999) that models facial shape and textures on a template mesh using linear subspaces computed by principal component analysis (PCA) from 200 facial scans. This low-dimensional facial shape and texture space makes 3DMMs suitable for robustly capturing facial animation as well as reconstructing 3D faces in monocular settings. To reconstruct shape, texture, and lighting from a photo, previous work employed continuous optimization using constraints such as facial landmarks and pixel colors (Cao et al., 2014, 2016; Garrido et al., 2013, 2016; Ichim et al., 2015; Li et al., 2017; Romdhani and Vetter, 2005; Shi et al., 2014; Thies et al., 2016) and more recently deep learning-based inference (B R et al., 2021; Danecek et al., 2022; Deng et al., 2019; Dib et al., 2021; Dou et al., 2017; Feng et al., 2021; Genova et al., 2018; Luo et al., 2021; Tewari et al., 2019; Tewari et al., 2017; Tuan Tran et al., 2017; Wu et al., 2019). While approaches relying on 3DMMs tend to be robust, they are ineffective for reconstructing high-fidelity geometry and texture details due to the linearity and low dimensionality of the model. Various other methods extended 3DMMs to capture non-linear shapes (Chandran et al., 2020; Li et al., 2020; Tewari et al., 2018; Tran et al., 2019; Tran and Liu, 2018, 2019; Wang et al., 2022), photo-realistic appearance using neural rendering or optimization (Gecer et al., 2019; Nagano et al., 2018; Saito et al., 2017; Thies et al., 2019), or reflectance and geometry details for relightable avatar generation (Chen et al., 2019; Huynh et al., 2018; Lattas et al., 2020; Yamaguchi et al., 2018). Recent approaches predict geometry offsets over the template mesh to reconstruct non-facial regions such as hair (Grassal et al., 2022; Khakhulin et al., 2022). We refer the reader to Egger et al. (2020) for an in-depth survey of 3DMM techniques and Tewari et al. 
(2022) for a report of recent advancements in neural rendering. Since mesh-based 3DMMs represent geometry with a shared template mesh, their fixed topology limits the ability to scale the model to capture complex geometry such hair or fine-scale details. Additionally, their ability to synthesize photo-realistic facial textures may be limited by the resolution of the template mesh and discrete texture map. By parameterizing geometry with a signed distance function and color with a continuous texture map, our method is able to avoid such resolution issues and scale more efficiently with model capacity while retaining 3DMM-like intuitive parameters to individually control geometry and textures. Our consistent texture parameterization enables not only direct texture editing in UV space, \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Generalizable Single-Image} & \begin{tabular}{c} Implicit \\ Representation \\ Content \\ \end{tabular} & \begin{tabular}{c} Explicit \\ Texture \\ Content \\ \end{tabular} \\ \hline EMOCA (2022) & ✓ & ✓ & ✗ & ✓ \\ ROMI (2022) & ✓ & ✓ & ✗ & ✗ \\ Neural Parametric Head Models (2022) & ✗ & ✓ & ✗ & ✗ \\ IR-Admiral (2022) & ✗ & ✗ & ✓ & ✗ \\ Neural Neural Features (2022) & ✗ & ✗ & ✓ & ✗ \\ Neural Feature Features (2022) & ✗ & ✗ & ✓ & ✓ \\ Valumetric Contrast & ✗ & ✗ & ✓ & ✓ \\ HealthNet (2022) & ✓ & ✓ & ✓ & ✗ \\ **Ours** & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison to recent prior work. To the best of our knowledge, our method is the first implicit 3D face model to generalize across single-image inputs while supporting flexible topology and explicit texture map control. but also semantic correspondence between our face model and an input image via facial landmarks, which can be leveraged to improve single-shot reconstruction quality. ### Implicit Representations for Modeling and Rendering While single-shot 3D reconstruction methods have explored various explicit 3D representations such as voxels [11, 12, 13, 14, 15, 16], point clouds [10], meshes [15], geometric primitives [13, 14], and depth maps [13], implicit representations have recently been leveraged to achieve higher resolution reconstruction using occupancy or signed distance fields (SDFs) [10, 12, 13]. Implicit representations such as neural radiance fields (NeRFs) [11] and signed distance fields (SDFs) [14] have demonstrated high reconstruction quality for 3D shapes and volumetric scenes. PIFu [15] and follow-up works [17, 18] use implicit fields to model human bodies and clothing. AtlasNet [19] demonstrated 3D shape generation by predicting a set of parametric surface elements given an input image or point cloud. NeuTex [13] replaces the radiance prediction of NeRFs with a learned UV texture parameterization conditioned on lighting direction. Although our method also employs a UV cycle consistency loss, we 1) operate in a SDF setting and condition our parameterization on geometry and expression latent codes to generalize across samples rather than overfit to a single scene, 2) employ sparse facial landmark constraints to facilitate learning a semantically intuitive and consistent parameterization, and 3) explicitly leverage 2D to 3D facial landmark correspondences enabled by the learned consistent parameterization during single-image reconstruction. 
Implicit representations have also given rise to higher quality 3D generative models [10, 12, 13, 14], and follow-up work has studied inverting an image into the latent space of a pre-trained 3D GAN [17, 18, 19] for single-view 3D reconstruction. However, without careful optimization and additional priors [13, 14], this 3D GAN inversion tends to be less robust due to unknown camera poses [17] and the multi-view nature of NeRF training in the monocular setting. On the other hand, the compact face representation of our model provides robust initialization in the single-shot reconstruction setting. ### Implicit Face Models Compared to traditional mesh-based 3DMMs for face modeling, implicit representations naturally offer flexible topology and non-linear expression animation through latent code conditioning. While some approaches learn to reconstruct an implicit 3DMM from an input 3D face scan [1, 12, 13, 14, 15], other works have explored modeling an implicit face model from RGB videos [13, 14, 15, 16]. However, the above approaches either do not support or cannot generalize to single-shot in-the-wild images. Multi-view methods have also been used to reconstruct implicit head models. 
## 3. Method The UV mapping is constrained to follow an invertible surface parameterization, which enables correspondences between texture and geometry used in our single-shot inversion pipeline, described in Section 3.5. The geometry loss is given by \[\mathcal{L}_{geom}=\ell_{surf}+\ell_{eikonal}+\ell_{normal}+\ell_{uo} \tag{2}\] \[\ell_{surf}=\frac{1}{|X|}\sum_{x\in X}|f(x)| \tag{3}\] \[\ell_{eikonal}=\mathbb{E}_{x}(\|\nabla_{x}f(x)\|-1)^{2} \tag{4}\] \[\ell_{normal}=\frac{1}{|X|}\sum_{x\in X}\|\nabla_{x}f(x)-\hat{n}(x)\|^{2} \tag{5}\] \[\ell_{uo}=\frac{1}{|X|}\sum_{x\in X}\|x-g^{-1}(g(x))\|^{2} \tag{6}\] The color loss consists of a reconstruction loss \(\ell_{tex}\) on the ground truth texture \(\hat{T}\), as well as perceptual (Zhang et al., 2018) and reconstruction losses \(\ell_{img}\) over the facial region \(I_{face}\) between the ground truth image \(\hat{I}\) and rendered image \(I\) obtained via sphere tracing: \[\mathcal{L}_{color}=\ell_{tex}+\ell_{img} \tag{7}\] \[\ell_{tex}=\frac{1}{|X|}\sum_{x\in X}\|\hat{T}(x)-h(g(x))\|^{2} \tag{8}\] \[\ell_{img}=LPIPS(\hat{I}_{face},I_{face})+\|\hat{I}_{face}-I_{face}\|^{2} \tag{9}\] Finally, we enforce compactness of the learned latent space by penalizing the magnitude of the geometry, color, and expression codes: \[\mathcal{L}_{reg}=\|w_{geom}\|^{2}+\|w_{color}\|^{2}+\|w_{expr}\|^{2} \tag{10}\] ### Learning UV Parameterizations To learn an interpretable texture space and coherent semantic correspondence across subjects, we add an auxiliary loss term to \(\mathcal{L}_{reg}\) that enforces the parameterization to be consistent through a sparse set of facial landmark constraints: \[\ell_{landmark}=\frac{1}{|L|}\sum_{x\in L}\|\hat{g}(x)-g(x)\|^{2}+\|x-g^{-1}(g(x))\|^{2} \tag{11}\] The first term enforces the learned UV mapping to match the ground truth UV mapping \(\hat{g}\) for the set of 3D facial landmark points \(L\), and the second term enforces this mapping to be invertible. Fig. 8 demonstrates the consistency of our learned UV parameterization. Although mostly consistent, it is difficult to obtain perfect registrations around the inner mouth and eyes due to the billboard geometry and errors originating from the ground truth data. Figure 3. Single-shot inversion pipeline. We de-light the input image and initialize the latent codes using a pre-trained encoder (top row). We then perform PTI (Roich et al., 2022) to get the final reconstruction (bottom row). Original image courtesy of Brett Jordan/flickr. Figure 2. Our Pipeline. Avatars are represented by geometry, expression, and color latent codes \(\{w_{geom},w_{expr},w_{color}\}\) with each being 512 dimensional. At each 3D coordinate \(p\) during sphere tracing, the SDF network \(f\) and UV parameterization network \(g\) are conditioned on \(w_{geom}\), \(w_{expr}\), and positional encoding \(PE(p)\) to predict the signed distance \(SDF(p)\) and UV coordinates \(UV(p)\), respectively. 
The inverse UV parameterization network \(g^{-1}\) regularizes the learned mapping to be a surface parameterization \(g^{-1}(UV(p);w_{geom}\), \(w_{expr})=p\), while the color network \(h\) predicts the associated RGB texture \(RGB(p)=h(UV(p);w_{color},w_{expr})\). After training, the avatar can be rendered freely with direct control over its texture and facial expression, or extracted as a stand-alone textured mesh asset. ### Animation After training, an avatar can be animated by manipulating its expression latent code. For a source subject with expression code \(w_{expr}\), target expression code \(w^{\prime}_{expr}\), and animation timesteps \(t\in[0,1]\), we define the expression animation trajectory by: \[w_{expr}(t)=w_{expr}+t*(w^{\prime}_{expr}-w_{expr}) \tag{12}\] Unlike traditional linear 3DMM approaches, our expression space follows non-linear trajectories learned from high-quality 3D scans, as shown in Fig. 4. ### Single-Shot Inversion In order to reconstruct and animate unseen subjects, we project an input RGB image into the latent space of our pre-trained model and lightly fine-tune the model weights similar to Pivotal Tuning Inversion (PTI) (Roich et al., 2022). To handle unseen lighting conditions, we de-light the input image using LUMOS (Yeh et al., 2022) and initialize the geometry, color, and expression codes through a separately trained encoder. We empirically find this encoder initialization to be important in obtaining robust results for in-the-wild input images (See Figure 9). Image EncoderWe attain latent code initializations by training a DeepLabV3+ (Chen et al., 2018) encoder to reconstruct each training image \(\hat{I}\) and its corresponding latent codes \(\hat{W}\) already computed from the previous AutoDecoder training stage: \[\mathcal{L}_{enc} =\|\hat{I}-I\|^{2}+\|\hat{W}-W\|^{2} \tag{14}\] \[W =[w_{geom};w_{color};w_{expr}] \tag{13}\] One major challenge when inverting in-the-wild images is handling unseen identities, accessories, hairstyles, and occlusion present in real-world images, as Triplegangers contain limited identities with no variations in hairstyles or background. Therefore, we augment the encoder's training dataset with synthetically augmented Triplegangers images from (Yeh et al., 2022), which improves the robustness of the initialization and final inversion reconstruction, shown in Fig. 9. OptimizationAfter initializing the latent codes for an input image \(\hat{I}\) using our encoder, we freeze the model weights and optimize the latent codes while minimizing image, silhouette, multi-view consistency, facial landmark, and regularization losses: \[\ell_{img} =LPIPS(\hat{I}_{face};I_{face})+\|\hat{I}_{face}-I_{face}\|^{2} \tag{16}\] \[\ell_{silhouette} =\sum_{x\in\hat{I}_{face}\times x\in\hat{I}_{face}}f(x)\] (17) \[\ell_{ID} =ArcFace(\hat{I},\hat{I},I_{rand})\] (18) \[\ell_{landmark} =\sum_{d\in D(\hat{I})}\|d-proj_{2D}(g^{-1}(\hat{d}))\|^{2}\] (19) \[\ell_{reg} =\|w_{geom}\|^{2}+\|w_{color}\|^{2}+\|w_{expr}\|^{2} \tag{15}\] where the silhouette loss \(\ell_{silhouette}\) iterates over points contained in the ground truth face region \(\hat{I}_{face}\), but not in the predicted face region \(I_{face}\), to bring the points closer to the SDF zero level set. ArcFace (Deng et al., 2019) measures the face similarity between different views and \(I_{rand}\) is a predicted render from a randomly perturbed camera pose. 
\(D\) is an off-the-shelf facial landmark detector (King, 2009) and \(\hat{d}\) is the ground truth facial landmark UV mapping enforced in Eq. 11. Note that our consistent UV parameterization directly enables correspondences for the facial landmark alignment loss \(\ell_{landmark}\); Fig. 10 demonstrates the benefits of incorporating this loss. The regularization loss \(\ell_{reg}\) is important to ensure that the optimized codes stay near the manifold of the pre-trained latent space for expression animation. We obtain face masks using a pre-trained BiSeNet (Yu et al., 2018) and optimize for 800 steps. Fine-tuningTo reconstruct finer details in the input image, we freeze the latent codes after optimization and fine-tune the model weights on the above losses. We omit the silhouette loss, as we find it tends to bloat the geometry when the model weights are unfrozen. Although fine-tuning the model improves reconstruction quality, it may also hinder its capability for animation or novel view synthesis. Therefore, we only perform model fine-tuning for 60 steps. ## 4. Results We present results of our proposed method with comparisons to EMOCA (Danecek et al., 2022), ROME (Khakhulin et al., 2022) and FaceVerse (Wang et al., 2022), three recent mesh-based approaches for single-shot 3D avatar generation, and HeadNeRF (Hong et al., 2022), an implicit approach using neural radiance fields. Our method achieves higher fidelity texture and geometry reconstruction in the facial region compared to the baselines. Qualitatively and quantitatively, our method also demonstrates more faithful expression and pose transfer between in-the-wild source and target images. Finally, our learned texture map is intuitive to edit and propagates naturally during animation. ### Implementation Details Our model is trained in two stages. In the first stage, we withhold the ground truth multi-view images, as we find that supervising with both texture maps and multi-view images negatively impacts the model's ability to learn a consistent UV mapping. In the second stage, Figure 4. Non-linear animation space. By linearly interpolating between source and target expression codes, our model exhibits non-linear deformation trajectories on the 3D mouth vertices visualized. Original image courtesy of David Shankbone/flickr. we freeze the UV networks \(\{g,g^{-1}\}\) and supervise using the multi-view images to fine-tune the learned texture maps while rendering image reconstructions at \(768\times 512\) resolution. Camera poses are provided with ground truth training data and we estimate camera poses for in-the-wild FFHQ images using Deep3DFaceRecon [4]. We perform sphere tracing for 50 steps per ray and use a dimensionality of 512 for the geometry, color, and expression latent codes. We train our AutoDecoder for 1000 epochs (approx. one week) and our inversion encoder for 200 epochs (approx. one day) across 8 NVIDIA A40 GPUs. We use a Triplegangers training/test split of 386/129 for the quantitative expression experiments. Sphere tracing takes 8.5 seconds and inversion takes 3 hours per image. See supplemental material for more details on training and model architectures. 
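To make the rendering step mentioned above concrete, the snippet below sketches sphere tracing of rays against a generic signed distance function for a fixed step budget; the toy SDF, the ray setup, and all numerical values are illustrative placeholders rather than the paper's implementation.

```python
import numpy as np

def sphere_trace(origins, dirs, sdf, n_steps=50, eps=1e-4, t_max=4.0):
    """March each ray o + t*d until the SDF is ~0 or the step budget is exhausted.

    origins, dirs : (R, 3) arrays of ray origins and unit directions
    sdf           : callable mapping (R, 3) points to (R,) signed distances
    """
    t = np.zeros(origins.shape[0])
    hit = np.zeros(origins.shape[0], dtype=bool)
    for _ in range(n_steps):
        pts = origins + t[:, None] * dirs
        dist = sdf(pts)
        hit |= np.abs(dist) < eps
        active = ~hit & (t < t_max)   # advance only rays that have not converged or escaped
        t[active] += dist[active]
    return origins + t[:, None] * dirs, hit

# Toy SDF of a unit sphere at the origin, standing in for a learned SDF network f.
unit_sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0

origins = np.tile(np.array([0.0, 0.0, -3.0]), (5, 1))
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (5, 1))
surface_pts, hit = sphere_trace(origins, dirs, unit_sphere)
print(hit, surface_pts[0])
```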
\begin{table} \begin{tabular}{l c c c c|c c c} \hline \hline Reconstruction & LPIPS\(\downarrow\) & DISTS\(\downarrow\) & SSIM\(\uparrow\) & Pose\(\downarrow\) & ID\(\uparrow\) & L1 & RMSE \\ & & & & & Depth\(\downarrow\) & Depth\(\downarrow\) & Depth\(\downarrow\) \\ \hline EMOCA & 0.1122 & 0.1268 & 0.9182 & 0.0681 & 0.0697 & 0.0300 & 0.0677 \\ ROME & 0.1054 & 0.1130 & 0.9317 & 0.0600 & 0.3866 & 0.0237 & 0.0513 \\ HeadNeRF & 0.1090 & 0.1199 & 0.9268 & 0.0606 & 0.2334 & 0.0379 & 0.0695 \\ Ours (optimization-free) & 0.1427 & 0.1465 & 0.9053 & 0.0549 & 0.1082 & 0.0357 & 0.0658 \\ Ours (encoder-free) & 0.0890 & 0.0921 & 0.9441 & **0.0533** & 0.4600 & 0.0241 & 0.0527 \\ Ours & **0.0879** & **0.0905** & **0.9451** & 0.0563 & **0.4670** & **0.0228** & **0.0510** \\ \hline \hline \end{tabular} \begin{tabular}{l c c} \hline \hline Retargeting & FACS\(\downarrow\) & Facial \\ & & Landmarks\(\downarrow\) \\ \hline EMOCA & 4.712 & 0.2088 \\ ROME & 3.204 & 0.1414 \\ HeadNeRF & 3.848 & 0.1641 \\ Ours & **1.733** & **0.1165** \\ \hline \hline \end{tabular} \begin{tabular}{l c} \hline \hline Retargeting & FACS\(\downarrow\) & Facial \\ & & Landmarks\(\downarrow\) \\ \hline EMOCA & 4.712 & 0.2088 \\ ROME & 3.204 & 0.1414 \\ HeadNeRF & 3.848 & 0.1641 \\ Ours & **1.733** & **0.1165** \\ \hline \hline \end{tabular} \begin{tabular}{l c} \hline \hline Retargeting & FACS\(\downarrow\) & Facial \\ & & Landmarks\(\downarrow\) \\ \hline EMOCA & 4.712 & 0.2088 \\ ROME & 3.204 & 0.1414 \\ HeadNeRF & 3.848 & 0.1641 \\ Ours & **1.733** & **0.1165** \\ \hline \hline \end{tabular} \begin{tabular}{l c} \hline \hline Retargeting & FACS\(\downarrow\) & Facial \\ & & Landmarks\(\downarrow\) \\ \hline EMOCA & 4.712 & 0.2088 \\ ROME & 3.204 & 0.1414 \\ HeadNeRF & 3.848 & 0.1641 \\ Ours & **1.733** & **0.1165** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative comparison with FaceVerse [4] on 500 sampled FFHQ images for single-shot in-the-wild reconstruction. Figure 5: Single-shot reconstruction on FFHQ with expression and pose transfer. On the left, we show the input FFHQ source image, de-lit input image using LUMOS [12], and reconstruction results for each method. On the right, we show monocular performance capture and retargeting, where we reconstruct and transfer the expression and pose from a target image (right-most column) to the source image identity (left-most column). On the left from top to bottom, original images are courtesy of José Carlos Cortizo Pérez/flickr, Montclair Film/flickr, Pham Toan/flickr, Javier Morales/flickr, Khiet Nguyen/flickr, and Malcolm Slaney/flickr. On the right from top to bottom, original images are courtesy of Adam Charnock/flickr, Daughterville Festival/flickr, Delaney Turner/flickr, South African Tourism/flickr, Pat (Cletch) Williams/flickr, and Collision Conf/flickr. ### Single-Shot 3D Face Reconstruction and Animation _Qualitative Results._ We show qualitative comparisons for single-shot reconstruction followed by expression and pose transfer on FFHQ [Karras et al.2019] images between the proposed method, EMOCA, ROME, and HeadNeRF in Fig. 5 and Fig. 13. Overall, our method is more photo-realistic and achieves higher expression accuracy in facial reconstruction. EMOCA does not model the mouth interior and relies on a pre-trained FLAME [Li et al.2017] albedo model for texture. Our model produces the most faithful expression transfer, demonstrating the diversity of its learned expression space and generalization capabilities of our method to in-the-wild data. 
HeadNeRF exhibits a large amount of identity shift during pose transfer, whereas our method remains view-consistent after large pose changes. We also show a ground truth comparison of reconstructed geometry on the H3DS [Ramon et al.2021] dataset between our method and the baselines in Fig. 6. HeadNeRF performs volumetric rendering at a low resolution and therefore produces noisy depth results. Our geometry captures higher fidelity facial geometry than ROME and captures the expression more faithfully (e.g., eye blink) compared to EMOCA. _Quantitative Results._ We report quantitative reconstruction and self-reenactment expression transfer results in Table 2 and Table 3. The photometric (LPIPS [Zhang et al.2018], DISTS [Ding et al.2020], SSIM [Wang et al.2004]), pose error, and MagFace [Meng et al.2021] identity consistency (ID) metrics are calculated over a dataset of 500 images from FFHQ. We compute L1 and RMSE depth error over all subjects in the H3DS dataset. To evaluate self-reenactment expression error, we randomly sample 32 source-target expression pairs over a test split of the Tripleangers dataset and measure the L2 error for FACS [Ekman and Friesen1978] coefficients and facial landmarks. For details related to how each metric is computed, please refer to the supplemental material. On the FFHQ dataset, our proposed method achieves the best accuracy in terms of LPIPS, DISTS, SSIM, and ID score. The optimization-free ablation struggles to handle the considerably large domain shift between Tripleangers training data and FFHQ in-the-wild images. Our model also exhibits the lowest depth error on the H3DS dataset without relying on a 3D template mesh prior. Finally, our model has the lowest FACS and facial landmark errors, demonstrating the diversity of its learned expression space. ### Ablations In addition to the baselines mentioned, we compare our method to two ablations for single-shot reconstruction. The first ablation is an optimization-free inversion approach that only uses the learned encoder to directly map an input image to the geometry, color, and expression codes \(\{\mathbf{w}_{\textit{geom}},\mathbf{w}_{\textit{color}},\mathbf{w}_{\textit{expr}}\}\). The second ablation is an encoder-free inversion approach that omits the encoder and instead uses a mean initialization for \(\{\mathbf{w}_{\textit{geom}},\mathbf{w}_{\textit{color}},\mathbf{w}_{\textit{expr}}\}\) over the learned AutoDecoder dictionary of latent codes. Quantitative results for the ablations are reported in Table 2. The optimization-free approach produces significantly worse photometric and depth results, as there is a large domain gap between Tripleangers training data and in-the-wild images; this causes the encoder to produce a coarse reconstruction. The encoder-free approach performs better than the optimization-free approach but is still worse than our full method in image and geometry quality, demonstrating that the encoder initialization improves the optimization reconstruction. Both ablations and our full method perform similarly on pose accuracy. _Applications._ As demonstrated in Fig. 5, our method directly supports monocular facial performance capture and expression retargeting. Our hybrid representation provides direct control over an intuitive texture map with a consistent layout. Fig. 7 demonstrates an example workflow: a user reconstructs an input image and modifies the learned texture map. The edits then continue to persist smoothly across different facial animations. 
Textured meshes can be extracted for further downstream applications such as re-lighting, as shown in the teaser. Fig. 11 and Fig. 12 further demonstrate our model's disentanglement between geometry, texture, and expression with its capability of shape and facial appearance transfer. Our method combines implicit representations with explicit texture maps to support explicit editing while achieving better photo-realistic rendering, geometry, and expression reconstruction than previous methods. We believe the proposed method makes important contributions towards accessible creation of high-fidelity avatars from in-the-wild images that are animatable, editable, and customizable for downstream applications. However, there are still limitations to the method. Firstly, the current optimization process during inversion is significantly slower than encoder-based methods. For real-time applications, more expressive representations such as neural feature fields can be explored to enable optimization-free inversion methods. Furthermore, the method relies on a de-lighting module from Lumos to process in-the-wild images to generate a diffusely lit input image, which may cause subjects to appear paler than expected. These limitations may be alleviated through lighting augmentations of the training dataset to reduce the domain gap and incorporating a lighting model such as spherical harmonics into the representation. Finally, the results shown in this paper do not capture hair or accessories due to limitations of the training dataset. While not perfect, we refer to the supplemental material for a preliminary demonstration of our representation's capacity to handle hair and clothing on the smaller RenderPeople dataset. As implicit representations such as neural radiance fields excel at capturing the geometry and texture of thin structures, it may be fruitful to combine our method with recent sparse view implicit hair models (Kuang et al., 2022; Wu et al., 2022).
###### Acknowledgements. We thank Simon Yuen and Miguel Guerrero for helping with preparing the 3D scan dataset and assets, and Ting-Chun Wang for providing Lumos code. We also thank Nicholas Sharp, Sanja Fidler and David Luebke for helpful discussions and support. This project was supported in part by a David Cheriton Stanford Graduate Fellowship, ARL grant W911NF-21-2-0104, a Vannevar Bush Faculty Fellowship, a gift from the Adobe corporation, Samsung, and Stanford HAI.
2304.14616
A gauge constrained algorithm of VDAT at $\mathcal{N}=3$ for the multi-orbital Hubbard model
The recently developed variational discrete action theory (VDAT) provides a systematic variational approach to the ground state of the quantum many-body problem, where the quality of the solution is controlled by an integer $\mathcal{N}$, and increasing $\mathcal{N}$ monotonically approaches the exact solution. VDAT can be exactly evaluated in the $d=\infty$ multi-orbital Hubbard model using the self-consistent canonical discrete action theory (SCDA), which requires a self-consistency condition for the integer time Green's functions. Previous work demonstrates that $\mathcal{N}=3$ accurately captures multi-orbital Mott/Hund physics at a cost similar to the Gutzwiller approximation. Here we employ a gauge constraint to automatically satisfy the self-consistency condition of the SCDA at $\mathcal{N}=3$, yielding an even more efficient algorithm with enhanced numerical stability. We derive closed form expressions of the gauge constrained algorithm for the multi-orbital Hubbard model with general density-density interactions, allowing VDAT at $\mathcal{N}=3$ to be straightforwardly applied to the seven orbital Hubbard model. We present results and a performance analysis using $\mathcal{N}=2$ and $\mathcal{N}=3$ for the $\textrm{SU}(2\textrm{N}_{\textrm{orb}})$ Hubbard model in $d=\infty$ with $\textrm{N}_{\textrm{orb}}=2-8$, and compare to numerically exact dynamical mean-field theory solutions where available. The developments in this work will greatly facilitate the application of VDAT at $\mathcal{N}=3$ to strongly correlated electron materials.
Zhengqian Cheng, Chris A. Marianetti
2023-04-28T03:44:34Z
http://arxiv.org/abs/2304.14616v2
# A gauge constrained algorithm of VDAT at \(\mathcal{N}=3\) for the multi-orbital Hubbard model ###### Abstract The recently developed variational discrete action theory (VDAT) provides a systematic variational approach to the ground state of the quantum many-body problem, where the quality of the solution is controlled by an integer \(\mathcal{N}\), and increasing \(\mathcal{N}\) monotonically approaches the exact solution. VDAT can be exactly evaluated in the \(d=\infty\) multi-orbital Hubbard model using the self-consistent canonical discrete action theory (SCDA), which requires a self-consistency condition for the integer time Green's functions. Previous work demonstrates that \(\mathcal{N}=3\) accurately captures multi-orbital Mott/Hund physics at a cost similar to the Gutzwiller approximation. Here we employ a gauge constraint to automatically satisfy the self-consistency condition of the SCDA at \(\mathcal{N}=3\), yielding an even more efficient algorithm with enhanced numerical stability. We derive closed form expressions of the gauge constrained algorithm for the multi-orbital Hubbard model with general density-density interactions, allowing VDAT at \(\mathcal{N}=3\) to be straightforwardly applied to the seven orbital Hubbard model. We present results and a performance analysis using \(\mathcal{N}=2\) and \(\mathcal{N}=3\) for the SU(\(2\mathrm{N}_{\mathrm{orb}}\)) Hubbard model in \(d=\infty\) with \(\mathrm{N}_{\mathrm{orb}}=2-8\), and compare to numerically exact dynamical mean-field theory solutions where available. The developments in this work will greatly facilitate the application of VDAT at \(\mathcal{N}=3\) to strongly correlated electron materials. ## I Introduction The recently developed variational discrete action theory (VDAT) [5; 6] has emerged as a powerful tool to study the ground state of the multi-orbital Hubbard model [4], which can be considered as a minimal model for a wide class of strongly correlated electron materials [15; 17]. VDAT consists of two central components: the sequential product density matrix (SPD) ansatz and the discrete action theory to evaluate observables under the SPD. The accuracy of the SPD is controlled by an integer \(\mathcal{N}\), and the SPD monotonically approaches the exact solution for increasing \(\mathcal{N}\). In the context of the Hubbard model, the SPD recovers most well known variational wavefunctions [5]: \(\mathcal{N}=1\) recovers the Hartree-Fock wave function, \(\mathcal{N}=2\) recovers the Gutzwiller wave function [11; 12; 13], and \(\mathcal{N}=3\) recovers the Gutzwiller-Baeriswyl [22] and Baeriswyl-Gutzwiller wavefunctions [8]. The discrete action theory can be viewed as an integer time generalization of the imaginary time path integral, yielding an integer time generalization of the Green's function and Dyson equation [5]. For \(d=\infty\), the SPD can be exactly evaluated using the self-consistent canonical discrete action (SCDA) [5; 6]. VDAT within the SCDA offers a paradigm shift away from the dynamical mean-field theory (DMFT) [10; 17; 29], allowing the exact solution of the ground state properties of the \(d=\infty\) Hubbard model to be systematically approached within the wave function paradigm. The computational cost of VDAT grows with \(\mathcal{N}\), at an exponential scaling for an exact evaluation and a polynomial scaling for a numerical evaluation using Monte-Carlo, so rapid convergence with \(\mathcal{N}\) is important if VDAT is to be a practical alternative to DMFT. 
VDAT using \(\mathcal{N}=2,3,4\) has been applied to the single orbital Anderson impurity model on a ring [6], the \(d=\infty\) single orbital Hubbard model [6], and the \(d=\infty\) two orbital Hubbard model [4], and in all cases \(\mathcal{N}=3\) yields accurate results as compared to the numerically exact solutions. This success is particularly nontrivial in the two orbital problem, where complex local interactions including the Hubbard \(U\), Hund \(J\), and crystal field \(\Delta\) were studied over all parameter space. Therefore, VDAT within the SCDA at \(\mathcal{N}=3\) provides a minimal and accurate description of the two orbital Hubbard model, but with a computational cost that is comparable to \(\mathcal{N}=2\), which recovers the Gutzwiller approximation [13] (GA) and slave boson mean-field theories [3; 16; 24]. The fact that VDAT within the SCDA at \(\mathcal{N}=3\) resolves all the limitations of the Gutzwiller approximation and the slave boson mean-field theories without substantially increasing the computational cost motivates a deeper understanding of how the SCDA works. The SCDA provides a route for exactly evaluating the SPD in \(d=\infty\), and the SCDA can be viewed as the integer time analogue of DMFT [4; 5]. While DMFT maps the Hubbard model to a self-consistently determined Anderson impurity model, the SCDA maps the SPD to a self-consistently determined canonical discrete action (CDA), parametrized by the corresponding non-interacting integer time Green's function \(\mathcal{G}\), which implicitly depends on the variational parameters of the SPD. While the DMFT self-consistency condition only needs to be executed once, the SCDA self-consistency condition must be executed for every choice of variational parameters during the minimization. Previously, we proposed an approach to mitigate this issue by simultaneously minimizing the variational parameters and updating \(\mathcal{G}\), and demonstrated it to be efficient for the two band Hubbard model [4]. However, for some regions of parameter space, such as in the large polarization regime, a very small step size is needed to maintain numerical stability. Such problems are partially due to inaccuracies of \(\mathcal{G}\) within the iteration process, given that \(\mathcal{G}\) is only highly precise when the fixed point is reached. Therefore, it would be advantageous if the self-consistency condition of the SCDA could be automatically satisfied. Given that \(\mathcal{N}=2\) recovers the GA, it is interesting to recall how the GA automatically satisfies the SCDA self-consistency condition. As previously demonstrated [5], the GA has a prescribed form of \(\mathcal{G}\) which is fully determined by imposing that the non-interacting and interacting local single particle density matrices are identical, which we refer to as the Gutzwiller gauge. While the SCDA at \(\mathcal{N}=2\) can evaluate an SPD with arbitrary variational parameters, the GA is only valid when the SPD satisfies the Gutzwiller gauge. Therefore, the GA is a special case of the SCDA at \(\mathcal{N}=2\), but the Gutzwiller gauge does not limit the variational power of the SPD due to the gauge freedom of the SPD [4]. In summary, the GA provides an important lesson for numerically simplifying the SCDA at \(\mathcal{N}=2\) by exploiting the gauge freedom of the SPD, converting the problem of solving for \(\mathcal{G}\) into a constraint on the variational parameters of the SPD. 
In this paper, we demonstrate that the lessons of the GA can be generalized to \(\mathcal{N}=3\), which quantitatively captures the Mott and Hund physics of the multi-orbital Hubbard model [4]. In order to demonstrate the power of the gauge constrained implementation of the SCDA at \(\mathcal{N}=3\), we study the SU(2N\({}_{\rm orb}\)) Hubbard model in \(d=\infty\) with \(\mathrm{N}_{\rm orb}=2-8\). Our successful execution of these calculations demonstrates the viability of applying VDAT at \(\mathcal{N}=3\) to crystals bearing \(d\) or \(f\) electrons. Moreover, the SU(2N\({}_{\rm orb}\)) Hubbard model is interesting in its own right, given that experiments on ultracold atoms in an optical lattice can realize the SU(2N\({}_{\rm orb}\)) Hubbard model[23; 25; 26; 27; 14; 30]. Therefore, VDAT should serve as a reliable tool for understanding and interpreting such experimental measurements. The structure of this paper is as follows. Sec. II presents the gauge constrained algorithm of the SCDA at \(\mathcal{N}=3\), with Sec. II.1 providing a high level overview of the entire algorithm, including all key equations, while the remaining subsections provide detailed derivations. Sec. III presents results for the SU(2N\({}_{\rm orb}\)) Hubbard model in \(d=\infty\) with \(\mathrm{N}_{\rm orb}=2-8\). ## II Gauge Constrained Algorithm at \(\mathcal{N}\)=3 ### Overview The goal of this subsection is to provide an overview of the gauge constrained algorithm for the SCDA at \(\mathcal{N}=3\), and Secs. II.2, II.3, and II.4 will derive all details of the procedure. We begin by highlighting how the SCDA exactly evaluates the SPD in \(d=\infty\)[4; 5; 6]. Consider a fermionic lattice model having a Hamiltonian \[\hat{H}=\hat{K}+\hat{H}_{loc}=\sum_{k\alpha\sigma}\epsilon_{k\alpha\sigma}\hat {n}_{k\alpha\sigma}+\sum_{i}\hat{H}_{loc;i}, \tag{1}\] where \(i=1,\ldots,\mathrm{N}_{\rm site}\) enumerates over the lattice sites, \(k=1,\ldots,\mathrm{N}_{\rm site}\) enumerates over the \(k\)-points, \(\alpha=1,\ldots,\mathrm{N}_{\rm orb}\) enumerates over the orbitals, and \(\sigma\) enumerates over spin. The G-type SPD for \(\mathcal{N}=3\) can be motivated from the following variational wave function \[\exp\left(\sum_{k\alpha\sigma}\gamma_{k\alpha\sigma}\hat{n}_{k\alpha\sigma} \right)\prod_{i}\hat{P}_{i}\left(u\right)|\Psi_{0}\rangle, \tag{2}\] where \(\{\gamma_{k\alpha\sigma}\}\) is the set of non-interacting variational parameters, \(u=\{u_{i\Gamma}\}\) is the set of interacting variational parameters, \(\hat{P}_{i}\left(u\right)=\sum_{\Gamma}u_{i\Gamma}\hat{P}_{i\Gamma}\), and \(|\Psi_{0}\rangle\) is a non-interacting variational wavefunction; and both \(\{\gamma_{k\alpha\sigma}\}\) and \(\{u_{i\Gamma}\}\) are real numbers. The index \(\Gamma\) enumerates a set of many-body operators \(\{\hat{P}_{i\Gamma}\}\) local to site \(i\). The \(\{\hat{P}_{i\Gamma}\}\) used for an \(\hat{H}_{loc;i}\) containing only density-density type interactions is given in Eq. 31, while the general case is given in Eq. II.2. 
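As an illustration of how a density-density \(\hat{H}_{loc;i}\) acts in the occupation basis that the projectors \(\{\hat{P}_{i\Gamma}\}\) of Eq. 31 will later enumerate, the following sketch builds its diagonal over all \(2^{2\mathrm{N}_{\rm orb}}\) occupation patterns. This is a hedged illustration under stated assumptions, not part of the authors' implementation; for the SU(2N\({}_{\rm orb}\)) model studied later, the coupling matrix is simply \(U\) off the diagonal.

```python
import numpy as np
from itertools import product

def local_interaction_diagonal(U_matrix):
    """Diagonal of a density-density H_loc over the 2^(N_so) occupation patterns Gamma.

    U_matrix : (N_so, N_so) symmetric array of density-density couplings U_{ll'};
               its diagonal is ignored (no n_l n_l term).
    """
    n_so = U_matrix.shape[0]
    U_off = U_matrix - np.diag(np.diag(U_matrix))
    configs = np.array(list(product((0, 1), repeat=n_so)), dtype=float)
    # E(Gamma) = sum_{l < l'} U_{ll'} n_l n_l' for each occupation pattern
    return np.array([0.5 * cfg @ U_off @ cfg for cfg in configs])
```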
It will be important to rewrite the above wave function as a density matrix, yielding the G-type SPD [5] for \(\mathcal{N}=3\) \[\hat{\varrho}=\hat{\mathcal{P}}_{1}\hat{\mathcal{P}}_{2}\hat{\mathcal{P}}_{3}=\left(\hat{K}_{1}\hat{P}_{1}\right)\left(\hat{K}_{2}\hat{P}_{1}\right)\left(\hat{K}_{1}\right), \tag{3}\] where \(\hat{K}_{1}=\exp\left(\sum_{k\alpha\sigma}\gamma_{k\alpha\sigma}\hat{n}_{k\alpha\sigma}\right)\), \(\hat{P}_{1}=\prod_{i}\hat{P}_{i}\left(u\right)\), and \(\hat{K}_{2}=|\Psi_{0}\rangle\langle\Psi_{0}|\). Here we have chosen \(\hat{K}_{1}\) to be diagonal in \(k\alpha\sigma\), while the most general case is addressed in Ref. [4]. Evaluating expectation values under the SPD is highly nontrivial, and we have developed the discrete action theory [5] to formalize the problem in a manner which is amenable to systematic approximations. A key idea of the discrete action theory is the equivalence relation between an integer time correlation function and a corresponding expectation value in the compound space. An operator \(\hat{O}\) in the original space is promoted to the compound space with a given integer time index \(\tau\), denoted as \(\hat{O}^{(\tau)}\) [5]. For the total energy with \(\mathcal{N}=3\), this equivalence is given as \[\langle\hat{H}\rangle_{\hat{\varrho}}=\langle\hat{H}^{(\mathcal{N})}\rangle_{\hat{\rho}}, \tag{4}\] where \(\hat{\rho}=\hat{\rho}_{0}\prod_{i}\hat{P}_{i}\) is the discrete action of the SPD, \(\hat{\rho}_{0}=\hat{Q}\hat{K}_{1}^{(1)}\hat{K}_{2}^{(2)}\hat{K}_{1}^{(3)}\) is the non-interacting discrete action, \(\hat{Q}\) is the integer time translation operator [4; 5], and the interacting projector for site \(i\) is \[\hat{P}_{i}=\hat{P}_{i}^{(1)}(u)\hat{P}_{i}^{(2)}(u)=\sum_{\Gamma\Gamma^{\prime}}u_{i\Gamma}u_{i\Gamma^{\prime}}\hat{P}_{i\Gamma}^{(1)}\hat{P}_{i\Gamma^{\prime}}^{(2)}. \tag{5}\] An important previous result is that the expectation value in the compound space can be exactly evaluated for \(d=\infty\) using the self-consistent canonical discrete action theory (SCDA) [5; 6]. Given the common scenario of translation symmetry, the SCDA can be presented in terms of two auxiliary effective discrete actions parameterized by \(2\mathrm{N}_{\mathrm{orb}}\mathcal{N}\times 2\mathrm{N}_{\mathrm{orb}}\mathcal{N}\) matrices \(\mathbf{S}_{loc}\) and \(\mathbf{\mathcal{G}}\), given as \[\langle\hat{H}^{(\mathcal{N})}\rangle_{\hat{\rho}}=\langle\hat{K}^{(\mathcal{N})}\rangle_{\hat{\rho}_{K}}+\mathrm{N}_{\mathrm{site}}\langle\hat{H}^{(\mathcal{N})}_{loc;i}\rangle_{\hat{\rho}_{loc;i}}, \tag{6}\] \[\hat{\rho}_{K}=\hat{\rho}_{0}\exp\Big{(}-\sum_{i}\ln\mathbf{S}_{loc}^{T}\cdot\hat{\mathbf{n}}_{i}\Big{)}, \tag{7}\] \[\hat{\rho}_{loc;i}=\exp\Big{(}-\ln\big{(}\mathbf{\mathcal{G}}^{-1}-\mathbf{1}\big{)}^{T}\cdot\hat{\mathbf{n}}_{i}\Big{)}\hat{P}_{i}, \tag{8}\] where \([\hat{\mathbf{n}}_{i}]_{\alpha\sigma\tau,\alpha^{\prime}\sigma^{\prime}\tau^{\prime}}=\hat{a}_{i\alpha\sigma}^{\dagger(\tau)}\hat{a}_{i\alpha^{\prime}\sigma^{\prime}}^{(\tau^{\prime})}\), the dot product operation is defined as \(\mathbf{a}\cdot\hat{\mathbf{b}}\equiv\sum_{\ell\ell^{\prime}}[\mathbf{a}]_{\ell\ell^{\prime}}[\hat{\mathbf{b}}]_{\ell\ell^{\prime}}\), the discrete action \(\hat{\rho}_{K}\) is used to compute all single-particle integer time Green's functions, and \(\hat{\rho}_{loc;i}\) is used to compute all N-particle integer time Green's functions local to site \(i\).
The \(\mathbf{\mathcal{G}}\) and \(\mathbf{S}_{loc}\) must satisfy the following two self-consistency conditions \[\big{(}\mathbf{g}_{loc}^{-1}-\mathbf{1}\big{)}=\big{(}\mathbf{\mathcal{G}}^{- 1}-\mathbf{1}\big{)}\,\mathbf{S}_{loc}, \tag{9}\] \[\mathbf{g}_{loc}=\mathbf{g}_{loc}^{\prime}, \tag{10}\] where \(\mathbf{g}_{loc}=\langle\hat{\mathbf{n}}_{i}\rangle_{\hat{\rho}_{loc;i}}\) and \(\mathbf{g}_{loc}^{\prime}=\langle\hat{\mathbf{n}}_{i}\rangle_{\hat{\rho}_{K}}\). One challenge posed by the SCDA is that the self-consistency condition must be satisfied for a given choice of variational parameters, which makes the minimization over the variational parameters nontrivial. An efficient algorithm for VDAT within the SCDA was proposed for the multi-orbital Hubbard model for general \(\mathcal{N}\), referred to as the decoupled minimization algorithm, and implemented in the two orbital Hubbard model up to \(\mathcal{N}=4\)[4]. The decoupled minimization algorithm begins with an initial choice of variational parameters \(\{\gamma_{k\alpha\sigma}\},u\) and an initial choice for \(\mathbf{\mathcal{G}}\), which determines the discrete action \(\hat{\rho}_{loc;i}\) (Eq. 8) which yields \(\mathbf{g}_{loc}\). Using the discrete Dyson equation (Eq. 9), \(\mathbf{S}_{loc}\) can be computed from \(\mathbf{\mathcal{G}}\) and \(\mathbf{g}_{loc}\). Then the discrete action \(\hat{\rho}_{K}\) (Eq. 6) can be used to compute \(\mathbf{g}_{loc}^{\prime}\). Using \(\mathbf{g}_{loc}=\mathbf{g}_{loc}^{\prime}\) in the discrete Dyson equation, a new \(\mathbf{\mathcal{G}}\) can be obtained. During this self-consistency cycle, relevant first order derivatives with regard to \(\{\gamma_{k\alpha\sigma}\},\,u\), and \(\mathbf{\mathcal{G}}\) can be computed and two effective models can be constructed to update \(\{\gamma_{k\alpha\sigma}\}\) and \(u\). This entire procedure is iterated until \(\{\gamma_{k\alpha\sigma}\}\), \(u\), and \(\mathbf{\mathcal{G}}\) are self-consistent. In a given iteration before reaching self-consistency, the energy and its gradients contain errors due to a deviation from the SCDA self-consistency condition, which can yield slow convergence in some regions of parameter space. Automatically satisfying the SCDA would yield a dramatic advantage when minimizing over the variational parameters. In previous work [5], we demonstrated that the gauge freedom of the SPD can be used to automatically satisfy the SCDA self-consistency condition at \(\mathcal{N}=2\), which recovers the Gutzwiller approximation, and here we extend this line of reasoning to \(\mathcal{N}=3\). For simplicity, we use a restricted form of the SPD, where the kinetic projector is diagonal in k-space and \(\hat{P}_{i}\left(u\right)\) does not introduce off-diagonal terms at the level of the single particle density matrix. Therefore, \(\mathbf{\mathcal{G}}\), \(\mathbf{S}_{loc}\), and \(\mathbf{g}_{loc}\) all have the form \([\mathbf{g}_{loc}]_{\alpha\sigma\tau,\alpha^{\prime}\sigma^{\prime}\tau^{\prime}}= \delta_{\alpha\alpha^{\prime}}\delta_{\sigma\sigma^{\prime}}[\mathbf{g}_{loc}]_{ \alpha\sigma\tau,\alpha\sigma\tau^{\prime}}\), and the integer time Green's functions of each spin orbital are described by a \(3\times 3\) matrix. 
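For concreteness, the iterative cycle described above (which the gauge constrained algorithm of this work is designed to bypass) can be sketched as a fixed-point loop around the discrete Dyson equation. The helper callables below are placeholders for the evaluations under Eqs. 7 and 8 with the kinetic parameters held fixed inside their closures; this is a schematic NumPy sketch, not the authors' Julia implementation.

```python
import numpy as np

def scda_fixed_point(G0, u, g_loc_from, g_loc_prime_from,
                     tol=1e-10, max_iter=500, mix=0.5):
    """Iterate the SCDA self-consistency for a fixed set of variational parameters.

    G0               : initial guess for the non-interacting integer time Green's function.
    u                : interacting variational parameters.
    g_loc_from       : callable (G, u) -> g_loc, evaluated under rho_loc (Eq. 8).
    g_loc_prime_from : callable (S_loc) -> g_loc', evaluated under rho_K (Eq. 7).
    """
    G = G0.copy()
    I = np.eye(G.shape[0])
    for _ in range(max_iter):
        g_loc = g_loc_from(G, u)
        # Discrete Dyson equation (Eq. 9): (g_loc^-1 - 1) = (G^-1 - 1) S_loc
        S_loc = np.linalg.solve(np.linalg.inv(G) - I, np.linalg.inv(g_loc) - I)
        g_loc_prime = g_loc_prime_from(S_loc)
        # Invert the Dyson equation with g_loc' to propose a new G (Eq. 10 holds at the fixed point)
        G_new = np.linalg.inv(I + (np.linalg.inv(g_loc_prime) - I) @ np.linalg.inv(S_loc))
        if np.max(np.abs(G_new - G)) < tol:
            return G_new, S_loc
        G = (1 - mix) * G + mix * G_new  # linear mixing for stability
    return G, S_loc
```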
We begin by partitioning a local integer time \(3\times 3\) matrix \(\mathbf{M}_{\alpha\sigma}\) for a given spin orbital into submatrices as: \[\mathbf{M}_{\alpha\sigma} =\left(\begin{array}{cc|c}[\mathbf{M}_{\alpha\sigma}]_{11}&[\mathbf{M}_{\alpha\sigma}]_{12}&[\mathbf{M}_{\alpha\sigma}]_{13}\\ [\mathbf{M}_{\alpha\sigma}]_{21}&[\mathbf{M}_{\alpha\sigma}]_{22}&[\mathbf{M}_{\alpha\sigma}]_{23}\\ \hline[\mathbf{M}_{\alpha\sigma}]_{31}&[\mathbf{M}_{\alpha\sigma}]_{32}&[\mathbf{M}_{\alpha\sigma}]_{33}\end{array}\right) \tag{11}\] \[=\begin{pmatrix}\mathbf{M}_{\alpha\sigma;A}&\mathbf{M}_{\alpha\sigma;B}\\ \mathbf{M}_{\alpha\sigma;C}&\mathbf{M}_{\alpha\sigma;D}\end{pmatrix}, \tag{12}\] where \(\mathbf{M}\) can be \(\mathbf{\mathcal{G}}\), \(\mathbf{S}_{loc}\), \(\mathbf{g}_{loc}\), and \(\mathbf{g}_{loc}^{\prime}\). The main idea is to satisfy the self-consistency condition \(\mathbf{g}_{loc}=\mathbf{g}_{loc}^{\prime}\) in two stages: first for the A block and then for the B, C, and D blocks. We proceed by outlining the logic and key equations of the first stage, which is treated in detail in Section II.2 and II.3. The first stage begins by considering \(\mathbf{\mathcal{G}}_{\alpha\sigma;A}\), a \(2\times 2\) matrix, which can be parametrized in terms of the single variable \(\mathcal{G}_{\alpha\sigma;12}\) using the gauge freedom of the SPD, and \(\mathcal{G}_{\alpha\sigma;12}\) should now be regarded as an independent variational parameter. The \(\mathbf{g}_{loc;A}\) and \(\mathbf{S}_{loc}\) can be determined as a function of the sets \(\mathcal{G}_{12}=\{\mathcal{G}_{\alpha\sigma;12}\}\) and \(u=\{u_{i\Gamma}\}\), though we suppress the function arguments \(\mathcal{G}_{12}\) and \(u\) for brevity. The local density is also a function of \(\mathcal{G}_{12}\) and \(u\), defined as \(n_{\alpha\sigma}(\mathcal{G}_{12},u)=\left[g_{loc;\alpha\sigma}\right]_{22}\). For a given \(\alpha\sigma\), the \(\mathbf{S}_{loc,\alpha\sigma}\) can be parametrized using \(S_{\alpha\sigma;11}\) and \(S_{\alpha\sigma;12}\), and we can explicitly reparametrize the kinetic variational parameters as \(n_{k\alpha\sigma;0}\) and \(n_{k\alpha\sigma}\), where \(n_{k\alpha\sigma;0}=0,1\) is the single particle density matrix of \(\hat{K}_{2}\) and \(n_{k\alpha\sigma}\) is the single particle density matrix of the SPD. It will be proven that \(n_{k\alpha\sigma;0}\) determines the Fermi surface of both the interacting and non-interacting SPD, and therefore it will be useful to define two regions of momentum space, denoted as \(<\) or \(>\), where \(<\) denotes the set of \(k\) points with \(n_{k\alpha\sigma;0}=1\) and \(>\) indicates \(n_{k\alpha\sigma;0}=0\); and we assume \(\int dk=1\). For each region \(X\in\{<,>\}\) of a given spin orbital \(\alpha\sigma\), it will be useful to define the charge transfer \(\Delta_{X\alpha\sigma}\) and charge fluctuation \(\mathcal{A}_{X\alpha\sigma}\) as \[\Delta_{X\alpha\sigma} =\int_{X}dk\left(n_{k\alpha\sigma;0}-n_{k\alpha\sigma}\right), \tag{13}\] \[\mathcal{A}_{X\alpha\sigma} =\int_{X}dk\sqrt{n_{k\alpha\sigma}\left(1-n_{k\alpha\sigma}\right)}, \tag{14}\] which measure the influence of the local interaction on the given spin orbital.
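In \(d=\infty\) the \(k\)-integrals in Eqs. 13-14 reduce to energy integrals over a density of states. The following is a minimal numerical sketch, assuming the semicircular DOS of the \(d=\infty\) Bethe lattice and a momentum distribution already tabulated on a sorted energy grid (so that each region is a contiguous slice); the function names are illustrative.

```python
import numpy as np

def bethe_dos(eps, t=0.5):
    """Semicircular DOS of the d=infinity Bethe lattice, rho(e) = sqrt(4t^2 - e^2) / (2 pi t^2)."""
    return np.sqrt(np.maximum(4 * t ** 2 - eps ** 2, 0.0)) / (2 * np.pi * t ** 2)

def charge_transfer_and_fluctuation(eps, n_k, n_k0, t=0.5):
    """Delta_X and A_X of Eqs. 13-14, for X = '<' (n_k0 = 1) and X = '>' (n_k0 = 0)."""
    w = bethe_dos(eps, t)  # measure converting int dk into int d(eps) rho(eps)
    results = {}
    for name, region in (("<", n_k0 > 0.5), (">", n_k0 < 0.5)):
        delta = np.trapz(w[region] * (n_k0[region] - n_k[region]), eps[region])
        fluct = np.trapz(w[region] * np.sqrt(n_k[region] * (1.0 - n_k[region])), eps[region])
        results[name] = (delta, fluct)
    return results
```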
Given that \(\mathbf{g}_{loc,\alpha\sigma;A}^{\prime}\) is determined by \(S_{\alpha\sigma;11}\), \(S_{\alpha\sigma;12}\), \(\{n_{k\alpha\sigma}\}\), and \(\{n_{k\alpha\sigma;0}\}\), the self-consistency condition \(\mathbf{g}_{loc;A}=\mathbf{g}_{loc;A}^{\prime}\) becomes three linear constraints on \(n_{k\alpha\sigma;0}\) and \(n_{k\alpha\sigma}\), given as \[\int n_{k\alpha\sigma;0}dk =n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right), \tag{15}\] \[\int n_{k\alpha\sigma}dk =n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right), \tag{16}\] \[\Delta_{<\alpha\sigma} =\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right), \tag{17}\] where \[n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\equiv\left[\boldsymbol{g}_{loc;\alpha\sigma}\right]_{22}, \tag{18}\] \[\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\equiv\frac{S_{\alpha\sigma;12}}{S_{\alpha\sigma;11}}\left[\boldsymbol{g}_{loc;\alpha\sigma}\right]_{12}. \tag{19}\] It should be noticed that the three constraints imply \(\Delta_{<\alpha\sigma}=-\Delta_{>\alpha\sigma}\). The three constraints have a clear interpretation. Equation 15 indicates that the local density of the non-interacting reference system \(\hat{K}_{2}\) is constrained to \(\left[\boldsymbol{g}_{loc;\alpha\sigma}\right]_{22}\). Using Equation 16, we see that \(\left[\boldsymbol{g}_{loc;\alpha\sigma}\right]_{22}=\left[\boldsymbol{g}_{loc;\alpha\sigma}\right]_{33}\), dictating that the Fermi volume is equal to the local density obtained from the SPD. The result of these two constraints can be viewed as the wave function analogue of the Luttinger theorem [20]. The third constraint, Eq. 17, reveals how the local interaction influences the density distribution. When the interacting projector is close to the identity, \(S_{\alpha\sigma;12}\) approaches zero while \(S_{\alpha\sigma;11}\) and \(\left[\boldsymbol{g}_{loc;\alpha\sigma}\right]_{12}\) approach finite values, dictating that \(\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\) approaches zero and therefore \(n_{k\alpha\sigma}\) approaches \(n_{k\alpha\sigma;0}\). Alternatively, when the interacting projector deviates from the identity, \(\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\) increases and imposes a deviation of \(n_{k\alpha\sigma}\) away from \(n_{k\alpha\sigma;0}\). In summary, the first stage enforces self-consistency for the A block, determining the kinetic energy. In the second stage, \(\boldsymbol{g}^{\prime}_{loc,\alpha\sigma}\) is fully determined by \(S_{\alpha\sigma;11},\ S_{\alpha\sigma;12},\ n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\), \(\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\), \(\mathcal{A}_{<\alpha\sigma}\), and \(\mathcal{A}_{>\alpha\sigma}\), and therefore \(\boldsymbol{g}_{loc}=\boldsymbol{g}^{\prime}_{loc}\) can determine \(\boldsymbol{\mathcal{G}}\) on the B, C, and D blocks, which is derived in Section II.4. In summary, the self-consistency has been automatically satisfied and the local energy determined. In conclusion, we have an explicit functional form for the total energy of the SPD parametrized by \(\mathcal{G}_{12}\), \(u\), and \(n_{k\alpha\sigma}\), given as \[\sum_{\alpha\sigma}\int dk\epsilon_{k\alpha\sigma}n_{k\alpha\sigma}+E_{loc}\left(\mathcal{G}_{12},u,\mathcal{A}\right), \tag{20}\] where \(\mathcal{A}=\{\mathcal{A}_{<\alpha\sigma},\mathcal{A}_{>\alpha\sigma}\}\), \(n_{k\alpha\sigma}\) is constrained by Equations 16 and 17, and the volume of the fermi sea is constrained by Equation 15.
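The two targets of Eqs. 18-19 are read directly off the local quantities. A small sketch, assuming the \(3\times 3\) local Green's function and the self-energy elements for one spin orbital are available as NumPy objects:

```python
def local_targets(g_loc, S11, S12):
    """Local density and charge transfer targets of Eqs. 18-19 for one spin orbital.

    g_loc    : 3x3 local integer time Green's function (0-based indexing below).
    S11, S12 : nontrivial elements of the integer time self-energy S_loc.
    """
    n_target = g_loc[1, 1]                    # [g_loc]_{22} in the paper's 1-based notation
    delta_target = (S12 / S11) * g_loc[0, 1]  # (S12 / S11) [g_loc]_{12}
    return n_target, delta_target
```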
The total energy has been expressed as a functional of \(\mathcal{G}_{12}\), \(u\), \(\{n_{k\alpha\sigma}\}\), and \(\{n_{k\alpha\sigma;0}\}\). This algorithm can be viewed as a nonlinear reparametrization of the original variational parameters \(\left|\Psi_{0}\right\rangle\), \(u\), and \(\{\gamma_{k\alpha\sigma}\}\), where \(\{n_{k\alpha\sigma;0}\}\) is a reparametrization of \(\left|\Psi_{0}\right\rangle\), \(\{n_{k\alpha\sigma}\}\) is a reparametrization of part of \(\{\gamma_{k\alpha\sigma}\}\), and \(\mathcal{G}_{12}\) can be viewed as a set of variational parameters which reparametrizes the remaining part of \(\{\gamma_{k\alpha\sigma}\}\) through condition 17. It should be noted that \(\{n_{k\alpha\sigma}\}\) only influences the local interaction energy through \(\mathcal{A}\), and is constrained by \(n_{\alpha\sigma}(\mathcal{G}_{12},u)\) and \(\Delta_{\alpha\sigma}(\mathcal{G}_{12},u)\), and therefore to find an optimized \(n_{k\alpha\sigma}\) in the region \(X\in\{<,>\}\) of spin orbital \(\alpha\sigma\), two Lagrange multipliers \(a_{X\alpha\sigma}\) and \(b_{X\alpha\sigma}\) can be introduced \[F_{X\alpha\sigma}=\int_{X}dk\Big{(} \epsilon_{k\alpha\sigma}n_{k\alpha\sigma}-a_{X\alpha\sigma}n_{k \alpha\sigma}\] \[-b_{X\alpha\sigma}\sqrt{n_{k\alpha\sigma}\left(1-n_{k\alpha\sigma} \right)}\Big{)}, \tag{21}\] and we can solve for \(n_{k\alpha\sigma}\) from \(\frac{\delta F_{X\alpha\sigma}}{\delta n_{k\alpha\sigma}}\big{|}_{k\in X}=0\), resulting in \[n_{k\alpha\sigma}\big{|}_{k\in X}=\frac{1}{2}\left(1+\frac{a_{X\alpha\sigma}- \epsilon_{k\alpha\sigma}}{\sqrt{\left(a_{X\alpha\sigma}-\epsilon_{k\alpha \sigma}\right)^{2}+b_{X\alpha\sigma}^{2}}}\right). \tag{22}\] Therefore, the true independent variational parameters for the algorithm are \(\mathcal{G}_{12}\), \(u\), and \(\boldsymbol{b}=\{b_{X\alpha\sigma}\}\), given that \(\boldsymbol{a}=\{a_{X\alpha\sigma}\}\) can be determined as a function of \(\mathcal{G}_{12}\), \(u\), and \(\boldsymbol{b}\) through Eqs. 16 and 17. Finally, the ground state energy can be determined as \[\mathcal{E}=\min_{\mathcal{G}_{12},u,\boldsymbol{b}}\Bigl{(}\int dk \epsilon_{k\alpha\sigma}n_{k\alpha\sigma}\left(\boldsymbol{a},\boldsymbol{b} \right)+E_{loc}\left(\mathcal{G}_{12},u,\boldsymbol{b}\right)\Bigr{)}, \tag{23}\] where the functional dependencies for \(n_{k\alpha\sigma}\left(\boldsymbol{a},\boldsymbol{b}\right)\) are defined in Eq. 22 and \(E_{loc}\left(\mathcal{G}_{12},u,\boldsymbol{b}\right)\) is detailed in the remaining sections. In this work, we used the Nelder-Mead algorithm [21] to perform the minimization in Eq. 23, which is a gradient free algorithm. In some cases, it may be preferable to solve a Hamiltonian with fixed density \(n_{\alpha\sigma}\), and this procedure is outlined in Appendix A. It is useful to give some practical guidelines for the efficiency of the gauge constrained algorithm, which can roughly be broken down into two factors. First, there is the cost of evaluating expectation values under \(\hat{\rho}_{loc;i}\) (i.e. Eq. 29), which will scale exponentially with the number of spin orbitals. Second, there is the number of independent variational parameters, which scales exponentially in the absence of symmetry. The first factor is roughly independent of the symmetry of the Hamiltonian \(\hat{H}\) which is being solved, while the second factor strongly depends on the symmetry. 
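A compact sketch of Eq. 22 and of the outer gradient-free minimization of Eq. 23 is given below. The objective callable is a placeholder for the full energy evaluation of Secs. II.2-II.4 (with the multipliers \(\boldsymbol{a}\) determined internally from Eqs. 16-17), and the tolerance values are illustrative rather than those used for the published results.

```python
import numpy as np
from scipy.optimize import minimize

def n_k(eps, a, b):
    """Optimal momentum distribution within one region X (Eq. 22)."""
    return 0.5 * (1.0 + (a - eps) / np.sqrt((a - eps) ** 2 + b ** 2))

def minimize_ground_state(energy_of, x0):
    """Outer variational minimization of Eq. 23 with the gradient-free Nelder-Mead method.

    energy_of : callable mapping a flat vector of the independent parameters
                (G12, u, b) to the total energy of Eq. 23.
    x0        : initial guess for the flat parameter vector.
    """
    result = minimize(energy_of, x0, method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 20000})
    return result.x, result.fun
```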
However, it is always possible to restrict the number of variational parameters in order to control the computational cost of the second factor, maintaining an upper bound for the total energy compared to the full variational minimization. Therefore, there are numerous avenues for engineering a minimal parametrization of the space of variational parameters. In the present paper, we study the SU(2N\({}_{\rm orb}\)) Hubbard model, where the high local symmetry results in a linear scaling for the number of variational parameters, and therefore the first factor completely dominates the computational cost.
### Evaluating observables within the local A-block
Here we will elucidate why the block structure introduced in Eq. 12 is the starting point for the gauge constrained algorithm. We begin by explaining why \(\mathbf{\mathcal{G}}_{\alpha\sigma;A}\) is the only block that needs to be considered when determining \(\mathbf{S}_{loc}\). Given that \(\hat{P}\) only acts on the first and second integer time steps, \(\mathbf{S}_{loc}\) only has nontrivial elements on the A block, which are determined by \(\mathbf{\mathcal{G}}_{\alpha\sigma;A}\) and \(u\) (see Section V.B in Ref. [5] for further background). Therefore, only the form of \(\mathbf{\mathcal{G}}_{\alpha\sigma;A}\) needs to be specified to initiate the algorithm. We previously demonstrated that the gauge freedom of the SPD allows the following simple form [4] \[\mathbf{\mathcal{G}}_{\alpha\sigma;A}=\begin{pmatrix}\frac{1}{2}&\mathcal{G}_{\alpha\sigma;12}\\ -\mathcal{G}_{\alpha\sigma;12}&\frac{1}{2}\end{pmatrix}, \tag{24}\] where \(\mathcal{G}_{\alpha\sigma;\tau\tau^{\prime}}=[\mathbf{\mathcal{G}}_{\alpha\sigma}]_{\tau\tau^{\prime}}\) and \(\mathcal{G}_{\alpha\sigma;12}\in[0,1/2]\). Since \(\mathbf{\mathcal{G}}_{\alpha\sigma;A}\) is completely determined, any observables within the local A block can now be explicitly determined. For any operator \(\hat{O}\) local to site \(i\), the expectation value under \(\hat{\varrho}_{loc;i}\) can be rewritten in terms of expectation values of the non-interacting part of \(\hat{\varrho}_{loc;i}\) as \[\langle\hat{O}\rangle_{\hat{\varrho}_{loc;i}}=\frac{\langle\hat{P}_{i}\hat{O}\rangle_{\hat{\varrho}_{loc;i,0}}}{\langle\hat{P}_{i}\rangle_{\hat{\varrho}_{loc;i,0}}}, \tag{25}\] where \[\hat{\varrho}_{loc;i,0}\equiv\exp(-\ln\left(\mathbf{\mathcal{G}}^{-1}-\mathbf{1}\right)^{T}\cdot\hat{\mathbf{n}}_{i}). \tag{26}\] Using the form of \(\hat{P}_{i}\) in Eq. 5, we have \[\langle\hat{P}_{i}\hat{O}\rangle_{\hat{\varrho}_{loc;i,0}}=u^{T}(\hat{O})_{u}u, \tag{27}\] where \(u=\left(u_{i1},\cdots,u_{iN_{\Gamma}}\right)^{T}\) is an \(N_{\Gamma}\)-element real vector, \(N_{\Gamma}\) is the number of local projectors, and \((\hat{O})_{u}\) is an \(N_{\Gamma}\times N_{\Gamma}\) matrix with elements \[[(\hat{O})_{u}]_{\Gamma\Gamma^{\prime}}=\langle\hat{P}_{i\Gamma}^{(1)}\hat{P}_{i\Gamma^{\prime}}^{(2)}\hat{O}\rangle_{\hat{\varrho}_{loc;i,0}}. \tag{28}\] It should be emphasized that the subscript in \((\hat{O})_{u}\) solely indicates that this matrix and the vector \(u\) are in the same representation, and the elements of \((\hat{O})_{u}\) defined in Eq. 28 are not dependent on the values of \(u_{i\Gamma}\); a different representation which is useful for constraining the density is presented in Appendix A. The expectation value of \(\hat{O}\) under \(\hat{\varrho}_{loc;i}\) is given as \[\langle\hat{O}\rangle_{\hat{\varrho}_{loc;i}}=\frac{u^{T}(\hat{O})_{u}u}{u^{T}\left(\hat{1}\right)_{u}u}.
\tag{29}\] For example, the local integer time Green's function can be computed as \[[\mathbf{g}_{loc;\alpha\sigma}]_{\tau\tau^{\prime}}=\frac{u^{T}(\hat{\bar{a}}_{ \alpha\sigma}^{\dagger}\hat{a}_{\alpha\sigma}^{(\tau^{\prime})})_{u}u}{u^{T} \left(\hat{1}\right)_{u}u}. \tag{30}\] In the following, we present key formulas to evaluate equation 28. Given that we have restricted the SPD to be diagonal, the local projectors can be chosen as [4] \[\hat{P}_{i\Gamma}=\prod_{\alpha\sigma}\left(\delta_{\Gamma_{\alpha\sigma},0}( 1-\hat{n}_{\alpha\sigma})+\delta_{\Gamma_{\alpha\sigma},1}\hat{n}_{\alpha \sigma}\right), \tag{31}\] where \(\Gamma_{\alpha\sigma}\in\{0,1\}\) and are determined from the binary relation \(\left(\Gamma_{1\uparrow}\Gamma_{1\downarrow}\ldots\Gamma_{N_{\text{sub}} \uparrow}\Gamma_{N_{\text{sub}}\downarrow}\right)_{2}=\Gamma-1\). The matrix elements of \(\left(\hat{1}\right)_{u}\) are given as \[\left[\left(\hat{1}\right)_{u}\right]_{\Gamma\Gamma^{\prime}}= \prod_{\alpha\sigma}p_{\alpha\sigma}\left(\Gamma_{\alpha\sigma},\Gamma^{ \prime}_{\alpha\sigma}\right), \tag{32}\] \[p_{\alpha\sigma}\left(\Gamma_{\alpha\sigma},\Gamma^{\prime}_{ \alpha\sigma}\right)=\frac{1}{4}+\left(-1\right)^{\Gamma_{\alpha\sigma}+\Gamma^ {\prime}_{\alpha\sigma}}\mathcal{G}_{\alpha\sigma;12}^{2}. \tag{33}\] Single particle operators are evaluated as \[\left[(\hat{\bar{a}}_{\alpha\sigma}^{\dagger(\tau)}\hat{\bar{a}}_ {\alpha\sigma}^{\left(\tau^{\prime}\right)})_{u}\right]_{\Gamma\Gamma^{\prime}} =g_{\alpha\sigma}^{\tau\tau^{\prime}}\left(\Gamma_{\alpha\sigma}, \Gamma^{\prime}_{\alpha\sigma}\right)\] \[\times\prod_{\alpha^{\prime}\sigma^{\prime}\neq\alpha\sigma}p_{ \alpha^{\prime}\sigma^{\prime}}\left(\Gamma_{\alpha^{\prime}\sigma^{\prime}}, \Gamma^{\prime}_{\alpha^{\prime}\sigma^{\prime}}\right), \tag{34}\] where \[g_{\alpha\sigma}^{\tau\tau^{\prime}}\left(\Gamma_{\alpha\sigma}, \Gamma^{\prime}_{\alpha\sigma}\right)=p_{\alpha\sigma}\left(\Gamma_{\alpha \sigma},\Gamma^{\prime}_{\alpha\sigma}\right)\mathcal{G}_{\alpha\sigma;\tau\tau ^{\prime}}+\left(-1\right)^{\Gamma_{\alpha\sigma}+\Gamma^{\prime}_{\alpha \sigma}}\] \[\times\left(\left(\frac{1}{2}\left(-1\right)^{\Gamma^{\prime}_{ \alpha\sigma}-1}\mathcal{G}_{\alpha\sigma;1\tau^{\prime}}-\mathcal{G}_{\alpha \sigma;12}\mathcal{G}_{\alpha\sigma;2\tau^{\prime}}\right)\right(\delta_{1,\tau }-\mathcal{G}_{\alpha\sigma;\tau 1})\] \[+\left(\frac{1}{2}\left(-1\right)^{\Gamma_{\alpha\sigma}-1} \mathcal{G}_{\alpha\sigma;2\tau^{\prime}}+\mathcal{G}_{\alpha\sigma;12}\mathcal{G}_{ \alpha\sigma;1\tau^{\prime}}\right)(\delta_{2,\tau}-\mathcal{G}_{\alpha\sigma; \tau 2})\right). 
\tag{35}\] Any two-particle correlation function of the form below is given as \[\left[\left(\hat{\bar{a}}_{\alpha_{1}\sigma_{1}}^{\dagger(\tau_{1})}\hat{\bar{a}}_{\alpha_{1}\sigma_{1}}^{(\tau_{1}^{\prime})}\hat{\bar{a}}_{\alpha_{2}\sigma_{2}}^{\dagger(\tau_{2})}\hat{\bar{a}}_{\alpha_{2}\sigma_{2}}^{(\tau_{2}^{\prime})}\right)_{u}\right]_{\Gamma\Gamma^{\prime}}=g_{\alpha_{1}\sigma_{1}}^{\tau_{1}\tau_{1}^{\prime}}\left(\Gamma_{\alpha_{1}\sigma_{1}},\Gamma^{\prime}_{\alpha_{1}\sigma_{1}}\right)\] \[\times g_{\alpha_{2}\sigma_{2}}^{\tau_{2}\tau_{2}^{\prime}}\left(\Gamma_{\alpha_{2}\sigma_{2}},\Gamma^{\prime}_{\alpha_{2}\sigma_{2}}\right)\prod_{\alpha^{\prime}\sigma^{\prime}\neq\alpha_{1}\sigma_{1},\alpha_{2}\sigma_{2}}p_{\alpha^{\prime}\sigma^{\prime}}\left(\Gamma_{\alpha^{\prime}\sigma^{\prime}},\Gamma^{\prime}_{\alpha^{\prime}\sigma^{\prime}}\right), \tag{36}\] where \(\alpha_{1}\sigma_{1}\neq\alpha_{2}\sigma_{2}\). In Appendix B, we outline how to treat a general interacting projector. In summary, we have provided explicit formulas for evaluating local quantities up to the two particle level, which is sufficient to execute the algorithm. It should be emphasized that these expressions for local observables are valid outside of the A block, but require complete knowledge of \(\mathbf{\mathcal{G}}\) (e.g. see Eq. 35). Normally, evaluating expectation values under \(\hat{\varrho}_{loc;i}\) (i.e. Eq. 29) will be the rate limiting factor in the SCDA, and given that \(N_{\Gamma}\) scales exponentially with the number of spin orbitals, the overall computational cost will scale exponentially. There are two possible routes to mitigate this exponential scaling. First, one could reduce the number of projectors, though this must be done carefully as it will limit the variational freedom. Second, one may use Monte Carlo to evaluate Eq. 29. We now proceed to evaluate \(\mathbf{S}_{loc,\alpha\sigma}\). Given the choice of \(\mathbf{\mathcal{G}}_{\alpha\sigma;A}\) and using equations 32 and 34, we find that \(\mathbf{g}_{loc,\alpha\sigma;A}\) has the following form \[\mathbf{g}_{loc,\alpha\sigma;A}=\begin{pmatrix}n_{\alpha\sigma}&g_{\alpha\sigma;12}\\ -g_{\alpha\sigma;12}&n_{\alpha\sigma}\end{pmatrix}, \tag{37}\] where \(n_{\alpha\sigma}\) and \(g_{\alpha\sigma;12}\) are functions of \(\mathcal{G}_{12}\) and \(u\).
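The local expectation values entering these formulas reduce to bilinear forms in \(u\). The following sketch implements Eqs. 32-33 and the ratio of Eq. 29 for the diagonal projectors of Eq. 31; it enumerates all \(2^{N_{\rm so}}\) occupation patterns explicitly, so it exhibits exactly the exponential scaling discussed above. It is an illustration only, not the authors' implementation.

```python
import numpy as np
from itertools import product

def identity_matrix_u(G12):
    """Matrix (1)_u of Eqs. 32-33 for N_so spin orbitals with diagonal projectors (Eq. 31).

    G12 : array of length N_so holding the values G_{alpha sigma;12}.
    """
    N_so = len(G12)
    configs = list(product((0, 1), repeat=N_so))   # each Gamma is an occupation pattern
    M = np.ones((len(configs), len(configs)))
    for a, G in enumerate(G12):
        for i, gam in enumerate(configs):
            for j, gam_p in enumerate(configs):
                M[i, j] *= 0.25 + (-1) ** (gam[a] + gam_p[a]) * G ** 2
    return M

def local_expectation(u, O_u, one_u):
    """Expectation value of Eq. 29: <O> = u^T (O)_u u / u^T (1)_u u."""
    return (u @ O_u @ u) / (u @ one_u @ u)
```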
Given that the local interacting projector only acts on the \(A\) block, the discrete Dyson equation simplifies to \[\left(\mathbf{g}_{loc,\alpha\sigma;A}^{-1}-\mathbf{1}\right)=\left(\mathbf{\mathcal{G}}_{ \alpha\sigma;A}^{-1}-\mathbf{1}\right)\mathbf{S}_{loc,\alpha\sigma;A}, \tag{38}\] which yields an integer time self-energy of the form \[\mathbf{S}_{loc,\alpha\sigma}=\left(\begin{array}{ccc}S_{\alpha\sigma;11}&S_{ \alpha\sigma;12}&0\\ -S_{\alpha\sigma;12}&S_{\alpha\sigma;11}&0\\ 0&0&1\end{array}\right), \tag{39}\] where \[S_{\alpha\sigma;11}= \frac{1}{\left(4\mathcal{G}_{\alpha\sigma,1,2}^{2}+1\right) \left(g_{\alpha\sigma,1,2}^{2}+n_{\alpha\sigma}^{2}\right)}\] \[\times\Big{(}-g_{\alpha\sigma,1,2}^{2}+4g_{\alpha\sigma,1,2} \mathcal{G}_{\alpha\sigma,1,2}-n_{\alpha\sigma}^{2}+n_{\alpha\sigma}\] \[+4\mathcal{G}_{\alpha\sigma,1,2}^{2}\left(g_{\alpha\sigma,1,2}^{2 }+\left(n_{\alpha\sigma}-1\right)n_{\alpha\sigma}\right)\Big{)}, \tag{40}\] \[S_{\alpha\sigma;12}= \frac{1}{\left(4\mathcal{G}_{\alpha\sigma,1,2}^{2}+1\right) \left(g_{\alpha\sigma,1,2}^{2}+n_{\alpha\sigma}^{2}\right)}\] \[\times\Big{(}g_{\alpha\sigma,1,2}\left(4\mathcal{G}_{\alpha \sigma,1,2}\left(\mathcal{G}_{\alpha\sigma,1,2}-g_{\alpha\sigma,1,2}\right)-1\right)\] \[-4\left(n_{\alpha\sigma}-1\right)n_{\alpha\sigma}\mathcal{G}_{ \alpha\sigma,1,2}\Big{)}. \tag{41}\] In summary, Eqns. 40 and 41 express the local integer time self-energy as a function of \(\mathcal{G}_{12}\) and \(u\). ### Parametrization of the integer time lattice Green's function and self-consistency of the A-block In the preceding section, we determined \(\mathbf{S}_{loc}\), which completely determines \(\hat{\rho}_{K}\) via Eq. 7, allowing the computation of \(\mathbf{g}_{k\alpha\sigma}=\langle\hat{\mathbf{n}}_{k\alpha\sigma}\rangle_{\hat{E}_{K}}\). We will demonstrate that \(\mathbf{g}_{k\alpha\sigma}\) can be written analytically in terms of \(\gamma_{k\alpha\sigma}\), the expectation value \(n_{k\alpha\sigma;0}=\langle\hat{n}_{k\alpha\sigma}\rangle_{\hat{K}_{2}}\), and \(\mathbf{S}_{loc,\alpha\sigma}\). It is natural to reparametrize \(\gamma_{k\alpha\sigma}\) using \(\lambda_{k\alpha\sigma}=\langle\hat{n}_{k\alpha\sigma}\rangle_{\hat{K}_{1}}= \left(1+\exp(-\gamma_{k\alpha\sigma})\right)^{-1}\in(0,1)\)[5]. In general, \(\hat{K}_{2}\) can be a mixed state, where \(n_{k\alpha\sigma;0}\in[0,1]\), and an analytic expression for \(\mathbf{g}_{k\alpha\sigma}\) in terms of \(\gamma_{k\alpha\sigma}\) and \(n_{k\alpha\sigma;0}\) is given in the Appendix. At zero temperature in the metallic phase, \(\hat{K}_{2}\) will be a pure state after minimization and \(n_{k\alpha\sigma;0}\) is either zero or one. For the insulating phase at zero temperature, \(\mathbf{g}_{k\alpha\sigma}\) does not depend on \(n_{k\alpha\sigma;0}\), and therefore we are free to choose \(n_{k\alpha\sigma;0}\in[0,1]\), though for convenience we still choose zero or one. A general expression for \(\mathbf{g}_{k\alpha\sigma}\) is presented in Eq. 
S8 in Supplementary Material [1], which in the case of \(n_{k\alpha\sigma;0}=1\) reduces to \[\mathbf{g}_{k\alpha\sigma}\big{|}_{n_{k\alpha\sigma;0}=1}=C^{-1}\begin{pmatrix}C& 0&0\\ -m_{21}&m_{22}&m_{23}\\ -m_{31}&-m_{32}&m_{22}\end{pmatrix}, \tag{42}\] where \[m_{21} =\left(\lambda_{k\alpha\sigma}-1\right){}^{2}S_{\alpha\sigma;11}, \tag{43}\] \[m_{22} =\lambda_{k\alpha\sigma}^{2},\] (44) \[m_{23} =\left(1-\lambda_{k\alpha\sigma}\right)\lambda_{k\alpha\sigma},\] (45) \[m_{31} =\left(1-\lambda_{k\alpha\sigma}\right)\lambda_{k\alpha\sigma}S_{ \alpha\sigma;11},\] (46) \[m_{32} =\left(1-\lambda_{k\alpha\sigma}\right)\lambda_{k\alpha\sigma}S_{ \alpha\sigma;12},\] (47) \[C =\lambda_{k\alpha\sigma}^{2}+\left(\lambda_{k\alpha\sigma}-1\right) {}^{2}S_{\alpha\sigma;12}. \tag{48}\] Furthermore, it is natural to reparametrize \(\lambda_{k\alpha\sigma}\) using \(n_{k\alpha\sigma}=[\mathbf{g}_{k\alpha\sigma}]_{33}=\langle\hat{n}_{k\alpha\sigma} ^{(3)}\rangle_{\hat{E}_{K}}\), which is the physical density distribution \(\langle\hat{n}_{k\alpha\sigma}\rangle_{\hat{\rho}}\) within the SCDA, as \[\lambda_{k\alpha\sigma}=\frac{n_{k\alpha\sigma}S_{\alpha\sigma;12}}{n_{k\alpha \sigma}S_{\alpha\sigma;12}+\sqrt{\left(1-n_{k\alpha\sigma}\right)n_{k\alpha \sigma}S_{\alpha\sigma;12}}}, \tag{49}\] resulting in \[\mathbf{g}_{k\alpha\sigma}\Big{|}_{n_{k\alpha\sigma;0}=1}\] \[=\left(\begin{array}{ccc}1&0&0\\ -\frac{S_{\alpha\sigma;11}}{S_{\alpha\sigma;12}}\left(1-n_{k\alpha\sigma} \right)&n_{k\alpha\sigma}&\frac{A}{\sqrt{S_{\alpha\sigma;12}}}\\ -\frac{S_{\alpha\sigma;11}}{\sqrt{S_{\alpha\sigma;12}}}A&-\sqrt{S_{\alpha\sigma ;12}}A&n_{k\alpha\sigma}\end{array}\right), \tag{50}\] where \(A=\sqrt{\left(1-n_{k\alpha\sigma}\right)n_{k\alpha\sigma}}\). For the case of \(n_{k\alpha\sigma;0}=0\), we have \[\mathbf{g}_{k\alpha\sigma}\Big{|}_{n_{k\alpha\sigma;0}=0}=C^{-1}\left(\begin{array}[] {ccc}0&m_{12}&m_{13}\\ 0&m_{22}&m_{23}\\ 0&-m_{32}&m_{22}\end{array}\right), \tag{51}\] where \[m_{12} =\lambda_{k\alpha\sigma}^{2}S_{\alpha\sigma;11}, \tag{52}\] \[m_{13} =\left(1-\lambda_{k\alpha\sigma}\right)\lambda_{k\alpha\sigma}S_{ \alpha\sigma;11},\] (53) \[m_{22} =\lambda_{k\alpha\sigma}^{2}S_{\alpha\sigma;12},\] (54) \[m_{23} =\left(1-\lambda_{k\alpha\sigma}\right)\lambda_{k\alpha\sigma}S_{ \alpha\sigma;12},\] (55) \[m_{32} =\left(1-\lambda_{k\alpha\sigma}\right)\lambda_{k\alpha\sigma}\left(S _{\alpha\sigma;11}^{2}+S_{\alpha\sigma;12}^{2}\right),\] (56) \[C =\left(\lambda_{k\alpha\sigma}-1\right){}^{2}\left(S_{\alpha\sigma;1 1}^{2}+S_{\alpha\sigma;12}^{2}\right)+\lambda_{k\alpha\sigma}^{2}S_{\alpha \sigma;12}. \tag{57}\] We can similarly reparametrize \(\lambda_{k\alpha\sigma}\) in terms of \(n_{k\alpha\sigma}\) as \[\lambda_{k\alpha\sigma}=\frac{\sqrt{n_{k\alpha\sigma}}S_{\alpha\sigma}}{\sqrt{n_ {k\alpha\sigma}}S_{\alpha\sigma}+\sqrt{\left(1-n_{k\alpha\sigma}\right)S_{ \alpha\sigma;12}}}, \tag{58}\] where \(S_{\alpha\sigma}=\sqrt{S_{\alpha\sigma;11}^{2}+S_{\alpha\sigma;12}^{2}}\), yielding \[\mathbf{g}_{k\alpha\sigma}\Big{|}_{n_{k\alpha\sigma;0}=0}=\left(\begin{array}{cc}0& \frac{S_{\alpha\sigma;11}}{S_{\alpha\sigma;12}}n_{k\alpha\sigma}&\frac{S_{ \alpha\sigma;11}}{\sqrt{S_{\alpha\sigma;12}S_{\alpha\sigma}}}A\\ 0&n_{k\alpha\sigma}&\frac{\sqrt{S_{\alpha\sigma;12}}}{S_{\alpha\sigma}}A\\ 0&-\frac{S_{\alpha\sigma}}{\sqrt{S_{\alpha\sigma;12}}}A&n_{k\alpha\sigma} \end{array}\right), \tag{59}\] where \(A=\sqrt{\left(1-n_{k\alpha\sigma}\right)n_{k\alpha\sigma}}\). 
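For reference, the closed forms of Eqs. 40-41 and the reparametrizations of Eqs. 49 and 58 translate directly into code. The sketch below assumes scalar inputs for a single spin orbital and is only a transcription of those formulas.

```python
import numpy as np

def self_energy_elements(G12, n, g12):
    """Nontrivial elements of S_loc for one spin orbital (Eqs. 40-41)."""
    denom = (4 * G12 ** 2 + 1) * (g12 ** 2 + n ** 2)
    S11 = (-g12 ** 2 + 4 * g12 * G12 - n ** 2 + n
           + 4 * G12 ** 2 * (g12 ** 2 + (n - 1) * n)) / denom
    S12 = (g12 * (4 * G12 * (G12 - g12) - 1)
           - 4 * (n - 1) * n * G12) / denom
    return S11, S12

def lam_from_nk(nk, S11, S12, in_fermi_sea):
    """Reparametrization of lambda_k in terms of n_k (Eq. 49 for n_{k;0}=1, Eq. 58 for n_{k;0}=0)."""
    if in_fermi_sea:
        return nk * S12 / (nk * S12 + np.sqrt((1 - nk) * nk * S12))
    S = np.sqrt(S11 ** 2 + S12 ** 2)
    return np.sqrt(nk) * S / (np.sqrt(nk) * S + np.sqrt((1 - nk) * S12))
```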
The local integer time Green's function can now be constructed as an average over the Brillouin zone as \[\mathbf{g}_{loc,\alpha\sigma}^{\prime}=\left\langle\hat{\mathbf{n}}_{i\alpha\sigma}\right\rangle_{\hat{\varrho}_{K}}=\int dk\mathbf{g}_{k\alpha\sigma}, \tag{60}\] using the convention \(\int dk=1\). Using the self-consistency condition on the \(A\) block, \[\mathbf{g}_{loc,\alpha\sigma;A}^{\prime}=\mathbf{g}_{loc,\alpha\sigma;A}, \tag{61}\] we can determine the resulting constraints on \(n_{k\alpha\sigma;0}\), \(n_{k\alpha\sigma}\), \(u\), and \(\mathcal{G}_{\alpha\sigma;12}\). There are four constraining equations from the four corresponding entries of the \(A\) block, but only three of them are independent. The first constraint is \([\mathbf{g}_{loc,\alpha\sigma}^{\prime}]_{11}=[\mathbf{g}_{loc,\alpha\sigma}]_{11}\), which yields \[\int dkn_{k\alpha\sigma;0}=\int_{<}dk=n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right), \tag{62}\] where \(n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)=[\mathbf{g}_{loc,\alpha\sigma}]_{11}=[\mathbf{g}_{loc,\alpha\sigma}]_{22}\), the symbol \(<\) denotes the region where \(n_{k\alpha\sigma;0}=1\), while \(>\) denotes the region where \(n_{k\alpha\sigma;0}=0\). The first constraint requires that \(\left|\Psi_{0}\right\rangle\) has the same density as given by \(n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\), and we refer to this as the fermi volume constraint. The second constraint is \([\mathbf{g}_{loc,\alpha\sigma}^{\prime}]_{22}=[\mathbf{g}_{loc,\alpha\sigma}]_{22}\), which yields the density constraint \[\int dkn_{k\alpha\sigma}=\int_{<}dkn_{k\alpha\sigma}+\int_{>}dkn_{k\alpha\sigma}=n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right). \tag{63}\] The third constraint is \([\mathbf{g}_{loc,\alpha\sigma}^{\prime}]_{12}=[\mathbf{g}_{loc,\alpha\sigma}]_{12}\), which yields \[\int_{>}dkn_{k\alpha\sigma}=\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right), \tag{64}\] where \(\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\equiv\frac{S_{\alpha\sigma;12}}{S_{\alpha\sigma;11}}\left[\mathbf{g}_{loc;\alpha\sigma}\right]_{12}.\) The fourth constraint is \([\mathbf{g}_{loc,\alpha\sigma}^{\prime}]_{21}=[\mathbf{g}_{loc,\alpha\sigma}]_{21}\), which yields \[\int_{<}dk\left(1-n_{k\alpha\sigma}\right)=\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right). \tag{65}\] The third and fourth constraints are identical as long as the first and second constraints are satisfied, \[\int_{<}dk\left(1-n_{k\alpha\sigma}\right)=\int_{>}dkn_{k\alpha\sigma}=\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right), \tag{66}\] which we refer to as the charge transfer constraint. We now discuss how to satisfy these three constraints, using constraints on \(n_{k\alpha\sigma;0}\) and \(n_{k\alpha\sigma}\). One can start with arbitrary \(u\) and \(\mathcal{G}_{\alpha\sigma;12}\in[0,1/2]\), which yields some \(n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\) that determines the fermi volume and \(n_{k\alpha\sigma;0}\). Furthermore, one must choose \(n_{k\alpha\sigma}\) such that Eqns. 63 and 66 are satisfied.
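In practice, for a fixed \(b_{X\alpha\sigma}\) the multiplier \(a_{X\alpha\sigma}\) can be obtained by a one-dimensional root search so that the density of each region matches the value demanded by Eqs. 63 and 66. A hedged sketch is given below; the bracket is an assumption and must be wide enough to enclose the root.

```python
import numpy as np
from scipy.optimize import brentq

def solve_multiplier(eps, weights, b, target, bracket=(-50.0, 50.0)):
    """Find the Lagrange multiplier a in one region X so that int_X dk n_k matches its target.

    eps, weights : energy grid and DOS weights restricted to region X.
    target       : desired value of int_X dk n_k (e.g. n - Delta for X='<', Delta for X='>').
    """
    def region_density(a):
        nk = 0.5 * (1.0 + (a - eps) / np.sqrt((a - eps) ** 2 + b ** 2))
        return np.sum(weights * nk) - target

    return brentq(region_density, *bracket)
```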
To simplify the expression for \(\mathbf{g}_{loc}^{\prime}\), it is useful to define the following quantities \[\Delta_{<\alpha\sigma} =\int_{<}dk\left(n_{k\alpha\sigma;0}-n_{k\alpha\sigma}\right)=\int _{<}dk\left(1-n_{k\alpha\sigma}\right), \tag{67}\] \[\Delta_{>\alpha\sigma} =\int_{>}dk(n_{k\alpha\sigma;0}-n_{k\alpha\sigma})=-\int_{>}dkn_{k \alpha\sigma},\] (68) \[\mathcal{A}_{<\alpha\sigma} =\int_{<}dk\sqrt{n_{k\alpha\sigma}\left(1-n_{k\alpha\sigma}\right)},\] (69) \[\mathcal{A}_{>\alpha\sigma} =\int_{>}dk\sqrt{n_{k\alpha\sigma}\left(1-n_{k\alpha\sigma}\right)}. \tag{70}\] Using equations 62 and 63, we have \(\Delta_{<\alpha\sigma}=-\Delta_{>\alpha\sigma}\). Equations 62 and 63 are treated as independent conditions, and equations 64 and 65 become the single condition given in Eq. 66. For convenience, we define \[\Delta_{\alpha\sigma} \equiv\Delta_{<\alpha\sigma}=-\Delta_{>\alpha\sigma}, \tag{71}\] \[n_{\alpha\sigma} \equiv\int_{<}dk=\int dkn_{k\alpha\sigma}, \tag{72}\] which should not be confused with the corresponding quantities \(\Delta_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\) and \(n_{\alpha\sigma}\left(\mathcal{G}_{12},u\right)\) determined from the local discrete action. The quantity \(\Delta_{\alpha\sigma}\) measures the total charge transfer generated by the projector \(\hat{K}_{1}\hat{P}_{1}\) across the fermi surface determined by \(\left|\Psi_{0}\right\rangle\), and is uniquely determined from \(\hat{\varrho}_{loci}\). The non-interacting case yields \(\Delta_{\alpha\sigma}=0\), while the strong coupling limit of the Mott insulating phase yields \(n_{k\alpha\sigma}=n_{\alpha\sigma}\) and \(\Delta_{\alpha\sigma}=n_{\alpha\sigma}\left(1-n_{\alpha\sigma}\right)\). Once \(n_{k\alpha\sigma}\) and \(n_{k\alpha\sigma;0}\) have been constrained by equations 62, 63, and 66, the total kinetic energy can be evaluated as \[K=\sum_{\alpha\sigma}\int dk\epsilon_{k\alpha\sigma}n_{k\alpha\sigma}. \tag{73}\] We now discuss the properties of \(\mathcal{A}_{<\alpha\sigma}\) and \(\mathcal{A}_{>\alpha\sigma}\), which are relevant for evaluating the \(B\), \(C\), and \(D\) blocks of \(\mathbf{g}_{loc;\alpha\sigma}^{\prime}\). The non-interacting case yields \(\mathcal{A}_{<\alpha\sigma}=\mathcal{A}_{>\alpha\sigma}=0\), while for a given \(n_{\alpha\sigma}\) and \(\Delta_{\alpha\sigma}\) the maximum value of \(\mathcal{A}_{<\alpha\sigma}\) is reached when \(n_{k\alpha\sigma}|_{k\in<}=1-\Delta_{\alpha\sigma}/n_{\alpha\sigma}\), and therefore \(\mathcal{A}_{<\alpha\sigma}\in[0,\sqrt{\left(n_{\alpha\sigma}-\Delta_{\alpha \sigma}\right)\Delta_{\alpha\sigma}}]\). Similarly, the maximum of \(\mathcal{A}_{>\alpha\sigma}\) is reached when \(n_{k\alpha\sigma}|_{k\in>}=\Delta_{\alpha\sigma}/\left(1-n_{\alpha\sigma}\right)\) and therefore \(\mathcal{A}_{>\alpha\sigma}\in[0,\sqrt{\left(1-n_{\alpha\sigma}-\Delta_{\alpha \sigma}\right)\Delta_{\alpha\sigma}}]\). 
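Once \(n_{k\alpha\sigma}\) is fixed, the kinetic energy of Eq. 73 is a simple weighted sum, and the quoted bounds on \(\mathcal{A}_{<\alpha\sigma}\) and \(\mathcal{A}_{>\alpha\sigma}\) provide a useful sanity check. A minimal sketch on a shared energy grid (illustrative only):

```python
import numpy as np

def kinetic_energy(eps, weights, nk_per_spin_orbital):
    """Total kinetic energy of Eq. 73: K = sum_{alpha sigma} int dk eps_k n_k.

    eps, weights        : shared energy grid and normalized DOS weights (weights sum to 1).
    nk_per_spin_orbital : iterable of arrays n_{k alpha sigma} on the same grid.
    """
    return sum(np.sum(weights * eps * nk) for nk in nk_per_spin_orbital)

def fluctuation_bounds(n, delta):
    """Upper bounds quoted in the text: A_< <= sqrt((n-Delta) Delta), A_> <= sqrt((1-n-Delta) Delta)."""
    return np.sqrt((n - delta) * delta), np.sqrt((1 - n - delta) * delta)
```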
Using these definitions, we have \[\int_{<}dk\mathbf{g}_{k\alpha\sigma}\] \[=\left(\begin{array}{ccc}n_{\alpha\sigma}&0&0\\ -\frac{S_{\alpha\sigma;11}}{S_{\alpha\sigma;12}}\Delta_{\alpha\sigma}&n_{\alpha \sigma}-\Delta_{\alpha\sigma}&\frac{\mathcal{A}_{<,\alpha\sigma}}{\sqrt{S_{ \alpha\sigma;12}}}\\ -\frac{S_{\alpha\sigma;11}}{\sqrt{S_{\alpha\sigma;12}}}\mathcal{A}_{<,\alpha \sigma}&-\sqrt{S_{\alpha\sigma;12}}\mathcal{A}_{<,\alpha\sigma}&n_{\alpha \sigma}-\Delta_{\alpha\sigma}\end{array}\right), \tag{74}\] and \[\int_{>}dk\mathbf{g}_{k\alpha\sigma}\] \[=\left(\begin{array}{ccc}0&\frac{S_{\alpha\sigma;11}}{S_{ \alpha\sigma;12}}\Delta_{\alpha\sigma}&\frac{S_{\alpha\sigma;11}}{\sqrt{S_{ \alpha\sigma;12}}S_{\alpha\sigma}}\mathcal{A}_{>,\alpha\sigma}\\ 0&\Delta_{\alpha\sigma}&\frac{\sqrt{S_{\alpha\sigma;12}}}{S_{\alpha\sigma}} \mathcal{A}_{>,\alpha\sigma}\\ 0&-\frac{S_{\alpha\sigma}}{\sqrt{S_{\alpha\sigma;12}}}\mathcal{A}_{>,\alpha \sigma}&\Delta_{\alpha\sigma}\end{array}\right), \tag{75}\] yielding the local lattice Green's function \[\mathbf{g}^{\prime}_{loc,\alpha\sigma}=\left(\begin{array}{ccc}n_{\alpha\sigma} &\frac{S_{\alpha\sigma;11}}{S_{\alpha\sigma;12}}\Delta_{\alpha\sigma}&g_{13} \\ -\frac{S_{\alpha\sigma;11}}{S_{\alpha\sigma;12}}\Delta_{\alpha\sigma}&n_{ \alpha\sigma}&g_{23}\\ g_{31}&g_{32}&n_{\alpha\sigma}\end{array}\right), \tag{76}\] where \[g_{13} =\frac{S_{\alpha\sigma;11}}{\sqrt{S_{\alpha\sigma;12}}S_{\alpha \sigma}}\mathcal{A}_{>,\alpha\sigma}, \tag{77}\] \[g_{23} =\frac{1}{\sqrt{S_{\alpha\sigma;12}}}\mathcal{A}_{<,\alpha\sigma }+\frac{\sqrt{S_{\alpha\sigma;12}}}{S_{\alpha\sigma}}\mathcal{A}_{>,\alpha \sigma},\] (78) \[g_{31} =-\frac{S_{\alpha\sigma;11}}{\sqrt{S_{\alpha\sigma;12}}}\mathcal{ A}_{<,\alpha\sigma},\] (79) \[g_{32} =-\sqrt{S_{\alpha\sigma;12}}\mathcal{A}_{<,\alpha\sigma}-\frac{S _{\alpha\sigma}}{\sqrt{S_{\alpha\sigma;12}}}\mathcal{A}_{>,\alpha\sigma}. \tag{80}\] The above equations provide explicit expressions for all blocks of \(\mathbf{g}^{\prime}_{loc;\alpha\sigma}\). Determining the \(B\), \(C\), and \(D\) blocks of \(\mathbf{\mathcal{G}}\) and evaluating the total energy Assuming that \(\mathbf{S}_{loc}\) and \(\mathbf{g}^{\prime}_{loc}\) have been completely determined, the self-consistency condition \(\mathbf{g}_{loc}=\mathbf{g}^{\prime}_{loc}\) can be used to determine the remaining blocks of \(\mathbf{\mathcal{G}}\) via the discrete Dyson equation as \[\mathbf{\mathcal{G}}=\left(\mathbf{1}+\left(\mathbf{g}^{-1}_{loc}-\mathbf{1}\right)\mathbf{S}^{-1} _{loc}\right)^{-1}, \tag{81}\] which yields \[\mathbf{\mathcal{G}}_{B} =\mathbf{\mathcal{G}}_{A}\mathbf{g}^{-1}_{loc;A}\mathbf{g}_{loc;B}, \tag{82}\] \[\mathbf{\mathcal{G}}_{C} =\mathbf{g}_{loc;C}\left(\mathbf{1}-\mathbf{g}_{loc;A}\right)^{-1}\left(\mathbf{1} -\mathbf{\mathcal{G}}_{A}\right),\] (83) \[\mathbf{\mathcal{G}}_{D} =\mathbf{g}_{loc;D}\] \[+\mathbf{g}_{loc;C}\left(\mathbf{1}-\mathbf{g}_{loc;A}\right)^{-1}\left(\mathbf{g }_{loc;A}-\mathbf{\mathcal{G}}_{A}\right)\mathbf{g}^{-1}_{loc;A}\mathbf{g}_{loc;B}. \tag{84}\] The individual matrix elements of \(\mathbf{\mathcal{G}}\) are provided in Supplementary Material [1]. We now proceed to compute the total energy. Having completely determined \(\mathbf{\mathcal{G}}\), the local interaction energy can be computed using Eq. 
29 as \[E_{loc}=\frac{u^{T}\left(\hat{H}^{(3)}_{loc}\right)_{u}u}{u^{T}\left(\hat{1} \right)_{u}u}, \tag{85}\] where the matrix \((\hat{H}^{(3)}_{loc})_{u}\) depends on \(\mathbf{\mathcal{G}}\) and the matrix \(\left(\hat{1}\right)_{u}\) depends on \(\mathbf{\mathcal{G}}_{A}\). It should be emphasized that the density distribution \(n_{k\alpha\sigma}\) will influence \(E_{loc}\) through \(\mathbf{\mathcal{G}}_{B}\)and \(\mathbf{\mathcal{G}}_{C}\). ## III Results for SU(2N\({}_{\rm orb}\)) Hubbard model We now proceed to illustrate the gauge constrained implementation of VDAT within the SCDA at \(\mathcal{N}=3\). We choose to study the SU(2N\({}_{\rm orb}\)) Hubbard model, where \(\hat{H}_{loc;i}=U\sum_{\ell<\ell^{\prime}}\hat{n}_{i\ell}\hat{n}_{i\ell^{ \prime}}\) and \(\ell=1,\ldots,2\)N\({}_{\rm orb}\), in order to showcase the advantages of VDAT over DMFT for obtaining zero temperature results at large N\({}_{\rm orb}\). It should be noted that the SU(2N\({}_{\rm orb}\)) symmetry can be exploited when evaluating the expectation values of local observables (i.e. Eq. 29), but for purposes of benchmarking the computational cost, we utilized the general algorithm which does not exploit the SU(2N\({}_{\rm orb}\)) symmetry. Therefore, for a single evaluation of the SPD, the computational cost is the same as the general case. To provide a rough idea of the computational cost on a typical single processor core, when N\({}_{\rm orb}\leq 3\) the cost of a single evaluation of the SPD is \(10^{-4}\) seconds. For N\({}_{\rm orb}>3\), the computational cost scales exponentially, requiring \(10^{-3}\), 0.02, 0.1, and 3 seconds for N\({}_{\rm orb}\) of 4, 5, 6, and 7, respectively. The timing difference between \(\mathcal{N}=2\) and \(\mathcal{N}=3\) is negligible in all the aforementioned cases. We study the SU(2N\({}_{\rm orb}\)) Hubbard model on the \(d=\infty\) Bethe lattice for N\({}_{\rm orb}\)=2-8 at half filling and 2N\({}_{\rm orb}\)=5 for all fillings. At half filling for N\({}_{\rm orb}\)=2,4 we compare to a zero temperature extrapolation of published DMFT results using a quantum Monte-Carlo (QMC) impurity solver [2], while for 2N\({}_{\rm orb}\)=5 we compare to published DMFT results using the numerical renormalization group (NRG) impurity solver [18]. All VDAT results in this paper are generated by a Julia implementation of the gauge constrained algorithm, which can treat general density-density interactions, and is available at [7]. We begin by considering the case of half filling for N\({}_{\rm orb}\)=2-8, exceeding the number of orbitals that would be encountered for a \(d\) or \(f\) electron manifold relevant to strongly correlated electron materials. The basic aspects of the SU(2N\({}_{\rm orb}\)) Hubbard model in \(d=\infty\) are well understood: there is a metal-insulator transition (MIT) at a critical value of \(U\), and this transition value increases with N\({}_{\rm orb}\). The signature of the MIT can be seen in the variational parameters of the SPD, and therefore it is instructive to examine \(\mathcal{G}_{\alpha\sigma;12}\), \(b_{a\sigma<}\), and the local variational parameters \(u=\{u_{\Gamma}\}\) (see Figure 1). It is also useful to explore the intermediate parame Figure 1: Optimized variational parameters of VDAT within the SCDA at \(\mathcal{N}=3\) for the SU(2N\({}_{\rm orb}\)) Hubbard model on the \(d=\infty\) Bethe lattice at half filling and \(T=0\), with N\({}_{\rm orb}=2-8\). 
The variational parameters \(\mathcal{G}_{\alpha\sigma;12}\), \(b_{a\sigma<}\), and \(u_{\Gamma}\) for \(\Gamma\) with particle number N\({}_{\rm orb}-1\) are shown in panels \(a\), \(b\), and \(d\), respectively, while the dependent parameters \(a_{\alpha\sigma<}\) are shown in panel \(c\). The inset of panel \(d\) shows all \(u_{\Gamma}\) for N\({}_{\rm orb}=5\), and the value monotonically decreases as the particle number of \(\Gamma\) decreases from four to zero. Figure 2: Static observables computed from VDAT within the SCDA at \(\mathcal{N}=2\) (dashed lines) and \(\mathcal{N}=3\) (solid lines) for the SU(2N\({}_{\rm orb}\)) Hubbard model on the \(d=\infty\) Bethe lattice at half filling and \(T=0\), with N\({}_{\rm orb}=2-8\). The legend colors are identical to Figure 1. (\(a\)) Total energy difference per spin orbital vs. \(U/t\), where \(\Delta E=E(t,U)-E(0,U)\). (\(b\)) Kinetic energy per spin orbital vs. \(U/t\). (\(c\)) Scaled double occupancy vs. \(U/t\), where \(\tilde{D}=(D-D_{at})/(D_{0}-D_{at})\); \(D\), \(D_{0}\), and \(D_{at}\) are the double occupancies at the given \(U/t\), the non-interacting value, and the atomic value, respectively. ters \(a_{\alpha\sigma<}\), which can be determined from the variational parameters. Because of the SU(2N\({}_{\rm orb}\)) symmetry, the \(\mathcal{G}_{\alpha\sigma;12}\), \(b_{\alpha\sigma<}\), and \(a_{\alpha\sigma<}\) are independent of \(\alpha\sigma\), while \(u_{\Gamma}\) only depends on the particle number of \(\Gamma\). Furthermore, particle-hole symmetry at half filling dictates that \(b_{\alpha\sigma<}=b_{\alpha\sigma>}\), \(a_{\alpha\sigma<}=-a_{\alpha\sigma>}\), and \(u_{\Gamma}=u_{\Gamma^{\prime}}\) if the sum of the particle numbers of \(\Gamma\) and \(\Gamma^{\prime}\) is 2N\({}_{\rm orb}\). Considering \(\mathcal{G}_{\alpha\sigma;12}\), it begins at \(U=0\) in the metallic phase with a value of roughly 0.37 and monotonically increases to 0.5 at the MIT, and is fixed at 0.5 in the insulating regime (see Figure 1a). Increasing the number of orbitals from N\({}_{\rm orb}\)=2-8 has several effects. In the small \(U\) regime, \(\mathcal{G}_{\alpha\sigma;12}\) increases with N\({}_{\rm orb}\), enhancing the electronic correlations for larger N\({}_{\rm orb}\). Alternatively, increasing N\({}_{\rm orb}\) will increase the critical value of \(U\) for the metal-insulator transition. The variational parameter \(b_{\alpha\sigma<}\) turns out to be approximately equal to \(U\), and therefore we also plot \(b_{\alpha\sigma<}-U\) (see Figure 1b). The intermediate parameter \(a_{\alpha\sigma<}\), which can be computed from the variational parameters, has a value of \(2t\) for \(U=0\), independent of N\({}_{\rm orb}\), and goes to zero in the insulating phase (see Figure 1c). The latter can be appreciated from Eq. 22, which dictates that the quasiparticle weight becomes zero when \(a_{\alpha\sigma<}=a_{\alpha\sigma>}\) and \(b_{\alpha\sigma<}=b_{\alpha\sigma>}\). For a given N\({}_{\rm orb}\), there is an arbitrary coefficient for the interacting projector which can be fixed by choosing \(u_{\Gamma}=1\) when the particle number of \(\Gamma\) is N\({}_{\rm orb}\). Therefore, there are N\({}_{\rm orb}\) independent local variational parameters \(u_{\Gamma}\) for \(\Gamma\) with particle number equal to \(0,\ldots,{\rm N}_{\rm orb}-1\). For a given N\({}_{\rm orb}\), we plot the \(u_{\Gamma}\) where the particle number for \(\Gamma\) is N\({}_{\rm orb}\)-1 (see Figure 1d), in addition to all \(u_{\Gamma}\) for N\({}_{\rm orb}=5\). 
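The counting of independent local variational parameters stated above is easily made explicit. The minimal sketch below (function name of our choosing) enumerates the local configurations \(\Gamma\), groups them by particle number, and applies the normalization together with the half-filling particle-hole relation quoted above; it reproduces the count of N\({}_{\rm orb}\) independent \(u_{\Gamma}\).

```python
import numpy as np
from itertools import product

def count_independent_u(n_orb):
    """Count independent u_Gamma for the SU(2N_orb) Hubbard model at half
    filling, assuming (i) u_Gamma depends only on the particle number of
    Gamma, (ii) the N_orb-particle sector is normalized to u = 1, and
    (iii) particle-hole symmetry links sectors n and 2N_orb - n."""
    n_so = 2 * n_orb                      # number of spin orbitals
    # group the 2**(2 N_orb) local configurations by particle number
    sector_size = np.zeros(n_so + 1, dtype=int)
    for gamma in product((0, 1), repeat=n_so):
        sector_size[sum(gamma)] += 1
    assert sector_size.sum() == 2 ** n_so
    # independent sectors: particle numbers 0 .. N_orb-1 (half filling is the
    # normalization; sectors above half filling follow by particle-hole symmetry)
    return sector_size, n_orb

for n_orb in range(2, 9):
    sizes, n_indep = count_independent_u(n_orb)
    print(f"N_orb={n_orb}: sector sizes {sizes.tolist()}, "
          f"{n_indep} independent u_Gamma")
```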
The \(u_{\Gamma}\) for \(\Gamma\) with particle number N\({}_{\rm orb}\)-1 goes to zero in the insulating phase. We now consider the total energy, where we explore both \(\mathcal{N}=2\) and \(\mathcal{N}=3\) (see Fig. 2a). For clarity, we plot \(\Delta E=E(t,U)-E(0,U)\) divided by the number of spin orbitals. The \(\mathcal{N}=3\) results are always lower in energy than the \(\mathcal{N}=2\) results, as is required by the variational principle. Furthermore, the \(\mathcal{N}=3\) results resolve the well known deficiency of the \(\mathcal{N}=2\) results in the insulating regime. Interestingly, in the small and large \(U\) regimes, the quantity \(\Delta E/(2{\rm N}_{\rm orb})\) is largely independent of N\({}_{\rm orb}\), while in the intermediate regime it decreases with N\({}_{\rm orb}\). Similar behavior is observed in the kinetic energy per orbital (see Fig. 2b). The double occupancy \(D\) determines the interaction energy, and for convenience we plot the scaled double occupancy \(\tilde{D}=(D-D_{at})/(D_{0}-D_{at})\), where \(D_{0}\) and \(D_{at}\) are the non-interacting and atomic double occupancy, respectively (see Fig. 2c). In the small \(U\) regime, the scaled double occupancy decreases with N\({}_{\rm orb}\), while in the large \(U\) regime it is independent of N\({}_{\rm orb}\). It is also interesting to evaluate the quasiparticle weight as a function of \(U/t\), which determines the critical value of \(U\) for the MIT (see Fig. 3a). The \(\mathcal{N}=2\) result always produces a large quasiparticle weight than \(\mathcal{N}=3\), and therefore produces a larger critical value of \(U\) for the MIT. Interestingly, for \(\mathcal{N}=3\) at approximately \(U/t=1\), the quasiparticle weight is insensitive to N\({}_{\rm orb}\), and a similar effect is observed for \(\mathcal{N}=2\) at a slightly large value of \(U/t\). We also examine the transition value \(U_{c}/t\) as a function of N\({}_{\rm orb}\) and compare to the previously published scaling relation [2]\(U_{c}/t=1.7(2{\rm N}_{\rm orb}+1)(1+0.166{\rm N}_{\rm orb}^{-1})\) extracted from DMFT calculations which use QMC as the impurity solver (see Figure 3\(b\)). The \(\mathcal{N}=2\) case recovers the previously published results of the Gutzwiller approximation [9; 19], yielding a linear relation. Interestingly, \(\mathcal{N}=3\) also produces a nearly linear result, but the result is shifted downward, nearly coinciding with the DMFT extrapolation. We now proceed to compare the total energy at half filling and zero temperature from VDAT and DMFT. Given that the previously published DMFT results are at a finite temperature [2], a quadratic fit assuming \(E(T)-E(0)\propto T^{2}\) was used to extrapolate to zero temperature (see Fig. 4). We wer Figure 3: Results for VDAT within the SCDA at \(\mathcal{N}=2\) and \(\mathcal{N}=3\) for the SU(2N\({}_{\rm orb}\)) Hubbard model on the \(d=\infty\) Bethe lattice at half filling and \(T=0\), with N\({}_{\rm orb}=2-8\). (\(a\)) Quasiparticle weight. The legend is identical to Figure 1. (\(b\)) The critical value of \(U\) for the MIT, denoted \(U_{c}\), as a function of N\({}_{\rm orb}\). The DMFT curve is a plot of a previously published fit to DMFT results [2], given as \(U_{c}/t=1.7(2{\rm N}_{\rm orb}+1)(1+0.166{\rm N}_{\rm orb}^{-1})\). select cases where the QMC data sufficiently resembled a quadratic. For \(\mathrm{N_{orb}}=2\), we present VDAT results for \(\mathcal{N}=2-4\), which were previously published in Ref. 
[4], while for \(\mathrm{N_{orb}}=4\) we present VDAT results for \(\mathcal{N}=2-3\). As required by the variational principle, the energy within VDAT monotonically decreases with increasing \(\mathcal{N}\). The dramatic improvement of \(\mathcal{N}=3\) over \(\mathcal{N}=2\) is clearly illustrated, and it should be recalled that these two cases have a similar computational cost. Finally, we evaluate the doping dependence of the total energy, in addition to the corresponding first and second derivatives, using VDAT with \(\mathcal{N}=2\) and \(\mathcal{N}=3\). We compare to a previously published DMFT study which used the NRG impurity solver [18] to study \(2\mathrm{N_{orb}}=5\) for \(U/t=6\) and \(U/t=14\) (see Fig. 5), where \(U/t=6\) is a metal at all densities and \(U/t=14\) undergoes an MIT at the integer fillings of \(n=0.2\) and \(n=0.4\). We first compare the total energy, where the variational principle guarantees that the energy monotonically decreases from \(\mathcal{N}=2\) to \(\mathcal{N}=3\) to the numerically exact solution given by DMFT solved within NRG (see Fig. 5, panel \(a\)). It should be noted the DMFT study did not provide the total energy, and we obtained it by numerically integrating the chemical potential over the density. For clarity, we plot \(\Delta E/t\) where \(\Delta E=E(t,U)-E(0,U)\) and \(E(t,U)\) is the total energy evaluated at a given density. Overall, \(\mathcal{N}=3\) yields a dramatic improvement over \(\mathcal{N}=2\), especially at integer fillings. Interestingly, the trends in the absolute error of the total energy are notably different for \(\mathcal{N}=2\) and \(\mathcal{N}=3\), where the former has the largest absolute error at integer filling while the latter has the largest absolute error midway between integer fillings. The trend for \(\mathcal{N}=3\) might be attributed to the efficacy of the kinetic projector for capturing superexchange at integer filling. We proceed to compare the chemical potential, which is the derivative of the energy with respect to the density, as a function of density (see Fig. 5, panel \(b\)). Within VDAT, the chemical potential was obtained using finite difference to take the first derivative of the total energy with respect to the density, and a grid spacing of \(0.002\) was used for \(0.02<n_{\alpha\sigma}\leq 0.5\) while \(0.0001\) was used for \(n_{\alpha\sigma}<0.02\). For clarity, we plot \(\Delta\mu/U\) where \(\Delta\mu=\mu-2U\). The Mott transition can clearly be identified as a discontinuity in the chemical potential at integer fillings. While \(\mathcal{N}=2\) is reasonable overall, there are clear discrepancies near the MIT (see insets for absolute error in \(\Delta\mu/U\)), and \(\mathcal{N}=3\) largely resolves these issues. However, it should be recalled that unlike the total energy, the convergence of the chemical potential is not necessarily monotonic in \(\mathcal{N}\). For example, in the case of \(U/t=14\) and \(n\to 0.2^{-}\), the total energy for \(\mathcal{N}=2\) is substantially larger than \(\mathcal{N}=3\), yet the chemical potential for \(\mathcal{N}=2\) is much closer to the exact solution. To further scrutinize these same results from another viewpoint, we examine \(U\partial n/\partial\mu\) as a function of density (see Fig. 5\(c\)). Within VDAT, \(\partial\mu/\partial n\) was obtained using finite difference to take the second derivative of the total energy, and the same grid spacing was used as in the case of the chemical potential. 
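The finite-difference post-processing described above can be sketched as follows, assuming a total energy tabulated on a uniform density grid; the energy curve and the value of \(U\) used here are placeholders rather than VDAT output, and the finer grid employed at very low density is omitted.

```python
import numpy as np

# Placeholder energy curve E(n) on a uniform grid of the per-spin-orbital
# density (spacing 0.002 for 0.02 < n <= 0.5, as in the text).
dn = 0.002
n = np.arange(0.02, 0.5 + dn / 2, dn)
E = -0.3 * np.sin(np.pi * n) + 6.0 * n      # illustrative placeholder only

# First derivative: chemical potential mu = dE/dn (central differences in the
# interior, one-sided at the endpoints).
mu = np.gradient(E, dn)

# Second derivative: d(mu)/dn, from which U * dn/dmu follows.
dmu_dn = np.gradient(mu, dn)
U = 6.0                                     # illustrative interaction strength
U_dn_dmu = U / dmu_dn

print(mu[:3], U_dn_dmu[:3])
```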
For the low density region \(n_{\alpha\sigma}<0.1\), the \(\mathcal{N}=3\) results were smoothed using a spline interpolation. Similar to the chemical potential, the convergence of this quantity is not necessarily monotonic in \(\mathcal{N}\), and the same conclusions can be drawn. ## IV Conclusions In this paper, we proposed a gauge constrained algorithm to evaluate the SPD ansatz at \(\mathcal{N}=3\) within the SCDA for the multi-orbital Hubbard model. The key feature of this algorithm is that it automatically satisfies the self-consistency condition of the SCDA using the gauge freedom of the SPD, greatly facilitating the minimization over the variational parameters. Interestingly, the gauge constrained algorithm yields a simple analytical form of the single particle density matrix of the optimized SPD ansatz. The convenient mathematical form of the gauge constrained algorithm greatly simplifies the implementation of VDAT within the SCDA when treating a large Figure 4: The total energy of the \(\mathrm{SU(2N_{orb})}\) Hubbard model on the \(d=\infty\) Bethe lattice at half filling from published DMFT calculations solved using QMC at a finite temperature [2] and VDAT within the SCDA at zero temperature using \(\mathcal{N}=2-4\), with \(\mathrm{N_{orb}}=2\) in panel \(a\) and \(\mathrm{N_{orb}}=4\) in panel \(b\). The lines are a fit to the DMFT results assuming \(E(T)-E(0)\propto T^{2}\). number of orbitals. In order to showcase the power of the gauge constrained algorithm, we studied the \(\mathrm{SU(2N_{orb})}\) Hubbard model at zero temperature on the Bethe lattice in \(d=\infty\) for \(\mathrm{N_{orb}}=2-8\), and compare to numerically exact DMFT results where available. While the symmetry of the \(\mathrm{SU(2N_{orb})}\) Hubbard model greatly reduces the computational cost for solving the DMFT impurity model, computational limitations still restrict most studies to relatively small values of \(\mathrm{N_{orb}}\). A DMFT study using a numerical renormalization group impurity solver presented results up to \(2\mathrm{N_{orb}}=5\)[18], while a study using a quantum Monte-Carlo impurity solver presented finite temperature results up to \(\mathrm{N_{orb}}=8\)[2]; though the latter results were at relatively high temperatures and appear to have nontrivial stochastic error. Therefore, our successful execution of \(\mathrm{N_{orb}}=8\) showcases the utility of the gauge constrained algorithm for executing VDAT within the SCDA at \(\mathcal{N}=3\). At half filling, we evaluated the kinetic energy, interaction energy, and quasiparticle weight as a function of \(U/t\). For the doped case, we evaluated the density as a function of the chemical potential, in addition to the derivative, at various values of \(U/t\). As expected, \(\mathcal{N}=3\) yields a dramatic improvement over \(\mathcal{N}=2\), at a similar computational cost. The successful computation of the ground state energy for the \(\mathrm{N_{orb}}=7\) Hubbard model on a single processor core in under one hour demonstrates the viability of VDAT to study realistic \(f\)-electron systems. The technical developments in this work are a key step forward towards studying realistic Hamiltonians of complex strongly correlated electron materials. ## V Acknowledgements We thank Shuxiang Yang for useful discussions about the manuscript. This work was supported by the Columbia Center for Computational Electrochemistry. 
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. ## Appendix A Solving multi-orbital Hubbard model with a density constraint It is often desirable to solve a Hamiltonian with fixed densities for the spin orbitals, which can be efficiently executed by reparametrizing the variational parameters \(u\). We begin by realizing that the vector space associated with \(u\) can be constructed as a direct product of two dimensional vector spaces associated with each spin orbital. For an operator in the compound space \(\hat{O}=\prod_{\alpha\sigma}\hat{O}_{\alpha\sigma}\), the representation in the \(u\) basis can be constructed as \[(\hat{O})_{u}=(\hat{O}_{1\uparrow})_{u;1\uparrow}\otimes\cdots \otimes(\hat{O}_{N_{orb}\downarrow})_{u;N_{orb}\downarrow}. \tag{11}\] Using this relation, equations 32 and 34 are recast as \[\big{(}\hat{1}\big{)}_{u} =\big{(}\hat{1}\big{)}_{u;1\uparrow}\otimes\cdots\otimes\big{(} \hat{1}\big{)}_{u;N_{orb}\downarrow}\,, \tag{12}\] \[(\hat{a}_{\alpha\sigma}^{(\tau)}\hat{a}_{\alpha\sigma}^{(\tau^{ \prime})})_{u} =\big{(}\hat{1}\big{)}_{u;1\uparrow}\otimes\cdots\] \[\otimes(\hat{a}_{\alpha\sigma}^{(\tau)}\hat{a}_{\alpha\sigma}^{( \tau^{\prime})})_{u;\alpha\sigma}\cdots\otimes\big{(}\hat{1}\big{)}_{u;N_{orb }\downarrow}\,, \tag{13}\] Figure 5: Doping dependent results of the \(\mathrm{SU(2N_{orb})}\) Hubbard model on the \(d=\infty\) Bethe lattice at zero temperature from published DMFT calculations solved using NRG [18] and VDAT within the SCDA using \(\mathcal{N}=2\) and \(\mathcal{N}=3\), with \(2\mathrm{N_{orb}}=5\). (panel \(a\)) Total energy difference \(\Delta E\) vs. density, where \(\Delta E=E(t,U)-E(0,U)\). The DMFT curve is obtained by integrating the chemical potential over density. (panel \(b\)) \(\Delta\mu/U\) vs. density, where \(\Delta\mu=\mu-2U\). The insets plot the absolute error in \(\Delta\mu/U\) vs. density. (panel \(c\)) The derivative \(\partial n/\partial\mu\) times \(U\) vs. the density. where \[[(\hat{\mathbb{1}})_{w;\alpha\sigma}]_{\Gamma_{\alpha\sigma}\Gamma_{ \alpha\sigma}^{\prime}}=p_{\alpha\sigma}\left(\Gamma_{\alpha\sigma},\Gamma_{ \alpha\sigma}^{\prime}\right), \tag{100}\] \[[(\hat{a}_{\alpha\sigma}^{\dagger(\uparrow)}\hat{a}_{\alpha \sigma}^{(\tau^{\prime})})_{w;\alpha\sigma}]_{\Gamma_{\alpha\sigma}\Gamma_{ \alpha\sigma}^{\prime}}=g_{\alpha\sigma}^{\tau\tau^{\prime}}\left(\Gamma_{ \alpha\sigma},\Gamma_{\alpha\sigma}^{\prime}\right), \tag{101}\] where \(p_{\alpha\sigma}\left(\Gamma_{\alpha\sigma},\Gamma_{\alpha\sigma}^{\prime}\right)\) and \(g_{\alpha\sigma}^{\tau\tau^{\prime}}\left(\Gamma_{\alpha\sigma},\Gamma_{ \alpha\sigma}^{\prime}\right)\) are defined in equations 33 and 35, respectively. 
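The direct-product structure of the preceding equations maps directly onto nested Kronecker products. A minimal sketch of the assembly is given below; the \(2\times 2\) single-spin-orbital blocks are treated as inputs (random placeholders here), since \(p_{\alpha\sigma}\) and \(g^{\tau\tau^{\prime}}_{\alpha\sigma}\) are not reproduced in this appendix, and the function names are ours.

```python
import numpy as np
from functools import reduce

def assemble_u_basis_operator(blocks):
    """Assemble (O)_u = (O_{1 up})_{u;1 up} x ... x (O_{Norb dn})_{u;Norb dn}
    from a list of 2x2 single-spin-orbital blocks, ordered as in the direct
    product above."""
    return reduce(np.kron, blocks)

def operator_with_single_insertion(identity_blocks, insertion, index):
    """Identity blocks on every spin orbital except `index`, where the 2x2
    block `insertion` (e.g. a g^{tau tau'} block) is placed instead."""
    blocks = list(identity_blocks)
    blocks[index] = insertion
    return assemble_u_basis_operator(blocks)

# Dimension check for N_orb = 2 (four spin orbitals): the assembled matrix
# must be 2**(2 N_orb) x 2**(2 N_orb) = 16 x 16.
rng = np.random.default_rng(0)
one_blocks = [rng.random((2, 2)) for _ in range(4)]   # placeholder (1)_{u;as}
g_block = rng.random((2, 2))                          # placeholder insertion
full = operator_with_single_insertion(one_blocks, g_block, index=1)
assert full.shape == (16, 16)
```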
The relevant matrices which will be needed to constrain the orbital density are \[\left(\hat{\mathbb{1}}\right)_{u;\alpha\sigma}=\left(\begin{array}{cc} \mathcal{G}_{\alpha\sigma,1,2}^{2}+\frac{1}{4}&\frac{1}{4}-\mathcal{G}_{ \alpha\sigma,1,2}^{2}\\ \frac{1}{4}-\mathcal{G}_{\alpha\sigma,1,2}^{2}&\mathcal{G}_{\alpha\sigma,1,2} ^{2}+\frac{1}{4}\end{array}\right), \tag{102}\] \[\left(\hat{a}_{\alpha\sigma}^{\dagger(1)}\hat{a}_{\alpha\sigma}^{(1)}\right)_ {u;\alpha\sigma}=\left(\begin{array}{cc}0&0\\ \frac{1}{4}-\mathcal{G}_{\alpha\sigma,1,2}^{2}&\mathcal{G}_{\alpha\sigma,1,2} ^{2}+\frac{1}{4}\end{array}\right), \tag{103}\] \[\left(\hat{a}_{\alpha\sigma}^{\dagger(2)}\hat{a}_{\alpha\sigma}^{(2)}\right)_ {u;\alpha\sigma}=\left(\begin{array}{cc}0&\frac{1}{4}-\mathcal{G}_{\alpha \sigma,1,2}^{2}\\ 0&\mathcal{G}_{\alpha\sigma,1,2}^{2}+\frac{1}{4}\end{array}\right). \tag{104}\] In summary, Eq. 101 provides a simple mathematical structure to construct \((\hat{Q})_{u}\). We now proceed to reparametrize the variational parameters \(u\). In general, one can introduce a linear transformation over the variational parameters as \(u=Vw\), and by requiring \(w^{T}(\hat{Q})_{w}w=u^{T}(\hat{Q})_{u}u\), a new matrix form is obtained as \[(\hat{Q})_{w}\equiv V^{T}(\hat{Q})_{u}V. \tag{105}\] In order to preserve the direct product structure of \((\hat{Q})_{w}\), the transformation is constructed as \(V=V_{1\uparrow}\otimes\cdots\otimes V_{\mathrm{N}_{\mathrm{orb}}\downarrow}\), resulting in \[(\hat{Q})_{w}=(\hat{O}_{1\uparrow})_{w;1\uparrow}\otimes\cdots \otimes(\hat{O}_{\mathrm{N}_{\mathrm{orb}}\downarrow})_{w;\mathrm{N}_{\mathrm{orb }}\downarrow}, \tag{106}\] \[(\hat{O}_{\alpha\sigma})_{w;\alpha\sigma}\equiv V_{\alpha\sigma}^ {T}(\hat{O}_{\alpha\sigma})_{u;\alpha\sigma}V_{\alpha\sigma}. \tag{107}\] In order to ensure that \((\hat{\mathbb{1}})_{w;\alpha\sigma}\) is the identity matrix and the symmetric part of \((\hat{a}_{\alpha\sigma}^{\dagger(1)}\hat{a}_{\alpha\sigma}^{(1)})_{w;\alpha\sigma}\) is diagonal, we have \[\boldsymbol{V}_{\alpha\sigma}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}\frac {1}{2\Theta_{\alpha\sigma,1,2}}+1&1-\frac{1}{2\Theta_{\alpha\sigma,1,2}}\\ 1-\frac{1}{2\Theta_{\alpha\sigma,1,2}}&\frac{1}{2\Theta_{\alpha\sigma,1,2}}+1 \end{array}\right), \tag{108}\] thus completely defining the reparametrization. One of the necessary reparameterized matrix elements is \[\boldsymbol{n}_{w;\alpha\sigma} \equiv\frac{1}{2}\left((\hat{a}_{\alpha\sigma}^{\dagger(1)}\hat{ a}_{\alpha\sigma}^{(1)})_{w;\alpha\sigma}+(\hat{a}_{\alpha\sigma}^{\dagger(1)} \hat{a}_{\alpha\sigma}^{(1)})_{w;\alpha\sigma}^{T}\right)\] \[=\left(\begin{array}{cc}-\frac{(1-2\mathcal{G}_{\alpha\sigma,1, 2})^{2}}{8\mathcal{G}_{\alpha\sigma,1,2}}&0\\ 0&\frac{(2\mathcal{G}_{\alpha\sigma,1,2}+1)^{2}}{8\mathcal{G}_{\alpha\sigma,1,2 }}\end{array}\right), \tag{109}\] and the others are provided in Supplementary Material [1]. 
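The requirements that \(\left(\hat{\mathbb{1}}\right)_{w;\alpha\sigma}\) be the identity and that the symmetric part of \((\hat{a}_{\alpha\sigma}^{\dagger(1)}\hat{a}_{\alpha\sigma}^{(1)})_{w;\alpha\sigma}\) be diagonal can be verified numerically for any \(\mathcal{G}_{\alpha\sigma;12}\) directly from the matrices listed above. A short check (the value 0.37 is a representative metallic value from Figure 1\(a\); the function name is ours) reads:

```python
import numpy as np

def check_reparametrization(g12):
    """Verify that V built from G_{as;12} maps (1)_u to the identity and
    diagonalizes the symmetric part of (a^dag(1) a^(1))_u."""
    one_u = np.array([[g12**2 + 0.25, 0.25 - g12**2],
                      [0.25 - g12**2, g12**2 + 0.25]])
    adag_a_u = np.array([[0.0, 0.0],
                         [0.25 - g12**2, g12**2 + 0.25]])
    V = (1.0 / np.sqrt(2.0)) * np.array(
        [[1.0 / (2.0 * g12) + 1.0, 1.0 - 1.0 / (2.0 * g12)],
         [1.0 - 1.0 / (2.0 * g12), 1.0 / (2.0 * g12) + 1.0]])
    one_w = V.T @ one_u @ V
    m_w = V.T @ adag_a_u @ V
    n_w = 0.5 * (m_w + m_w.T)
    # expected diagonal entries of n_{w;as} quoted above
    expected = np.diag([-(1 - 2 * g12)**2 / (8 * g12),
                        (2 * g12 + 1)**2 / (8 * g12)])
    assert np.allclose(one_w, np.eye(2))
    assert np.allclose(n_w, expected)
    return one_w, n_w

check_reparametrization(g12=0.37)
```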
We now proceed to constrain the density for each spin orbital, and we begin by considering the case where the interacting projector is a non-interacting density matrix, which can be written as \[w_{\Gamma;0}^{2}=\prod_{\alpha\alpha}\left(\left(1-n_{\alpha\sigma;eff}\right) \left(1-\Gamma_{\alpha\sigma}\right)+n_{\sigma\alpha;eff}\Gamma_{\alpha\sigma }\right), \tag{110}\] where \(n_{\alpha\sigma;eff}\) can be determined from \[\mathrm{Tr}\left(\boldsymbol{n}_{w;\alpha\sigma}\left(\begin{array}{cc}1-n_ {\alpha\sigma;eff}&0\\ 0&n_{\sigma\alpha;eff}\end{array}\right)\right)=n_{\alpha\sigma}, \tag{111}\] which can be solved as \[n_{\alpha\sigma,\mathrm{eff}}=\frac{2\left(2n_{\alpha\sigma}-1\right)\mathcal{G }_{\alpha\sigma,1,2}}{4\mathcal{G}_{\alpha\sigma,1,2}^{2}+1}+\frac{1}{2}. \tag{112}\] Subsequently, \(2^{\mathrm{2N}_{\mathrm{orb}}}-\left(1+2\mathrm{N}_{\mathrm{orb}}\right)\) variational parameters \(x_{\eta}\) can be introduced to describe the deviations from \(w_{\Gamma;0}^{2}\), which do not change the density or the normalization. It is then useful to define a \(2^{\mathrm{2N}_{\mathrm{orb}}}\times 2^{\mathrm{2N}_{\mathrm{orb}}}\) matrix \(V_{\Gamma\eta}\) as \[V_{\Gamma i\left(\{\alpha_{1}\sigma_{1},\ldots,\alpha_{n}\sigma_{n}\}\right)}= \prod_{j=1}^{n}(\Gamma_{\alpha_{j}\sigma_{j}}-\frac{1}{2}), \tag{113}\] where \(i(\{\alpha_{1}\sigma_{1},\ldots,\alpha_{n}\sigma_{n}\})=1,\ldots,2^{\mathrm{2N}_ {\mathrm{orb}}}\) is a convention for indexing all subsets of \(\{1\uparrow,1\downarrow,\ldots,\mathrm{N}_{\mathrm{orb}}\uparrow,\mathrm{N}_{ \mathrm{orb}}\downarrow\}\), and \(n=0,\ldots,2\mathrm{N}_{\mathrm{orb}}\) denotes the number of spin orbitals contained in a given subset. A convenient convention for sorting the subsets is first sorting by increasing cardinality and then by the binary interpretation of the subset. The subsets with cardinality greater than one form \(2^{\mathrm{2N}_{\mathrm{orb}}}-\left(1+2\mathrm{N}_{\mathrm{orb}}\right)\) orthogonal vectors that do not change the normalization or the orbital occupation. A similar approach has been used to represent the Bernoulli distribution [28]. A general interacting projector that is constrained to the given orbital occupation can be parameterized as \[w_{\Gamma}^{2}=w_{\Gamma;0}^{2}+\sum_{\eta=2+2\mathrm{N}_{\mathrm{orb}}}^{2^{ \mathrm{2N}_{\mathrm{orb}}}}V_{\Gamma\eta}x_{\eta}, \tag{114}\] where \(x_{\eta}\) are real numbers that are constrained by the condition that \(\omega_{\Gamma}^{2}\geq 0\). For example, \(\mathrm{N}_{\mathrm{orb}}=1\) results in one independent variational parameter \(x\), yielding \[w_{1}^{2}=\left(1-n_{\uparrow;eff}\right)\left(1-n_{\downarrow eff }\right)+\frac{1}{4}x, \tag{115}\] \[w_{2}^{2}=\left(1-n_{\uparrow;eff}\right)n_{\downarrow eff}- \frac{1}{4}x,\] (116) \[w_{3}^{2}=n_{\uparrow;eff}\left(1-n_{\downarrow eff}\right)- \frac{1}{4}x,\] (117) \[w_{4}^{2}=n_{\uparrow;eff}n_{\downarrow eff}+\frac{1}{4}x, \tag{118}\] where \(x\in[x_{\mathrm{min}},x_{\mathrm{max}}]\) and \[x_{\mathrm{min}}=-4\min((1-n_{\uparrow;eff})(1-n_{\downarrow eff}),n_{\uparrow; eff}n_{\downarrow eff}), \tag{119}\] \[x_{\mathrm{max}}=4\min((1-n_{\uparrow;eff})n_{\downarrow eff},n_{ \uparrow;eff}(1-n_{\downarrow eff})). 
\tag{120}\] Finally, the ground state energy can be obtained as \[E=\min_{\mathcal{G}_{12},x,\mathbf{b}}\Big{(}\int dk\epsilon_{k\alpha \sigma}n_{k\alpha\sigma}\left(\mathbf{a},\mathbf{b}\right)+E_{loc}\left(\mathcal{G}_{12 },x,\mathbf{b}\right)\Big{)}, \tag{100}\] where \(x=\{x_{\eta}\}\) and \(\mathbf{a}\) is determined from \(\{n_{\alpha\sigma}\}\), \(\mathcal{G}_{12}\), \(x\), and \(\mathbf{b}\). ## Appendix B the gauge constrained algorithm using general local projectors In this paper, we have assumed that the interacting projector can be written as a linear combination of diagonal Hubbard operators in the basis \(\alpha\sigma\), and that \(\mathbf{\mathcal{G}}\) is diagonal in basis \(\alpha\sigma\). Here we outline how to treat the general case, starting with the first assumption. A general local interacting projector can be an arbitrary linear combination of all possible Hubbard operators, including off-diagonal Hubbard operators. A general Hubbard operator can be constructed as \[\hat{P}_{i\Gamma} =\prod_{\alpha\sigma}\Bigl{(}\delta_{\Gamma_{\alpha\sigma},0}(1- \hat{n}_{i\alpha\sigma})+\delta_{\Gamma_{\alpha\sigma},1}\hat{n}_{i\alpha\sigma}\] \[\qquad\quad+\delta_{\Gamma_{\alpha\sigma},2}\hat{a}_{i\alpha \sigma}^{\dagger}+\delta_{\Gamma_{\alpha\sigma},3}\hat{a}_{i\alpha\sigma} \Bigr{)}, \tag{101}\] \[=\prod_{\alpha\sigma}\hat{P}_{i\alpha\sigma;\Gamma_{\alpha\sigma}}, \tag{102}\] where \(\Gamma-1=\left(\Gamma_{1\uparrow}\ldots\Gamma_{\mathrm{N}_{\mathrm{orb}} \downarrow}\right)_{4}\) and \(\Gamma=1,\ldots,4^{2\mathrm{N}_{\mathrm{orb}}}\). The most general interacting projector can be constructed as \(\hat{P}_{i}(u)=\sum_{\Gamma}u_{\Gamma}\hat{P}_{i\Gamma}\). However, given that we require \(\hat{P}_{i}(u)\) to obey certain symmetries and conservation relations, some \(u_{\Gamma}\) may be zero. In order to evaluate \(\langle\hat{P}_{i}^{(1)}\hat{P}_{i}^{(2)}\hat{O}\rangle_{\hat{\varrho}_{loci,0}}\), we first consider \[\hat{O}=\prod_{\alpha\sigma}\hat{O}_{\alpha\sigma}, \tag{103}\] where \(\hat{O}_{\alpha\sigma}\) is a single product in terms of \(\hat{a}_{i\alpha\sigma}^{\dagger(\tau)}\) and \(\hat{a}_{i\alpha\sigma}^{(\tau)}\), yielding \[\left\langle\hat{P}_{i}^{(1)}\hat{P}_{i}^{(2)}\hat{O}\right\rangle_{\hat{ \varrho}_{loci,0}}=\sum_{\Gamma\Gamma^{\prime}}u_{\Gamma^{\prime}}\left\langle \hat{P}_{i\Gamma}^{(1)}\hat{P}_{i\Gamma^{\prime}}^{(2)}\hat{O}\right\rangle_{ \hat{\varrho}_{loci,0}}, \tag{104}\] where \[\left\langle\hat{P}_{i\Gamma}^{(1)}\hat{P}_{i\Gamma^{\prime}}^{ (2)}\hat{O}\right\rangle_{\hat{\varrho}_{loci,0}}\] \[=\left\langle\prod_{\alpha\sigma}\hat{P}_{i\alpha\sigma;\Gamma_ {\alpha\sigma}}^{(1)}\prod_{\alpha\sigma}\hat{P}_{i\alpha\sigma;\Gamma^{ \prime}_{\alpha\sigma}}^{(2)}\prod_{\alpha\sigma}\hat{O}_{\alpha\sigma} \right\rangle_{\hat{\varrho}_{loci,0}} \tag{105}\] \[=\theta\left(\hat{O},\Gamma,\Gamma^{\prime}\right)\prod_{\alpha \sigma}\left\langle\hat{P}_{i\alpha\sigma;\Gamma_{\alpha\sigma}}^{(1)}\hat{P }_{i\alpha\sigma;\Gamma^{\prime}_{\alpha\sigma}}^{(2)}\hat{O}_{\alpha\sigma} \right\rangle_{\hat{\varrho}_{loci,0}}, \tag{106}\] where \(\theta(\hat{O},\Gamma,\Gamma^{\prime})=\pm 1\) and is determined by tracking the sign when ordering the expression from Eq. 105 to Eq. 106. For a general operator \(\hat{O}\), one can always decompose it into a sum of operators which has the form of Eq 103 and apply the above formulas. 
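In the occupation basis \(\{|0\rangle,|1\rangle\}\) of a single spin orbital, the four factors entering the product above are \(2\times 2\) matrices, so the operator content of a general Hubbard operator can be assembled by Kronecker products. The sketch below illustrates only this bookkeeping; the fermionic sign factor \(\theta(\hat{O},\Gamma,\Gamma^{\prime})\) discussed above is intentionally omitted and must be tracked separately.

```python
import numpy as np
from functools import reduce

# Single-spin-orbital operators in the basis {|0>, |1>} (empty, occupied);
# the base-4 digits 0..3 correspond to (1 - n), n, a^dag, a in the product above.
FACTORS = {
    0: np.array([[1, 0], [0, 0]]),   # 1 - n
    1: np.array([[0, 0], [0, 1]]),   # n
    2: np.array([[0, 0], [1, 0]]),   # a^dag  (|1><0|)
    3: np.array([[0, 1], [0, 0]]),   # a      (|0><1|)
}

def hubbard_operator(gamma):
    """Build P_Gamma = prod_as P_{as;Gamma_as} from a tuple of base-4 digits
    (one per spin orbital), ignoring the fermionic sign factor theta."""
    return reduce(np.kron, (FACTORS[g] for g in gamma))

# Example for N_orb = 1 (two spin orbitals): the double-occupancy projector
# n_up n_dn corresponds to gamma = (1, 1).
n_up_n_dn = hubbard_operator((1, 1))
assert n_up_n_dn.shape == (4, 4) and n_up_n_dn[3, 3] == 1 and n_up_n_dn.sum() == 1
```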
In order to treat a general \(\mathbf{\mathcal{G}}\) and a general operator \(\hat{O}\), one must straightforwardly apply Wick's theorem to evaluate expectation values [5], though the resulting gauge constrained algorithm will be more complicated. For example, simple closed form equations such as Eq. 22 may not be obtained, requiring a numerical minimization to obtain the density distribution. Appendix C Understanding how the \(\mathcal{N}=3\) gauge constrained algorithm with a restricted kinetic projector reduces to \(\mathcal{N}=2\) In Ref. [5], we illustrated how the SCDA at \(\mathcal{N}=2\) using the Gutzwiller gauge recovers the Gutzwiller approximation. In the present work where we address \(\mathcal{N}=3\), the gauge constrained algorithm uses a different type of gauge. Therefore, it is interesting to see how the \(\mathcal{N}=3\) gauge constrained algorithm with a restricted kinetic projector can recover the Gutzwiller approximation. In particular, the restricted kinetic projector will force the density distribution to be flat both above and below the fermi surface. We begin by assuming \[\mathbf{\mathcal{G}}_{\alpha\sigma}=\left(\begin{array}{ccc}\frac{1}{2}&\frac{1 }{2}&\frac{1}{2}\\ -\frac{1}{2}&\frac{1}{2}&\frac{1}{2}\\ -\frac{1}{2}&-\frac{1}{2}&\frac{1}{2}\end{array}\right), \tag{107}\] which is motivated by the Gutzwiller gauge. The canonical discrete action of \(\mathbf{\mathcal{G}}_{\alpha\sigma}\) corresponds to an SPD [5], which is the product of three identity operators, and we can write the \(A\) block for interacting Green's function as \[\mathbf{g}_{loc,\alpha\sigma;A}=\left(\begin{array}{cc}n_{\alpha \sigma}&a_{\alpha\sigma}r_{\alpha\sigma}\\ -a_{\alpha\sigma}r_{\alpha\sigma}&n_{\alpha\sigma}\end{array}\right), \tag{108}\] where \(a_{\alpha\sigma}=\sqrt{\left(1-n_{\alpha\sigma}\right)n_{\alpha\sigma}}\) and \(r_{\alpha\sigma}\) denotes the renormalization for the off-diagonal elements of the A-block compared to the reference interacting Green's function \[\mathbf{g}_{loc,\alpha\sigma;A;ref}=\left(\begin{array}{cc}n_{\alpha\sigma}&a_{ \alpha\sigma}\\ -a_{\alpha\sigma}&n_{\alpha\sigma}\end{array}\right), \tag{109}\] which corresponds to the case where \(\hat{P}\) is non-interacting, denoted as \(P_{0}\). The \(\hat{P}_{0}\) is chosen such that \[\frac{\mathrm{Tr}\left(\hat{P}_{0}^{2}\hat{n}_{\alpha\sigma}\right)}{\mathrm{ Tr}\left(\hat{P}_{0}^{2}\right)}=\frac{\mathrm{Tr}\left(\hat{P}^{2}\hat{n}_{ \alpha\sigma}\right)}{\mathrm{Tr}\left(\hat{P}^{2}\right)}=n_{\alpha\sigma}, \tag{110}\] and \(r_{\alpha\sigma}\) is given as \[r_{\alpha\sigma}=\frac{\mathrm{Tr}\left(\hat{P}\hat{a}_{\alpha \sigma}^{\dagger}\hat{P}\hat{a}_{\alpha\sigma}\right)}{\mathrm{Tr}\left(\hat{P} ^{2}\right)}/\frac{\mathrm{Tr}\left(\hat{P}_{0}\hat{a}_{\alpha\sigma}^{\dagger} \hat{P}_{0}\hat{a}_{\alpha\sigma}\right)}{\mathrm{Tr}\left(\hat{P}_{0}^{2} \right)}, \tag{111}\]
2307.12737
Exploring the Weber dependency of jet fragmentation: a Direct Numerical Simulation investigation
Jet fragmentation is investigated through a Direct Numerical Simulation campaign using Basilisk (Popinet & collaborators 2013). The simulations span over one order of magnitude of gaseous Weber numbers (13 to 165), i.e. over the second wind-induced and atomization regimes, and the jets develop over distances up to 28 nozzle diameters. The study focuses on the size and velocity distributions of droplets, as well as their joint distribution. Two models derived from different theoretical backgrounds, the statistical description of the turbulence intermittency (Novikov & Dommermuth 1997) and the empirical description of the ligament-mediated fragmentation (Villermaux et al. 2004), are compared for describing the droplet size distribution close to the nozzle. The characteristics of the size-velocity joint distribution are explained using the vortex ring theory (Saffman 1992) which highlights two sources of fragmentation. Finally, the joint histogram of the particulate Reynolds and Ohnesorge numbers is analysed and a normalisation is suggested. It reveals that the delimitations of the droplet phase space, once properly normalised, are self-similar and independent of the gaseous Weber number, both numerically and experimentally.
Romain Vallon, Malek Abid, Fabien Anselmet
2023-07-24T12:28:04Z
http://arxiv.org/abs/2307.12737v1
[ ###### Abstract Jet fragmentation is investigated through a Direct Numerical Simulation campaign using Basilisk (Popinet & collaborators 2013). The simulations span over one order of magnitude of gaseous Weber numbers (13 to 165), i.e. over the second wind-induced and atomization regimes, and the jets develop over distances up to 28 nozzle diameters. The study focuses on the size and velocity distributions of droplets, as well as their joint distribution. Two models derived from different theoretical backgrounds, the statistical description of the turbulence intermittency (Novikov & Dommermuth 1997) and the empirical description of the ligament-mediated fragmentation (Villermaux _et al._ 2004), are compared for describing the droplet size distribution close to the nozzle. The characteristics of the size-velocity joint distribution are explained using the vortex ring theory (Saffman 1992) which highlights two sources of fragmentation. Finally, the joint histogram of the particulate Reynolds and Ohnesorge numbers is analysed and a normalisation is suggested. It reveals that the delimitations of the droplet phase space, once properly normalised, are self-similar and independent of the gaseous Weber number, both numerically and experimentally. ]Exploring the Weber dependency of jet fragmentation: a Direct Numerical Simulation investigation Romain Vallon +, Malek Abid and Fabien Anselmet\({}^{1}\) Footnote †: Email address for correspondence: [email protected] ## 1 Introduction Jet fragmentation occurs in numerous natural mechanisms and industrial applications. It can appear in the form of an ocean or lava spray when waves crash on the shore or during volcanic eruption, yet it is more common to find this physical mechanism in medication sprays, fuel injection systems of combustion engines or agricultural sprinkling. Jet fragmentation can be a challenging configuration to be studied numerically. Fragmentation flows of high Reynolds and Weber numbers present a large diversity of scales and fluid objects whose dynamics are partly governed by the surface tension and the turbulent characteristics of the flow, which gives them a high complexity. Their Direct Numerical Simulation (DNS) requires to solve the two phase Navier Stokes equations with surface tension. A fine resolution of the interfaces is of utmost importance and can be achieved with an optimized use of computing resources thanks to adaptive grids. Those multiphase flows result from the injection of a dense phase at a velocity \(U_{inj}\) into a lighter phase through a nozzle of diameter \(d_{n}\) and produce a polydisperse spray. The phases are denoted by subscript \(i\) which takes the value \(1\) for the injected dense phase and \(2\) for the lighter phase. Both phases are respectively renamed liquid and gas in the following. With the Reynolds (\(Re\)) and Weber (\(We\)) numbers, the Ohnesorge (\(Oh\)) number completes the list of governing dimensionless numbers. The first one represents the ratio of inertia over viscosity and the second one the ratio of inertia over surface tension. The latter relates to the droplet deformation and represents the ratio of viscosity over the product of the surface tension and inertia. 
Their respective expressions follow: \[Re_{i}=\frac{\rho_{i}U_{inj}d_{n}}{\mu_{i}},\quad We_{i}=\frac{\rho_{i}U_{inj}^{ 2}d_{n}}{\sigma},\quad Oh_{1}=\frac{\mu_{1}}{\sqrt{\rho_{1}d_{n}\sigma}} \tag{1}\] where \(\rho_{i}\) and \(\mu_{i}\) denote the density and the dynamic viscosity of the phase \(i\) and \(\sigma\) the surface tension between the two phases, taken as constant. Lefebvre & McDonell (2017) categorized five fragmentation regimes for non-assisted fragmentation of round jets, whose delimitations mainly depend on the Weber number. The focus is given here on two of them: the second wind-induced regime for which \(We_{2}\in[13,40.3]\) and the so-called atomisation regime for which \(We_{2}>40.3\). Complementary, the jet configurations are distinguished between large jets, \(d_{n}>1\)mm, and small jets, \(d_{n}<1\)mm. In addition, the fragmentation of a jet is often split into several breakup types: the primary and the secondary breakups. The former corresponds to the generation of elements only coming from the dense core, while the latter considers large elements dumped from the core which undergo further fragmentation, illustrated in figure 1. Thus, the physical border of the two breakup types is the location where the dense core pinches off and generates large scale elements, which are unstable in flows of moderate or large liquid Reynolds number \(Re_{1}\) and gaseous Weber number \(We_{2}\). Figure 1: Scheme of the fragmentation in the second wind-induced regime of a liquid jet into a dispersed phase composed of droplets with size \(d\) and velocity vector \(\mathbf{u}\). The distances indicated here correspond to the ones observed in the experiments of Felis _et al._ (2020), for which the jet lies in the second wind-induced regime of fragmentation with \(We_{2}=24\). Numerical studies of jet fragmentation mainly focus on the primary breakup region, close to the nozzle, due to limitations on computational resources and numerical challenges (Gorokhovski & Herrmann, 2008; Fuster _et al._, 2009; Tryggvason _et al._, 2011; Popinet, 2018). Zandian _et al._ (2017) realised DNS to study the evolution of a planar jet and specifically focused on the development of three-dimensional instabilities. Ling _et al._ (2017_a_) studied a quasi planar gas-liquid mixing layer at moderate density ratio (\(\rho_{1}/\rho_{2}=20\), \(Re_{1}=160000\), \(We_{2}=20\)) thanks to finely resolved DNS. They were able to explain precisely the development of instabilities on the sheet interface. They captured the development of Taylor Culick instabilities (Taylor, 1959; Culick, 1960) as well as the fragmentation of a ligament into droplets and finally compared the droplet size distribution obtained for different grid refinements with the logarithmic normal and \(\Gamma\) laws. On the side of round liquid jets, the latest studies rely on DNS using the code Basilisk (Popinet & collaborators, 2013) or the SPH method. Chaussonnet _et al._ (2018) used the latter to explore the droplet population produced by a twin-fluid atomizer at high pressure up to \(x/d_{n}\approx 10\) (\(\rho_{1}/\rho_{2}=93\), \(Re_{1}=1.27\times 10^{7}\), \(We_{2}=1375\)). Ling _et al._ (2017_b_) used Basilisk to observe the influence of viscosity on the fragmentation of a round biodiesel jet (\(\rho_{1}/\rho_{2}=78.2\), \(Re_{1}=1450\), \(We_{2}=12.9\)) developing up to \(x/d_{n}\approx 20\) while testing different grid refinements. 
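Since the regimes discussed throughout are classified by the dimensionless numbers defined at the beginning of this section, a short helper evaluating them is sketched below. The fluid properties and nozzle parameters in the example are assumed, order-of-magnitude values for water injected into air (\(d_{n}\approx 1.2\) mm, \(U_{inj}\approx 35\) m/s); with these assumptions the outputs land close to the experimental values of Felis _et al._ (2020) quoted further below (\(Re_{1}\approx 4\times 10^{4}\), \(We_{2}\approx 25\), \(Oh_{1}\approx 3.4\times 10^{-3}\)).

```python
import numpy as np

def dimensionless_numbers(rho1, mu1, rho2, mu2, sigma, u_inj, d_n):
    """Liquid/gas Reynolds, Weber and liquid Ohnesorge numbers as defined above."""
    re1 = rho1 * u_inj * d_n / mu1
    re2 = rho2 * u_inj * d_n / mu2
    we1 = rho1 * u_inj**2 * d_n / sigma
    we2 = rho2 * u_inj**2 * d_n / sigma
    oh1 = mu1 / np.sqrt(rho1 * d_n * sigma)
    return re1, re2, we1, we2, oh1

# Assumed, illustrative parameter set (roughly water injected into air).
print(dimensionless_numbers(rho1=1000.0, mu1=1.0e-3,
                            rho2=1.2, mu2=1.8e-5,
                            sigma=0.072, u_inj=35.0, d_n=1.2e-3))
```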
Zhang _et al._ (2020) observed the fragmentation of a round diesel jet injected through a solid G-spray injector, developing up to \(x/d_{n}\approx 20\) as well (\(\rho_{1}/\rho_{2}=233\), \(Re_{1}=13400\), \(We_{2}=177\)). Through their study, the authors were able to observe the fragmentation of the liquid core into droplets as well as the spatial distribution of the vortices along the core. In addition, the authors modeled the droplet size distribution relative to the azimuthal angle by a hyperbolic tangent function. Finally, both studies relying on Basilisk compared the logarithmic stable and the \(\Gamma\) laws, the former being derived in the context of turbulence (Novikov, 1994; Novikov & Dommermuth, 1997) and the latter in the context of ligament mediated fragmentation (Villermaux _et al._, 2004; Villermaux, 2020), to fit the droplet size distribution and concluded on the better performance of the fit with the logarithmic normal law in linear mode, i.e. fitting the signal as it is. Later experimental studies (Stevenin _et al._, 2016; Felis _et al._, 2020) used specific droplet tracking velocimetry (DTV) and laser Doppler velocimetry (LDV) apparatus to explore the dispersed zone of agricultural-like jets (\(\rho_{1}/\rho_{2}=828.5\), \(Re_{1}=41833\), \(We_{2}=24\)). The measurements were carried far away from the nozzle, \(x\geqslant 400\)\(d_{n}\), in the zone where the liquid core is fully atomized and where only the secondary breakup occurs. Based on those joint size-velocity measurements, Vallon _et al._ (2021) highlighted the multimodal nature of the droplet size distribution along with the existence of droplet subgroups, each of them being characterised by a specific pair of size and velocity. The present paper aims to complete the experimental campaigns by studying numerically the field close to the nozzle in similar flow conditions up to \(x/d_{n}=28\) in order to have a more global view of the fragmentation process that agricultural like jets undergo as well as to compare the logarithmic and \(\Gamma\) laws for describing the droplet size distribution. To do so, section 2 presents the flow modeling and the parameter framing. Section 3 is dedicated to the analysis of the overall flow characteristics. Section 4 focuses on the analysis of the droplet population statistics and the mechanisms from which they are generated, while section 5 puts in perspective the conclusions and opens up on the study of the droplet topography. ## 2 Flow modeling and parameter framing This section presents the governing equations, the numerical methods, the choice of the physical configurations, the numerical configuration and the computation cost. It finally introduces the selection of the most unstable mode of the jet, in order to stimulate the jet fragmentation. ### Governing equations Direct Numerical Simulations (DNS) aim to resolve all time and length scales by solving the Navier-Stokes equations. However, this resolution is often limited by the available computational resources. The fragmentation mechanism under consideration occurs at low Mach numbers neglecting gravitational forces and involves two immiscible, incompressible fluids. 
The flow dynamics is then governed by the unsteady Navier-Stokes equations and can be expressed in the theoretical framework of a one fluid flow with variable density and viscosity as: \[\frac{\partial\rho\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\boldsymbol{\nabla}) (\rho\mathbf{u})=-\nabla p+\boldsymbol{\nabla}\cdot\left(\mu(\boldsymbol{ \nabla}\mathbf{u}+\boldsymbol{\nabla}^{T}\mathbf{u})\right)+\mathbf{T}_{ \sigma} \tag{1}\] \[\frac{\partial\rho}{\partial t}+\boldsymbol{\nabla}\cdot(\rho\boldsymbol{u})=0, \tag{2}\] \[\boldsymbol{\nabla}\cdot\mathbf{u}=0 \tag{3}\] where \(\mathbf{u}\) is the velocity vector, \(p\) the pressure and \(\mathbf{T}_{\sigma}\) the surface tension force, only defined on the liquid-gas interfaces. The two phases are taken into account in the one fluid framework through the phase indicator, named fraction field and denoted \(\alpha\) in the following. The fraction equals 1 if a cell only contains liquid and 0 if it only contains gas. The one fluid viscosity and density are computed over the phase quantities following \(\mu=\alpha\mu_{1}+(1-\alpha)\mu_{2}\) and \(\rho=\alpha\rho_{1}+(1-\alpha)\rho_{2}\). Injecting the expression of \(\rho\) and \(\mu\) in Eq. 2 and noting that \(\partial_{t}\rho_{1}=\partial_{t}\rho_{2}=0\) lead to reformulate the continuity equation in terms of \(\alpha\): \[\partial_{t}\alpha+\boldsymbol{\nabla}\cdot(\alpha\mathbf{u})=0 \tag{4}\] which can also be seen as the advection equation of the volume fraction. Instead of resolving the Navier Stokes equations for each phase, this approach enables the resolution of only a single set of equations. This, though, implies the implicit assumption that the velocity field \(\mathbf{u}\) evolves continuously in space. ### Numerical methods The DNS under consideration are computed with the solver developed by the Basilisk community. Basilisk is an open source project which aims to develop efficient solvers and methods which can be adapted to a wide range of configurations (Popinet & collaborators 2016). This project is mainly led by Stephane Popinet and benefits from the contribution of all the Basilisk community. The present study largely relies on the atomisation code available on the wiki of the project (Popinet & collaborators 2016). The Navier-Stokes equations are solved for a biphasic flow with a constant surface tension using numerical schemes similar to those of Popinet (2003) and Lagree _et al._ (2011). The resolution of the equations relies on time steps limited by the Courant-Friedrichs-Lewy (CFL) condition, the advection scheme of Bell-Collela-Glaz (Bell _et al._ 1989) and an implicit solver for the viscosity. The gas-liquid interface is tracked with a Volume-Of-Fluid (VOF) scheme which is geometric, conservative and non-diffusive (Lopez-Herrera _et al._ 2015). Regarding the surface tension, the interfacial force is calculated as \(\mathbf{T}_{\sigma}=\sigma\kappa\mathbf{n}\delta_{s}\), where \(\kappa\) is the interface curvature, \(\delta_{s}\) is the interface Dirac function and \(\mathbf{n}\) is the normal vector to the interface pointing outward. Considering the Continuum-Surface-Force (CSF) method and the Peskin immersed boundary method, the interfacial force can be approximated by \(\mathbf{T}_{\sigma}=\sigma\kappa\nabla\alpha\) where \(\kappa\) is computed by the use of a height function (Abu-Al-Saud _et al._, 2018). A projection method is used to compute the centered pressure gradient and the acceleration field. 
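As a deliberately simplified illustration of the one-fluid bookkeeping introduced above (mixture properties built from \(\alpha\), and \(\alpha\) advected with the flow), a one-dimensional sketch is given below. The first-order upwind update is shown for illustration only: it is diffusive, unlike the geometric, non-diffusive VOF scheme used by the solver, and all numerical values are placeholders.

```python
import numpy as np

# 1D illustrative update of the volume fraction and one-fluid properties.
nx, dx, dt = 200, 1.0e-3, 1.0e-4
rho1, rho2 = 1000.0, 12.0          # illustrative dense/light phase densities
mu1, mu2 = 1.0e-3, 1.8e-5          # illustrative viscosities
u = np.full(nx, 2.0)               # uniform advecting velocity (m/s), CFL = 0.2

alpha = np.zeros(nx)
alpha[:50] = 1.0                   # liquid occupies the first quarter of the line

def step(alpha, u, dt, dx):
    """First-order upwind update of d(alpha)/dt + d(u alpha)/dx = 0 for u > 0."""
    flux = u * alpha
    new = alpha.copy()
    new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return np.clip(new, 0.0, 1.0)

for _ in range(100):
    alpha = step(alpha, u, dt, dx)

rho = alpha * rho1 + (1.0 - alpha) * rho2   # one-fluid density
mu = alpha * mu1 + (1.0 - alpha) * mu2      # one-fluid viscosity
print(alpha[45:75].round(3), rho[45:75].round(1))
```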
The VOF scheme is combined with an octree adaptive grid (Agbaglah _et al._, 2011) while the grid adaptation algorithm relies on a wavelet estimated discretization error, described by Popinet (2015) and used for atmospheric boundary layer simulations by van Hooft _et al._ (2018). Such grids present the advantage of finely resolving the gas-liquid interface while having a coarser resolution away from the interfaces, and thus decreasing the time needed for computing the DNS. Finally, the droplet detection is achieved by a tag function which associates a different tag to each neighbourhood of connected cells respecting a threshold condition on the fraction field, set to \(\alpha>1\times 10^{-3}\) in our DNS. ### Physical configuration and parameters The domain is a cubic box of dimension \(L_{x}\). A liquid round jet is injected into a quiescent gas at a mean velocity \(U_{inj}\), directed along the \(x\)-axis, through a disc of diameter \(d_{n}\) and length \(l_{x}=d_{n}/3.2\). The latter disc is called nozzle in the following. The injection condition is set on the disc face located at \(x=0\) while a free stream condition is imposed at the location \(x=L_{x}\). In addition, a Neumann condition on the normal velocity is imposed on the lateral faces. A sinusoidal perturbation is superimposed on the injection velocity in order to accelerate the development of the Kelvin Helmholtz instability on the interface. The perturbation has an amplitude \(A\) and a frequency \(f\) such that the injection velocity follows \(u_{inj}=U_{inj}\big{(}1+A\sin(2\pi ft)\big{)}\). Finally, the advection timescale is defined by \(T_{a}=d_{n}/U_{inj}\). One aim of this study is to draw comparisons with the experiments of Felis _et al._ (2020). First and foremost, the turbulent property of the experimental inlet velocity profile is let aside and the numerical injection profile is set as laminar. Real world parameter values cannot be picked because the current computational resources do not allow to compute such configurations. For instance, the numerical constraints prohibit large values for the density ratio, \(\rho_{1}/\rho_{2}<200\), the Reynolds number, \(\max(Re)=O(10^{4})\) and the surface tension, \(\sigma=O(10^{-5})\)N/m. Those constraints are denoted \(\mathcal{C}_{0}\), \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Even if the real world values are unreachable, a specific attention can be set on reproducing configurations with dimensionless numbers close to the experimental ones. The latter study carried out DTV and LDV measurements on a water jet lying in the second wind-induced fragmentation regime (Lefebvre & McDonell, 2017). This regime is characterised by sharp limits on the gas Weber number: \(13<We_{2}<40.3\). The atomisation regime is also defined on the basis of the Weber number, \(We_{2}>40.3\). The first priority is thus to make the DNS Weber numbers evolve over this range of values, which defines a third constraint, \(\mathcal{C}_{3}\). In order to reproduce similar deformation regimes undergone by the droplets, considering the Ohnesorge number is relevant. Experimentally, \(Oh_{1}=3.4\times 10^{-3}\), reproducing the same order of values makes a fourth constraint \(\mathcal{C}_{4}\). Having a density ratio of \(O(10^{3})\), as for water injection in air, is impossible. Conserving the experimental viscosity ratio \(\nu_{2}/\nu_{1}=15\) could be interesting but it would slow down the fragmentation process, which goes against the optimisation of computer resources. 
One could then have a look at the conservation of the ratio \(\gamma=\mu_{1}/\mu_{2}=(\rho_{1}\nu_{1})/(\rho_{2}\nu_{2})\), where it is worth noting that the quantity \(\rho_{i}\nu_{i}d_{n}\) is homogeneous to a mass flow rate \(\dagger\). Furthermore, \(\gamma\) rewrites as \(\rho_{1}\rho_{2}^{-1}/(v_{2}\nu_{1}^{-1})=We_{1}We_{2}^{-1}/(Re_{1}Re_{2}^{-1} )=We_{1}Re_{1}^{-1}/(We_{2}Re_{2}^{-1})=Ca_{1}/Ca_{2}\), where \(Ca_{i}\) is the Capillary number of the phase \(i\). Experimentally, \(\gamma\) equals 55 and can be seen such that the mass flow rate of the liquid phase is 55 times higher than the mass flow rate of the gas phase, or equivalently \(Ca_{1}=55Ca_{2}\) Respecting this ratio makes a fifth constraint \(\mathcal{C}_{5}\). The list of constraints necessary to produce configurations close to the experiments is thus: \[\left\{\begin{array}{ll}\mathcal{C}_{0}:&\rho_{1}/\rho_{2}<200\\ \mathcal{C}_{1}:&\max(Re)=O(10^{4})\\ \mathcal{C}_{2}:&\sigma=O(10^{-5})\ \mathrm{N/m}\end{array}\right.\quad\&\quad \left\{\begin{array}{ll}\mathcal{C}_{3}:&We_{2}\in[13,40.3]\ \mathrm{or}\ We_{2}>40.3\\ \mathcal{C}_{4}:&Oh_{1}=O(10^{-3})\\ \mathcal{C}_{5}:&\rho_{1}\nu_{1}/\rho_{2}\nu_{2}=55\end{array}\right. \tag{5}\] which let the parameters \(\rho_{1}\), \(\rho_{2}\), \(\nu_{1}\), \(\sigma\), \(U_{inj}\) and \(d_{n}\) free to choose. In order to keep a constant geometry between different DNS, the nozzle diameter is set as constant and only the injection velocity varies to cover the range of Weber and Reynolds of interest. Table 1 gives the values chosen for the parameters along with the corresponding Ohnesorge number. Table 2 lists the chosen injection velocities and the corresponding gaseous Weber and liquid Reynolds numbers. Note that the Ohnesorge number is constant over all the configurations. Thus, for all the DNS, the critical breakup Weber number for a given droplet size is constant (Hinze 1955). Additionally, the breakup regimes in the secondary atomisation are defined on the same range of Weber numbers (Faeth _et al._ 1995) for the 10 DNS. _In fine_, the breakup regimes of the droplets are set and identical for any pair \((Re_{1},We_{1})\) and the DNS explore different breakup regimes of the jet by ranging from low to moderate \(Re\) and \(We\) numbers. ### Most unstable mode for triggering the jet fragmentation In order to trigger the jet fragmentation the earliest and save computational resources, it is worth destabilizing the jet interface. Following the work of Yang (1992) on the growth of waves in round jets, it is possible to characterise the most unstable axisymmetric mode. In this work, the author studied the stability of an infinitesimal perturbation on the surface of a round jet of radius \(a\) and derived the expression of the nondimensional temporal growth rate for the \(m\)-th transversal mode, \((\alpha_{r}^{*})_{m}^{2}\). This derivation is recalled in appendix A. Figure 1(a) gives the evolution of this nondimensional growth rate for an axisymmetric perturbation, selected with \(m=0\), \(U_{inj}=3.0\mathrm{m/s}\) and a zero gas velocity injection. The wavelength \(ka\) of the most unstable mode is such that \((\alpha_{r}^{*})_{m}^{2}\) is maximum. 
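The forcing is then fully specified by \(U_{inj}\), \(A\) and \(f\). As a consistency check on the values reported in Table 2, the sketch below recomputes \(St=f\,d_{n}/U_{inj}\) for a few entries; since Table 1 is not reproduced in this excerpt, the nozzle diameter is back-computed from \(d_{n}/\Delta_{min}=146.8\) and \(\Delta_{min}=30.5\ \mu\)m given further below, and the forcing amplitude \(A\) is an arbitrary illustrative value.

```python
import numpy as np

d_n = 146.8 * 30.5e-6       # nozzle diameter back-computed from d_n / Delta_min
A = 0.05                    # illustrative forcing amplitude (not listed here)

def inlet_velocity(t, u_inj, f, amp=A):
    """Perturbed inlet velocity u_inj * (1 + A sin(2 pi f t))."""
    return u_inj * (1.0 + amp * np.sin(2.0 * np.pi * f * t))

# (U_inj [m/s], f [kHz]) pairs taken from Table 2; the resulting Strouhal
# numbers St = f d_n / U_inj should reproduce the tabulated values.
table2 = [(1.357, 0.340), (2.216, 0.901), (3.0, 1.666), (4.5, 3.730)]
for u_inj, f_khz in table2:
    st = f_khz * 1.0e3 * d_n / u_inj
    print(f"U_inj={u_inj:5.3f} m/s  f={f_khz:5.3f} kHz  St={st:4.2f}")
```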
The mode \begin{table} \begin{tabular}{r c c c c c|c c c c c} \hline \hline DNS & \(U_{inj}\) & \(We_{2}\) & \(Re_{1}\) & \(f\) & \(St\) & DNS & \(U_{inj}\) & \(We_{2}\) & \(Re_{1}\) & \(f\) & \(St\) \\ & (m/s) & & & (kHz) & & & (m/s) & & & (kHz) & \\ 1 & 1.357 & 15 & 6079 & 0.340 & 1.12 & 6 & 2.216 & 40 & 9928 & 0.901 & 1.82 \\ 2 & 1.567 & 20 & 7020 & 0.454 & 1.30 & 7 & 3.0 & 73.3 & 13440 & 1.666 & 2.49 \\ 3 & 1.787 & 26 & 8004 & 0.587 & 1.47 & 8 & 3.5 & 99.8 & 15680 & 2.272 & 2.91 \\ 4 & 1.919 & 30 & 8598 & 0.676 & 1.58 & 9 & 4.0 & 130.3 & 17920 & 2.972 & 3.33 \\ 5 & 2.073 & 35 & 9287 & 0.795 & 1.72 & 10 & 4.5 & 165 & 20160 & 3.730 & 3.71 \\ \hline \hline \end{tabular} \end{table} Table 2: Injection velocities and corresponding gas Weber and liquid Reynolds numbers along with the frequency \(f\) of the most unstable mode and the corresponding forcing Strouhal number \(St\). pulsation \(\omega\) is then given by the imaginary part of the growth rate. During the computation, \(a\) was set to \(d_{n}\) instead of \(d_{n}/2\). The difference between the pulsation of the mode computed for \(a=d_{n}\) and \(a=d_{n}/2\) is of \(O(1\mathrm{rad}/\mathrm{s})\) while the pulsations are of \(O(10^{3}\mathrm{rad}/\mathrm{s})\). The relative difference is thus of \(O(10^{-3})\), which seems acceptable to the authors. Table 2 lists the most unstable mode frequencies \(f\) for each configuration defined in section 2.3 as well as the Strouhal number based on the forcing frequency, \(St=f\times d_{n}/U_{inj}\). ### Numerical configuration and computational cost The refinement level is set to 12 and the minimum cell size in an adaptive grid is given by \(\Delta_{min}=L_{x}/2^{12}\). Hence the minimum cell size is \(\Delta_{min}=30.5\ \mu m\) and \(d_{n}/\Delta_{min}=146.8\). The time step is set by the CFL condition, \(|u_{max}|\Delta t/\Delta_{min}<C\), where the Courant number is initially set to 0.8. Running the 10 DNS summed a total of 511 896 scalar hours of computation. Each of the DNS 3 to 10 ran for 60 480 h while DNS 1 and 2 respectively ran for 12 600 h and 15 456 h. The computational performances can be tracked by checking the total number of cells used for each DNS, \(\mathcal{C}_{tot}\), the mean numerical velocity, \(\overline{\mathcal{V}_{num}}\), the maximal physical time reached by the simulations, \(t_{max}/T_{a}\), the maximum jet elongation, \(L_{j,max}/d_{n}\) and the total number of detected droplets, \(N_{tot}\). The latter three are summarized in table 3 while the other performances parameters are given in appendix B. All the DNS are split into 3 runs and were computed in parallel on the Occigen HPC (CINES, France), typically on 840 cores. An example of atomisation produced at \(We_{2}=165\) (DNS 10) is given in figure 1(b). ## 3 Overall flow characteristics and droplet statistics This section characterises the Turbulent Kinetic Energy (TKE) in the domain, has a glance on the jet interface and introduces the statistics and PDFs of the droplet population. In the following, the evolution of several variables relatively to \(t/T_{a}\) is analysed. Figure 2: (a) Evolution of the nondimensional temporal growth rate \((\alpha_{r}^{*})_{0}^{2}\) for \(U_{inj}=3.0\ \mathrm{m}/\mathrm{s}\). (b) Atomisation and adaptive grid for \(We_{2}=165\) (DNS 10) at \(t/T_{a}=16.2\). If the liquid core motion was the one of a solid cylinder, then the jet length would theoretically be \(L_{j,theo}=d_{n}\times t/T_{a}\), i.e. \(L_{j,theo}/d_{n}=t/T_{a}\). 
However, a lag of the jet tip relatively to this theoretical position is observed. In order to link \(t/T_{a}\) and the actual jet length \(L_{j}\), figure 3 gives the temporal evolution of \(L_{j}/L_{j,theo}\). Here, \(L_{j}\) is defined as the 99% quartile of the axial positions of the interface cells, \(\alpha\in]0,1[\), and not the maximum position. Doing so enables to exclude liquid cells which would exist on the upstream face of the jet tip as well as to smooth out the effect of the grid refinement. Thus, the length of the jet equals in average 85% of the theoretical length, \(L_{j}/d_{n}\approx 0.85\times t/T_{a}\), and the velocity of the jet front equals 0.85 \(U_{inj}\). Note as well that \(t/T_{a}=33\) corresponds to the instant when the jet exits the computational domain. ### Turbulent kinetic energy One aim of this study is to draw conclusions on the statistics of the droplet population. To ensure converged statistics, the flow needs to reach a statistically steady state. Looking at the turbulent kinetic energy \(k_{i}\) enables to conclude on this, primarily the one of the gas phase. As shown in Table 3, the jet extension observed for \(We_{2}\in[73,165]\) (DNS 7 to 10) is smaller than the length \(L_{x}\) of the domain. Thus, a statistically steady state at the scale of the domain Figure 3: Color on-line.Temporal evolution of the jet length \(L_{j}\) compared to the theoretical jet length \(L_{j,theo}\) for the 10 DNS. The black dashed line represents the mean value \(L_{jet,max}/L_{jet,theo}=0.853\) averaged over \(t/T_{a}\in[0,33]\). The blue colours denote the DNS in the second wind-induced regime and the red colours the DNS in the atomisation regime. \begin{table} \begin{tabular}{c c c c c|c c c c c} \hline DNS & \(We_{2}\) & \(t_{max}/T_{a}\) & \(L_{j,max}/d_{n}\) & \(N_{tot}\) & DNS & \(We_{2}\) & \(t_{max}/T_{a}\) & \(L_{j,max}/d_{n}\) & \(N_{tot}\) \\ 1 & 15 & 34 & 28 & 70 & 6 & 40 & 34 & 28 & 3545 \\ 2 & 20 & 34 & 28 & 459 & 7 & 73.3 & 24.2 & 21.5 & 9725 \\ 3 & 26 & 34 & 28 & 1949 & 8 & 99.8 & 19.75 & 17.0 & 18478 \\ 4 & 30 & 34 & 28 & 2182 & 9 & 130.3 & 18.5 & 14.4 & 32922 \\ 5 & 35 & 34 & 28 & 2448 & 10 & 165 & 16.25 & 14.2 & 45046 \\ \end{tabular} \end{table} Table 3: Maximum normalised time \(t_{max}/T_{a}\) along with the corresponding normalised maximum jet elongation \(L_{j,max}/d_{n}\) and total number of droplets \(N_{tot}\). cannot be achieved. Even so, it is possible to slice the domain in different sections along the \(x\)-axis and conclude on the flow steadiness in each section. The domain of length \(L_{x}\) is sliced in 5 sections along the \(x\)-axis. The fifth section represents the outlet side of the domain and its length is set to \(d_{n}\). The rest of the domain is evenly sliced with a slice thickness equal to \((L_{x}-d_{n})/4\). The sections are denoted from 1 to 5, going from the nozzle to the outlet face. The turbulent kinetic energy is computed for both the gas and the liquid following \(k_{i}=\frac{1}{2}\int_{V_{s}}({u^{{}^{\prime}}_{x,i}}+{u^{{}^{\prime}}_{y,i}}+ {u^{{}^{\prime}}_{z,i}})dV\) with \(V_{s}\) the volume of the section under consideration. Figure 4 shows the time evolution of the TKE for \(We_{2}=40\) (DNS 6). The evolution is similar in each slice: \(k\) increases when the jet head enters the section, reaches a maximum, decreases when the head enters the following section and finally reaches a plateau. The time sampling is set with a step \(\Delta(t/T_{a})=1/4\) and smooths the fluctuations out of the plateau region. 
The increase of the \(k_{2}\) maximum and asymptote values between the slices is due to the ongoing fragmentation and the newly created droplets, which increases the agitation of the gas phase. Thus, once the jet head fully exits a section, the flow reaches a statistically steady state. One could expect that the TKE around the jet head, measured from a Lagrangian point of view, would reach an asymptote as well and, thus, a statistically steady state. ### Close up on the jet interface This section explores qualitatively the interface of the jet in two regions of interest: close to the nozzle where the unstable mode develops and around the tip of the jet where the front extends and fragments. #### 3.2.1 Development of the unstable mode In order to check qualitatively the forcing implemented in the simulations and its outcome, one could have a look on the jet interface in the region of the nozzle, where the unstable mode excited by the forcing should develop. To compare the interface evolution between the different DNS, the \(x\) coordinate needs to be normalised by the characteristic length scale of the forcing, i.e. \(U_{inj}f^{-1}\). Note here that \(U_{inj}f^{-1}=St/d_{n}\), so \(x/(U_{inj}f^{-1})=(x/d_{n})St\), where \(St\) is the Strouhal number based on the forcing, given in Table 2. Furthermore, the physical times chosen for the comparison have to be in phase relatively to the sinusoidal perturbation, i.e. the physical times should be chosen such that the perturbation waves superimpose on Figure 4: Turbulent kinetic energy in the gas phase for \(We_{2}=40\) (DNS 6). each other. Figure 5 shows the jet interface sliced at \(z=0\) and for \(y/d_{n}>0\), normalised as explained. For the 10 DNS, the perturbation waves collapse well after normalising by \(U_{inj}f^{-1}\) and picking in-phase physical times. Consider first the second wind-induced regime. The jet interfaces for \(We_{2}\in\{15,20\}\) (DNS 1 and 2) are represented separately from those for \(We_{2}\in[26,40]\) (DNS 3 to 6), figure 4(a) and 4(b), to highlight the different behavior of the forcing between them, even if DNS 1 to 6 lie in the second wind-induced regime. For the DNS 1 and 2, the development of the mode leads to waves which only break in large elements in the latter, while they are attenuated in the former. Contrarily, the perturbation in the DNS 3 to 6 leads to the development of shorter waves which break into a wider droplet population. Here, the wave develops in the radial direction. While the wave extends radially, up to \(y/d_{n}\approx 0.8\), its outskirt forms a rim and the space between the liquid core and the outer rim forms a sheet. The sheet becomes thinner the more the wave extends, before fragmenting for \(x/(U_{inj}f^{-1})\in[5.5,7]\). Once the sheet has fragmented, the rim destabilizes and fragments as well. A similar wave development can be observed for \(We_{2}\in[73,130]\) (DNS 7 to 9), except that the wave extension is smaller than previously, up to \(y/d_{n}\approx 0.6\), that the wave sheet fragments earlier, for \(x/(U_{inj}f^{-1})\in[5,6]\), and that the rim fragments faster for DNS 7 or even hardly exists for DNS 8 and 9. Finally, no rim is created when \(We_{2}=130\) (DNS 9). Specific attention is required for \(We_{2}=165\) (DNS 10). Figure 4(d) indicates the presence of a wall-induced regime in the second wind-induced regime. Figure 5: Color on-line. 
Superposition of the interface sliced at \(z=0\) in the region of the nozzle in the second wind-induced regime (a,b) and the atomisation regime (c,d). The blue and red colors indicate the second wind-induced and atomisation regimes, respectively. The physical times are chosen such that the sinusoidal perturbations are in phase. As a reminder, \(x/(U_{inj}f^{-1})=(x/d_{n})St\). of interface in the liquid core, meaning that the core is populated by some volume made of the lighter phase, i.e. bubbles. The "bubbles" are generated from \(t/T_{a}\approx 10\) and are likely to originate from a numerical artefact related to the volume fraction threshold used for droplet detection. Note that bubbles also appear, but later on timewise, in the DNS 9. The presence of bubbles changes the fluid dynamics inside the core but appears to modify only slightly the interface dynamics in the time scope of the study and any perturbation would be smoothed out by considering the overall droplet population. #### 3.2.2 Development of the jet head In addition to the development of the wave perturbation, it is possible to have a glance on the head of the jet. Figure 6 presents the jet interface sliced at \(z=0\) in the region of the head of the jet for both regimes at the same physical time, \(t/T_{a}=15\). What appears at first is the difference of geometry of the front between the two regimes. In the second wind-induced regime, the front is plane while it is parabolic in the atomisation regime. This difference results from the force equilibrium between the liquid and gas phases depending on the injection velocity. In both regimes, the head extends up to \(y/d_{n}\approx 2\) and experiences piercing (data not shown here) which could be due to the Taylor Culick instability (Taylor, 1959; Culick, 1960). However, the dynamics of the head extension is quite different. In the second wind-induced regime, the head extension can produce thick ligaments able to extend over distances of the order of \(d_{n}\) while, in the atomisation regime, the ligaments fragment once they are detached from the head sheet. The difference in the resulting droplet population is qualitatively visible in figure 6 where the droplets appear to be more numerous in the atomisation regime than in the second wind-induced regime. ### Statistics of the droplet population Figure 7 presents the evolution of \(N_{tot}\), the number of droplets detected by the tag function implemented in Basilisk. First and foremost, the droplets produced for \(We_{2}\in\{15,20\}\) (DNS 1 and 2) do not exceed 1000 elements, which is not enough to draw conclusions on the statistics of those two populations. Thus, the DNS 1 and 2 are discarded in the following. All the other DNS show a total number of elements larger than 1000, which enable to carry out a statistical analysis. The two regimes distinguish from each other by the total Figure 6: Color on-line. Superposition of the interface sliced at \(z=0\) in the region of the jet head in the second wind-induced regime (a) and in the atomisation regime (b) at \(t/T_{a}=15\). number of produced droplets. The total number is of \(O(10^{3})\) in the second wind-induced regime whereas it is of \(O(10^{4})\) in the atomisation regime, reaching up to \(5\times 10^{4}\) elements for \(We_{2}=165\) (DNS 10). Even so, after rescaling by \(We_{2}^{1.8}\), the numbers of elements for \(We_{2}\in[26,165]\) (DNS 3 to 10) collapse all together and \(N_{tot}\) tends toward \(5~{}We_{2}^{1.8}\) for both regimes, excepted DNS 1 and 2. 
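As a quick consistency check of this scaling, the droplet counts of table 3 can be rescaled by \(We_{2}^{-1.8}\) with a few lines of Python; the snippet below only reuses the \((We_{2},N_{tot})\) pairs reproduced in table 3 and prints the prefactor, which should approach 5 for DNS 3 to 10 and depart from it for DNS 1 and 2.

```python
# (We_2, N_tot) pairs taken from table 3 (DNS 1 to 10).
data = {1: (15, 70), 2: (20, 459), 3: (26, 1949), 4: (30, 2182), 5: (35, 2448),
        6: (40, 3545), 7: (73.3, 9725), 8: (99.8, 18478), 9: (130.3, 32922), 10: (165, 45046)}

for dns, (we2, n_tot) in data.items():
    prefactor = n_tot / we2**1.8   # ~5 for DNS 3 to 10; much smaller for DNS 1 and 2
    print(f"DNS {dns:2d}: We_2 = {we2:6.1f}, N_tot = {n_tot:6d}, N_tot / We_2^1.8 = {prefactor:5.2f}")
```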
The transition to a steady production of droplets differs between the two regimes. In the second wind-induced regime, the total number of elements quickly increases and drops before reaching a steady rate. The observed decrease could be due to the interactions between the jet head development and the corollas induced by the mode forcing, interactions which bring the droplets back to the liquid core. All the 10 DNS are close to a steady regime with a slight departure because of the difference in the maximum physical time reached by each DNS. The temporal evolution of the mean value for the size, axial and radial velocity as well as a proposal of scaling are given in appendix D. The arithmetic mean operator and the standard deviation are respectively denoted \(\langle\cdot\rangle\) and \(\sigma\) while the skewness and the excess kurtosis are respectively denoted \(S\) and \(\kappa\). Here, we considered the excess kurtosis equal to the kurtosis subtracted by 3 such that the normal distribution has a zero excess kurtosis. Figure 8 gives the evolution along \(We_{2}\) of the four first statistical moments along with the minimum and the maximum values for the size at the time instants \(t/T_{a}=15\) and \(t/T_{a}=25\). Note that the droplet tagging function implemented in Basilisk can return droplets with a volume \(V\) smaller than \(\Delta_{min}^{3}\), the volume of the smallest grid cell. This behavior is expected and due to the cells having a volume fraction \(f\) between \(10^{-3}\) and 1 and being disconnected from any liquid neighbourhood. To ensure physical consistency regarding the grid characteristics, all the droplets with a volume smaller than or equal to the minimum cell volume are discarded, i.e. any droplets such that \(V\leq\Delta_{min}^{3}\). Assuming spherical droplets, this condition implies a minimum droplet diameter \(d_{min}=\sqrt[3]{6/\pi}\Delta_{min}\approx 37.8~{}\mu\)m. Let us consider the statistical moments of the droplet size. Details about the evolution of the moments for the axial and radial velocities are given in appendix E. Globally, both the mean and standard deviation decrease with \(We_{2}\) and are of \(O(100~{}\mu\)m). Similarly, the maximum value decreases but is one order larger, \(O(1~{}\text{mm})\). Meanwhile, the skewness and the excess kurtosis slightly increase and are respectively of \(O(1)\) and \(O(10)\). An increase in \(We_{2}\) corresponds to an increase of the inertial forces relatively to the surface tension forces. Figure 7: Total number of detected droplets, (a) unscaled and (b) scaled by \(We_{2}^{-1.8}\). The blue and red colours denote the second wind-induced and atomisation regimes. The color code denoting the DNS is the same as in Figure 3. Then, the larger \(We_{2}\), the more likely the droplets undergo fragmentation and fewer sizes are stable. Thus, the mean diameter and the maximum diameter decrease and more droplets group around the mean diameter which, as a consequence, reduces the standard deviation. A decrease of the minimum diameter should also occur, but the condition on the minimal droplet volume numerically filters out any diameter smaller than 37.8 \(\mu\)m. In parallel, the increase in the skewness value, which is positive, points out that the droplet size distribution is slightly more right-tailed with higher \(We_{2}\). This indicates that the right-hand tail, towards sizes larger than \(\langle d\rangle\), exists on a size range larger than the one on which the left-hand tail exists. 
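As a side note, the volume filtering and the four moments used here can be reproduced with standard SciPy routines. The sketch below assumes a hypothetical array of droplet volumes standing in for the output of the tagging step; only the filter threshold \(\Delta_{min}\) and the Fisher (excess) definition of the kurtosis follow the text.

```python
import numpy as np
from scipy import stats

delta_min = 30.5e-6                      # smallest cell size (m)
rng = np.random.default_rng(1)
volumes = rng.lognormal(mean=np.log(1e-12), sigma=1.0, size=5000)   # hypothetical droplet volumes (m^3)

# Discard droplets whose volume does not exceed the smallest cell volume.
volumes = volumes[volumes > delta_min**3]

# Spherical-equivalent diameter; the filter implies d >= (6/pi)^(1/3) * delta_min ~ 37.8 um.
d = (6.0 * volumes / np.pi) ** (1.0 / 3.0)

moments = {
    "mean": d.mean(),
    "std": d.std(ddof=0),
    "skewness": stats.skew(d),
    "excess kurtosis": stats.kurtosis(d, fisher=True),   # kurtosis minus 3, as in the text
}
print({k: float(v) for k, v in moments.items()})
print("d_min =", (6.0 / np.pi) ** (1.0 / 3.0) * delta_min)
```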
Finally, the positive sign and the increase of the excess kurtosis with \(We_{2}\) indicates that the distributions are leptokurtic for all the values of \(We_{2}\) and that the distribution tail increases in length relatively to the mean and the standard deviation. Equivalently, the range of rare very large sizes, relatively to the mean, is both broader with higher \(We_{2}\) and larger than the Gaussian distribution for which \(\kappa=0\). As the flow geometry remains the same for the different values of \(We_{2}\), the increase in the excess kurtosis and the skewness is due to the depletion of large sizes in the benefits of small sizes, grouped around the mean diameter. However, even if \(d_{max}\) decreases, the large values of \(S\) and \(\kappa\) show that the larger droplet sizes do not disappear completely from the flow and still exist at higher values of \(We_{2}\). Finally, the values of the skewness and the excess kurtosis of the size distribution are very large and one order of magnitude larger than those of the velocity distributions, see appendix E. This indicates a wider spanning range for the size distribution than for the two velocity distributions and justifies the use of a loglog scale to visualise the size distribution. ### Distributions of the size and the velocity Complementary to the statistical moments, it is worth looking at the distributions of the sizes and the velocities of the droplets. For the sake of clarity, the number PDF of any Figure 8: Color on-line. Evolution of \(\langle\cdot\rangle\) (red), \(\sigma\) (blue), \(S\) (green), \(\kappa\) (orange), the minimum (purple) and maximum (brown) against \(We_{2}\) for the size \(d\). The pluses (+) correspond to \(t/T_{\alpha}=15\) and the bullets (\(\bullet\)) to \(t/T_{\alpha}=25\). Note that \(S\) and \(\kappa\) are both dimensionless and that the dimensional variables are expressed with the SI base units. variable \(\zeta\) is denoted \(\mathcal{P}_{\zeta}\) in the following. Even if the mean values of the size and the velocities are not fully converged, we consider the PDF of each variable normalised by its mean. However, \(u_{y}\) being close to zero in average, normalising by \(\langle u_{y}\rangle\) is not relevant and \(u_{y}/\langle u_{x}\rangle\) is considered instead. Figure 9 gives the PDFs of \(d/\langle d\rangle\), \(u_{x}/\langle u_{x}\rangle\) and \(u_{y}/\langle u_{x}\rangle\) at the time instants \(t/T_{a}=15\), where both regimes are computed, and \(t/T_{a}=25\). First of all, it is interesting to note that the PDFs in each regime collapse for the three variables even if the mean values are not converged. From the three distributions, only that for \(u_{y}/\langle u_{x}\rangle\) shows a similar behavior between the two regimes of fragmentation, excepting for the width and the slope of the tails. For both regimes, the PDF tails scale with \(\exp(\pm a\times u_{y}/\langle u_{x}\rangle)\) where \(a\) nearly equals \(6\) in the second wind-induced regime and nearly equals \(3\) in the atomisation regime. The difference in the tail width goes along with the difference between the exponential coefficient. Indeed, the larger the coefficient is, the smaller the tail width is. This can be explained once again with the increase of the relative velocity between the injection and the gas phase, and thus the shear, when \(We_{2}\) increases. Note that, due to the flow symmetry, \(\mathcal{P}_{u_{2}/\langle u_{x}\rangle}\) follows a trend similar to that of \(u_{y}/\langle u_{x}\rangle\). 
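For reference, the normalised number PDFs used in this subsection can be assembled as follows. The droplet samples below are hypothetical stand-ins for the tagged-droplet data; the only conventions taken from the text are the normalisation by the mean and the use of \(\langle u_{x}\rangle\) for \(u_{y}\).

```python
import numpy as np

def number_pdf(values, bins=60):
    """Return bin centres and the number PDF (integral normalised to 1)."""
    hist, edges = np.histogram(values, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist

# Hypothetical droplet samples (size, axial and radial velocities).
rng = np.random.default_rng(2)
d  = rng.gamma(shape=2.0, scale=5e-5, size=4000)
ux = rng.normal(1.0, 0.5, size=4000)
uy = rng.normal(0.0, 0.3, size=4000)

# Normalise by the mean size and mean axial velocity; u_y is divided by <u_x>
# because <u_y> is close to zero.
x_d,  pdf_d  = number_pdf(d / d.mean())
x_ux, pdf_ux = number_pdf(ux / ux.mean())
x_uy, pdf_uy = number_pdf(uy / ux.mean())
```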
Regarding the size distribution, different modes appear clearly between the two fragmentation regimes. The size PDF derived from the atomisation regime shows one main mode centered on \(d/\langle d\rangle=0.5\) while the PDF for the second wind-induced regime shows 3 modes centered on \(d/\langle d\rangle=\{0.2,1,2.5\}\), denoted from 1 to 3 in figure 9(b). Even if the main mode appears to be shifted towards larger \(d/\langle d\rangle\) when \(We_{2}\) increases, it refers to the same range of physical sizes \(d\) between 47 \(\mu\)m and 58 \(\mu\)m with a mean value of 55 \(\mu\)m, considering \(We_{2}\in[26,165]\) (DNS 3 to 10). industrial jet shows a tail scaling as an exponential. Figure 10 gives the size distribution in a semi-logarithmic scale. The time instant \(t/T_{a}=20\) has been chosen over \(t/T_{a}=25\) in order to highlight the trend of the size distribution in the second wind-induced regime thanks to the distribution for \(We_{2}=73\) (DNS 7). It appears that, at both time instants, none of the size distributions follows a unique exponential decay. Figure 9: Distributions of \(d/\langle d\rangle\) (a,b), \(u_{x}/\langle u_{x}\rangle\) (c,d) and \(u_{y}/\langle u_{x}\rangle\) (e,f) at \(t/T_{a}=15\) (left) and \(t/T_{a}=25\) (right). The blue and red colours denote the second wind-induced and atomisation regimes. The colour code denoting the DNS is the same as in Figure 3. Instead, the distribution in the atomisation regime follows two exponential scalings, the first one for \(d/\langle d\rangle\in[0.5,2]\) and the second one for \(d/\langle d\rangle\in[4,8]\), with a transition region scaling as \((d/\langle d\rangle)^{-2.7}\) for \(d/\langle d\rangle\in[2,4]\). Following the analysis of an experimental spray by Vallon (2021), the existence of those two scalings could indicate that the distribution is composed of two distributions originating from different fragmentation sources. The modelling of the size PDF by theoretical distributions is achieved in section 3.5. Regarding the distribution of the axial velocity, it is often expected in jet fragmentation flows that the droplets show a positive axial velocity less than or equal to the injection velocity, as they are globally advected towards increasing \(x/d_{n}\). However, \(\mathcal{P}_{u_{x}/\langle u_{x}\rangle}\) shows large probabilities for a range of negative velocities, \(u_{x}/\langle u_{x}\rangle\in[-2,0]\), with a sharp peak at \(u_{x}/\langle u_{x}\rangle=0\). The right-hand tail exists in both regimes on a range of velocities larger than the injection velocity \(U_{inj}\). For instance, considering the droplet population for \(We_{2}=40\) (DNS 6) lying in the second wind-induced regime at \(t/T_{a}=25\) with \(\langle u_{x}\rangle=1\) m/s, we have \(\mathcal{P}(2<u_{x}/\langle u_{x}\rangle<3)>0\), meaning that there exist droplets such that \(u_{x}/U_{inj}\in[0.9,1.35]\), i.e. droplets faster than the injection velocity \(U_{inj}=2.216\) m/s. The same conclusion can be drawn for the other DNS. Surprisingly, for the atomisation regime, the tail expansions in the regions of negative velocities and velocities larger than \(U_{inj}\) follow a similar trend, scaling as \(\exp(\pm a\times u_{x}/\langle u_{x}\rangle)\) with \(a=4\). 
However, in the second wind-induced regime, the left-hand tail and the right-hand tail present two different scalings: the former scales as \((u_{x}/\langle u_{x}\rangle)^{7}\) and the latter as \((u_{x}/\langle u_{x}\rangle)^{-2.5}\). The argument of the increasing relative velocity between the injection and the gas phase, and consequently in the standard deviation, can be considered to explain the difference in the tail expansion between the two regimes. Finally, in addition to the sharp peak for zero velocities, the axial velocity PDF is centered on \(u_{x}/\langle u_{x}\rangle=1\) in the second wind-induced regime and presents a continuous decrease scaling as \(\exp(-0.7\times u_{x}/\langle u_{x}\rangle)\) in the atomisation regime. The specific characteristics of the velocity PDF, \(u_{x}<0\) and \(u_{x}\geqslant U_{inj}\), are explored in section 4.1. Figure 10: Distribution of \(d/\langle d\rangle\) in semi-logarithmic scale at \(t/T_{a}=15\) and \(t/T_{a}=20\). The blue and red colours denote the second wind-induced and atomisation regimes, respectively. The colour code denoting the DNS is the same as in Figure 3. The solid black line corresponds to a power law of coefficient \(-2.7\), as in figure 9(a) and 9(b).

### Modeling the droplet size PDF

When it comes to modeling the distribution of the droplet size, a natural candidate to test is the \(\Gamma\) law derived from the ligament-mediated fragmentation framework (Villermaux, 2020), along with its refinement exposed by Kooij _et al._ (2018), hereafter denoted \(f_{\Gamma}\) and \(f_{\Gamma B}\). While the former was specifically designed to describe the droplet size PDF resulting from the breakup of a ligament, the latter was designed to describe the size PDF resulting from the overall fragmentation of a jet. A previous study carried out by Vallon _et al._ (2021) highlighted the limits of those two distributions for modelling size PDFs far away from the nozzle, \(x/d_{n}\in[400,800]\), in the context of agricultural-like sprays, and the satisfying performance of the law derived by Novikov & Dommermuth (1997) in the framework of turbulence intermittency, denoted \(f_{\epsilon}\) in the following. More details about each law and the related framework are given in Vallon _et al._ (2021). The three theoretical laws write as follows:
\[f_{\Gamma}:\;\mathcal{P}(x=d/\langle d\rangle)=\frac{n^{n}}{\Gamma(n)}x^{n-1}e^{-nx}, \tag{1}\]
\[f_{\Gamma B}:\;\mathcal{P}(x=d/\langle d\rangle)=\frac{2(mn)^{(m+n)/2}x^{(m+n)/2-1}}{\Gamma(m)\Gamma(n)}\mathcal{K}_{m-n}(2\sqrt{nmx}), \tag{2}\]
\[f_{\epsilon}:\;\mathcal{P}(y=-\ln(l/l_{1}))=\frac{a^{3/2}}{\sqrt{2\pi}\,\sigma y^{3/2}}\exp\left\{-\frac{a}{2\sigma^{2}}\big(ay^{-1/2}-y^{1/2}\big)^{2}\right\},\;\;y\geqslant 0. \tag{3}\]
In the expression of \(f_{\Gamma}\), \(n\) represents the corrugations of a ligament before its breakup. The corrugations determine the size PDF resulting from the breakup (Villermaux _et al._, 2004). The same logic takes place in the expression of \(f_{\Gamma B}\). Additionally, the ligaments can show a large variety of sizes in the flow. This variety is taken into account by \(m\), which sets the order of the ligament size distribution (Kooij _et al._, 2018). Finally, the expression is conditioned by the modified Bessel function of the second kind \(\mathcal{K}\), whose order is set by \(m-n\). 
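Before turning to \(f_{\epsilon}\), a minimal SciPy sketch of \(f_{\Gamma}\) and \(f_{\Gamma B}\), equations (1) and (2), is given below, together with a least-squares fit including the prefactor \(C\) reported later in table 5. The binned PDF used here is synthetic and only illustrative; the actual fit procedure is the one detailed in appendix F.

```python
import numpy as np
from scipy.special import gamma as Gamma, kv
from scipy.optimize import curve_fit

def f_gamma(x, C, n):
    """Equation (1): Gamma law of the ligament-mediated framework, with prefactor C."""
    return C * n**n / Gamma(n) * x**(n - 1.0) * np.exp(-n * x)

def f_gamma_bessel(x, C, m, n):
    """Equation (2): compound law of Kooij et al. (2018); kv is the modified
    Bessel function of the second kind of order m - n."""
    return (C * 2.0 * (m * n) ** ((m + n) / 2.0) * x ** ((m + n) / 2.0 - 1.0)
            / (Gamma(m) * Gamma(n)) * kv(m - n, 2.0 * np.sqrt(m * n * x)))

# Hypothetical binned PDF of d/<d>; in practice these are the bin centres and
# PDF values obtained from the DNS droplet population.
x = np.linspace(0.05, 10.0, 200)
pdf = f_gamma_bessel(x, 0.66, 2.4, 2.4) + 0.01 * np.random.default_rng(3).random(200)

popt_g,  _ = curve_fit(f_gamma,        x, pdf, p0=[0.7, 1.0],      bounds=(1e-3, np.inf))
popt_gb, _ = curve_fit(f_gamma_bessel, x, pdf, p0=[0.7, 2.5, 2.5], bounds=(1e-3, np.inf))
print("f_Gamma  (C, n)    =", popt_g)
print("f_GammaB (C, m, n) =", popt_gb)
```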
Regarding \(f_{\epsilon}\), Novikov & Dommermuth (1997) considered a cascade mechanism and the ratio between the initial size \(l_{1}\) and the resulting size \(l\) of a fragmenting droplet, where \(a=\langle y\rangle\) and \(\sigma=\langle(y-a)^{2}\rangle\). It is worth noting that, even if \(f_{\epsilon}\) relies on the cascade concept initially derived by Richardson (1922) and used in the seminal papers of Kolmogorov (1941, 19), the infinitely divisible nature of this distribution ensures that it is at no point close to a logarithmic normal distribution resulting from the Central Limit Theorem. A systematic fit campaign is carried out to test the three distributions. Details of the procedure are given in appendix F. Figure 11 gives the best fits produced by \(f_{\Gamma}\), \(f_{\Gamma B}\) and \(f_{\epsilon}\) in both fragmentation regimes at the time instant \(t/T_{a}=15\) while Table 5 gives the corresponding final parameters and the square of the Pearson coefficient \(r^{2}\). Qualitatively, the three theoretical distributions capture well the size PDF in both regimes and describe with a good accuracy the right-hand tail on the available range of sizes. No relevant comment can be drawn about the left-hand tail as no physical droplet sizes are available in the DNS, see section 3.3, and this range was discarded in the fit procedure. The slight differences between the distributions mainly lie in the description of the main mode. In both regimes, \(f_{\epsilon}\) performs slightly better in capturing the main mode and the short left-hand tail. Additionally, the manual fit of the PDF modes for the second wind-induced regime indicates the good performance of \(f_{\Gamma}\) to describe the mode separately. Quantitatively, the values of \(r^{2}\) bring a sharp light on the performance of each theoretical distribution. For both regimes, the law exposed by Kooij _et al._ (2018) shows \(r^{2}\) values the closest to 1 with a mean computed over the two better fits equal to 1.00025. Then follows the \(\Gamma\) law and the distribution derived by Novikov & Dommermuth (1997) with mean \(r^{2}\) values respectively equal to 0.9245 and 0.878. Thus \(f_{\Gamma B}\) better describes the size PDF in the flow, close to the nozzle, while \(f_{\Gamma}\) describes each mode separately. Meanwhile, \(f_{\epsilon}\) shows a correct performance close to the nozzle, which achieves its good performance for describing multimodal size PDFs far away from the nozzle in the second wind-induced regime (Vallon _et al._, 2021). ## 4 Dynamics of the jet and the droplets: a two speed fragmentation This section brings explanations about the specific features of the PDF of the droplet axial velocity, in connection with the vortex ring theory, and about the joint distribution of the droplet size and velocity. It also analyses the distribution of the droplets in the Reynolds - Ohnesorge phase space and compares it to experiments. ### The axial velocity PDF and the jet head vortex ring The analysis of the distribution of the axial velocity of the droplets in section 3.4 highlighted the existence of droplets showing negative velocities and velocities larger than \(U_{inj}\), two infrequent features for a jet fragmentation. In order to investigate those two characteristics, it is interesting to have a glance on the spatial distribution of the droplets such that \(u_{x}/U_{inj}<0\) or \(u_{x}/U_{inj}>1\). To do so, the cylindrical coordinates \((x/d_{n},r/d_{n},\theta)\) are preferred to the Cartesian coordinates. 
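The change to cylindrical coordinates and the conditional probability maps used in the following can be sketched as below. The droplet arrays and the nozzle parameters are hypothetical placeholders; the jet axis is taken along \(x\), and the per-bin conditioning shown here is only one possible way of building such maps.

```python
import numpy as np

d_n, U_inj = 4.48e-3, 3.5            # nozzle diameter (m) and an injection velocity (m/s); placeholder values
rng = np.random.default_rng(4)
y, z = rng.normal(0, 2 * d_n, (2, 20000))       # hypothetical droplet positions
x = rng.uniform(0, 15 * d_n, 20000)
ux = rng.normal(0.4 * U_inj, 0.5 * U_inj, 20000)  # hypothetical droplet axial velocities

# Cylindrical coordinates around the jet axis (x-axis).
r = np.hypot(y, z)
theta = np.arctan2(z, y)             # azimuthal angle, used for the (r/d_n, theta) maps

# Fraction of droplets with u_x < 0 in each (x/d_n, r/d_n) bin.
bins = (np.linspace(0, 15, 31), np.linspace(0, 3, 13))
total, xe, re = np.histogram2d(x / d_n, r / d_n, bins=bins)
subset, _, _ = np.histogram2d((x / d_n)[ux < 0], (r / d_n)[ux < 0], bins=bins)
with np.errstate(invalid="ignore", divide="ignore"):
    prob_neg = np.where(total > 0, subset / total, 0.0)
```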
Figure 12 gives the spatial evolution in cylindrical coordinates of the probabilities Figure 11: Color on-line. Fit of \(\mathcal{P}_{d/\langle d\rangle}\) by \(f_{\Gamma}\), \(f_{\Gamma B}\) and \(f_{\epsilon}\) in the second wind-induced regime (a) and the atomisation regime (b) at \(t/T_{a}=15\). The fit procedure is carried on the data shown here and the best fit is represented over \(d/\langle d\rangle\in[10^{-2},20]\). \begin{table} \begin{tabular}{c|c c c c|c c c c|c c c} Regime & \(f_{\Gamma}\) & & & \(f_{\Gamma B}\) & & & & \(f_{\epsilon}\) & & & \\ & \(C\) & \(n\) & \(r^{2}\) & \(C\) & \(m\) & \(n\) & \(r^{2}\) & \(C\) & \(a\) & \(\sigma\) & \(r^{2}\) \\ SWI & 0.704 & 0.932 & 0.919 & 0.741 & 2.513 & 2.513 & 0.998 & 0.751 & 0.921 & 1.111 & 0.829 \\ ATO & 0.594 & 1.269 & 0.930 & 0.660 & 2.411 & 2.411 & 1.007 & 0.670 & 1.058 & 0.670 & 0.927 \\ \end{tabular} \end{table} Table 5: Final parameters and \(r^{2}\), truncated at the third decimal, for the best fits given by \(f_{\Gamma}\), \(f_{\Gamma B}\) and \(f_{\epsilon}\) at \(t/T_{a}=15\). ATO: atomisation, SWI:second wind-induced. \(C\) is a prefactor applied to each function during the fit procedure. \(\mathcal{P}(u_{x}/U_{inj}<0)\) and \(\mathcal{P}(u_{x}/U_{inj}>1)\). For each 2D graph, the probabilities are integrated on the third direction, e.g. along the \(\theta\) direction for the \((x/d_{n},r/d_{n})\) graph. Note that the liquid core starts at \(r/d_{n}=0.5\) and that the jet extends up to \(x/d_{n}\approx 12.5\). In the \((x/d_{n},r/d_{n})\) space, the droplets appear to be located in four regions. The ones being faster than \(U_{inj}\) are preferentially located next to the nozzle \((0<x/d_{n}<0.5,r/d_{n}=0.5)\) and at the backside of the jet head up to half of the head sheet extension \((10<x/d_{n}<15,0.5<r/d_{n}<1.5)\). The former are due to the jet forcing. Indeed, the forcing described in section 2.3 is sinusoidal with a mean equal to \(U_{inj}\) and some droplets issued from the corolla fragmentation can show velocities larger than \(U_{inj}\). The droplets showing negative velocities are preferentially located at the backside of the jet head from the half of the head extension up to its edge and located on a tail expanding over \(x/d_{n}\in[0,10]\) and \(r/d_{n}\in[0.5,2.5]\). The negative velocity or the velocity larger than \(U_{inj}\) of the droplets located at the downstream face of the jet head can be connected to the recirculation occurring behind it. Finally, the negative velocities along the tail towards \((x/d_{n}=0,r/d_{n}=0.5)\) can correspond to some droplets ejected from the recirculation region, with \(r/d_{n}\) increasing because of the increasing radius of the jet head in the time range \(t/T_{a}\in[0,15]\). The spatial distribution in the \((r/d_{n},\theta)\) space shows homogeneity along the \(\theta\) direction and a clear distinction between the two droplet groups along the \(r\) direction. The velocities larger than \(U_{inj}\) are concentrated in the boundary layer region, \(r/d_{n}\in[0.5,1]\), while the negative velocities spread over it, \(r/d_{n}\in[1.5,2.5]\). Now that the droplets with, at first sight, unexpected axial velocities, \(u_{x}/U_{inj}<0\) and \(u_{x}/U_{inj}>1\), are located in the recirculation region behind the jet head, assessing this recirculation would help to explain why such velocities are reached. 
Looking at the distribution of \(u_{x}/U_{inj}\) is a time saver for this purpose, as it quantifies in a straightforward manner the range of velocities relatively to \(U_{inj}\) happening in this region. \(\mathcal{P}_{u_{x}/U_{inj}}\) is given in Figure 13 and the ranges of unexpected velocities are \(u_{x}/U_{inj}\in[-0.5,0]\) and \(u_{x}/U_{inj}\in[1,1.5]\). Assuming that the recirculation observed behind the jet head behaves as a vortex ring behind a plate, it is possible to use the developments of Saffman (1992) describing the dynamics of such unsteady objects to express the velocity at the vortex core \(u_{c}\) in terms of the plate velocity \(U_{d}\), see appendix G: Figure 12: Spatial evolution of the probabilities \(\mathcal{P}(u_{x}/U_{inj}<0)\) and \(\mathcal{P}(u_{x}/U_{inj}>1)\) for \(We_{2}=99.8\) (DNS 8) at \(t/T_{a}=15\) in cylindrical coordinates \((x/d_{n},r/d_{n},\theta)\). For each 2D graph, the probabilities are integrated on the third direction. On the \((x/d_{n},r/d_{n})\), the gray bullets represent the mean jet interface and few droplets, see appendix C for the details. \[u_{c}=\left(\frac{c}{R}\right)^{-1}\frac{2}{\pi^{2}\sqrt{2/3}}\ U_{d} \tag{1}\] where \(c\) and \(R\) are respectively the core radius and the vortex radius. For a uniform core \(c/R=0.19\), for a hollow core \(c/R=0.14\), leading to \(u_{c}\) respectively equal to \(1.31\ U_{d}\) and \(1.77\ U_{d}\). Taking \(c/R=0.165\), the mean value between \(0.19\) and \(0.14\), \(u_{c}=1.504\ U_{d}\). In our flow, the jet head can be approximated as a disc behind which a vortex ring develops. In section 3, we observed that the jet head has the same velocity as \(U_{inj}\), so \(U_{d}=U_{inj}\). The velocity at the core edge then equals \(\pm 1.5\ U_{inj}\) and corresponds to the range of unexpected droplet velocities, \(u_{x}/U_{inj}\in[-0.5,0]\) and \(u_{x}/U_{inj}\in[1,1.5]\). Thus, the negative velocities and velocities larger than \(U_{inj}\) result from the vortex ring dynamics taking place at the downstream side of the jet head. ### Joint distribution of the droplet size and axial velocity The fragmentation of a jet or droplets is governed by aerodynamic and surface tension forces. Depending on the equilibrium between those, droplets of a given size and velocity result. Those two quantities influence each other comparably to a two-way coupling mechanism. Thus, looking at the joint distribution of the size and the axial velocity could bring extra information to the analysis of the marginal PDF \(\mathcal{P}_{d/(d)}\) and \(\mathcal{P}_{u_{x}/(u_{x})}\). Figure 14 gives the joint distribution of \(d/\langle d\rangle\) and \(u_{x}/\langle u_{x}\rangle\) for \(We_{2}\in\{40,130\}\) (DNS 6 and 9), respectively in the second wind-induced regime and the atomisation regime. It is possible to recover the characteristics of the marginal PDF in the joint distributions. For instance, in the joint distribution for \(We_{2}=40\) (DNS 6), three patches are noticeable along the size axis and correspond to the three modes of \(\mathcal{P}_{d/(d)}\), \(d/\langle d\rangle\in\{0.2,1,2.5\}\). In addition, the negative velocities and velocities larger than \(U_{inj}\) described in section 3.4 and explained in section 4.1 are also noticeable. Observing so is expected as the marginal PDFs are simply the integration of the joint PDF on the size or the velocity axes. 
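This integration of the joint PDF into its marginals can be illustrated with a short NumPy sketch; the samples below are hypothetical stand-ins for \(d/\langle d\rangle\) and \(u_{x}/\langle u_{x}\rangle\).

```python
import numpy as np

rng = np.random.default_rng(5)
d_norm  = rng.gamma(2.0, 0.5, 30000)       # stand-in for d/<d>
ux_norm = rng.normal(1.0, 0.6, 30000)      # stand-in for u_x/<u_x>

# Joint PDF on a regular grid, normalised so that its double integral equals 1.
H, d_edges, u_edges = np.histogram2d(d_norm, ux_norm, bins=(80, 80), density=True)
dd, du = np.diff(d_edges), np.diff(u_edges)

# Marginals obtained by integrating the joint PDF along one axis.
pdf_d  = (H * du).sum(axis=1)              # integrate over u_x/<u_x>
pdf_ux = (H.T * dd).sum(axis=1)            # integrate over d/<d>

# They match the directly binned marginal, up to floating-point rounding.
ref_d, _ = np.histogram(d_norm, bins=d_edges, density=True)
print(np.allclose(pdf_d, ref_d))
```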
In the second wind-induced regime, the negative axial velocities and the velocities larger than \(U_{inj}\) are preferentially observed for the first two size modes, while the third size mode is concentrated around \((d/\langle d\rangle=2.5,u_{x}/\langle u_{x}\rangle\approx 1.5)\) and the tail, from \(d/\langle d\rangle\approx 3\) and towards large sizes, seems to be centered on \(u_{x}/\langle u_{x}\rangle=1\). Globally, the smaller droplets appear to have, at the same time, a dispersion being large along the velocity axis and being short Figure 13: Distribution of \(u_{x}/U_{inj}\) at \(t/T_{a}=15\). The blue and red colours denote the second wind-induced and atomisation regimes. The colour code denoting the DNS is the same as in Figure 3. on the size axis. Conversely, the larger droplets show a large dispersion along the size axis and a short one along the velocity axis. This corresponds to the literature and the common behaviors of tracers and ballistic objects which are classically observed. In comparison, even if tracers and ballistic objects are visible as well, the aspect of the joint distribution in the atomisation regime is different. Once again, the negative velocities and the velocities larger than \(U_{inj}\) are preferentially observed for the smaller droplets. However, the distribution shows two tails along the size axis, one centered on \(u_{x}/\langle u_{x}\rangle\approx 0.75\) and the second one centered on \(u_{x}/\langle u_{x}\rangle\approx 2\). Thus, the smaller and larger droplets still respectively behave like tracers and ballistic objets, but the ballistic objects show two traveling velocities. Conversely to the experimental observations (Vallon _et al._, 2021), the joint distributions do not show a clear elbow shape. Also, drawing a third group of droplets showing a similar dispersion along the size and velocity axes, as in the experimental analysis, seems less manifest here. Such a group could be extrapolated from the joint distribution and the second size mode, \(d/\langle d\rangle=1\), in the second wind-induced regime and for \(d/\langle d\rangle\in[1.5,2.5]\), while the velocity spans over \(u_{x}/\langle u_{x}\rangle\in[0,2]\) in both cases. Similarly to section 4.1, it is possible to check out the spatial distribution of the droplets corresponding to the tails of the joint distribution \(\mathcal{P}_{(d/\langle d\rangle,u_{x}/\langle u_{x}\rangle)}\) for \(We_{2}=130\) (DNS 9) in the atomisation regime. Those droplets are such that \(d/\langle d\rangle>4\) and are distinguished by their axial velocity being larger or smaller than \(1.5\langle u_{x}\rangle\). Alike the PDF of the axial velocity of the droplets, the joint distribution shows the same feature for the larger droplets for all the DNS in the atomisation regime. Once again, it is more practical to express the conditions on the size and the velocity independently of the arithmetic average, but relatively to the injection conditions. Thus, the condition on the size writes as \(d/d_{n}>0.075\) and \(0.4\ U_{inj}\) is considered to be the threshold to distinguish the two tails. Figure 15 gives the spatial evolution in cylindrical coordinates of the probabilities \(\mathcal{P}(d/d_{n}>0.075,u_{x}/U_{inj}<0.4)\) and \(\mathcal{P}(d/d_{n}>0.075,u_{x}/U_{inj}>0.4)\). As for Figure 12, the liquid core starts at \(r/d_{n}=0.5\) and the jet extends up to \(x/d_{n}\approx 12.5\). The droplets from each tail appear to exist in specific regions of the space. 
The large, fast droplets, \(d/d_{n}>0.075\) and \(u_{x}/U_{inj}>0.4\), are preferentially located in the boundary layer region, \(r/d_{n}\in[0.5,1]\), from the nozzle to the jet head. The large, slow droplets, for which \(d/d_{n}>0.075\) and \(u_{x}/U_{inj}<0.4\), are located on the downstream side of the jet head and around the maximal head sheet extension. The two Figure 14: Joint distributions of the size and the axial velocity of the droplets for (a) \(We_{2}=40\) (DNS 6) in second wind-induced regime at \(t/T_{a}=30\) and for (b) \(We_{2}=130.3\) (DNS 9) in atomisation regime at \(t/T_{a}=15\). groups show some overlapping in the recirculation region. It is possible that some droplets are caught in the vortex circulation, even if they preferentially behave as ballistic objects. In the \((r/d_{n},\theta)\) space, the distributions are homogeneous along the azimuthal axis, which respects the flow symmetry, and the same distribution along the \(r/d_{n}\)-axis appears between the two groups. Thus, the two tails of the joint distribution of the size and the velocity come from the existence of two sources of fragmentation in the flow: the head sheet edge and the corollas developing from the jet forcing. ### Governing parameters at the droplet scale The joint distribution of the size and the axial velocity of the droplets gives some insights on the droplet dynamics in the flow. However, compared with the experiments, the numerical distributions show a slightly more complex trend and do not allow to characterise droplets with different behaviors on the basis of the marginal PDF characteristics. Beyond this, it would be interesting to have a glance on the flow perceived by a droplet as well as the droplet deformation resulting from the droplet-flow interaction. Without detailing the flow around each droplet down to the smallest scales, it is possible to characterise such a flow by considering its governing parameters, the particulate Reynolds number and the particulate Ohnesorge number, respectively expressed as: \[Re_{p}=\frac{|u_{p,x}-U_{g,x}|d}{v_{l}},\quad Oh_{p}=\frac{\mu_{l}}{\sqrt{\rho_ {l}\sigma d}} \tag{10}\] where \(d\), \(u_{p,x}\) and \(|u_{p,x}-U_{g,x}|\) are the particle diameter, the particle axial velocity and its relative velocity compared to \(U_{g,x}\), the \(x\) component of the gas phase velocity averaged over the domain. The particulate Reynolds number not only brings light on the balance between the inertial and viscosity forces at the scale of a droplet but it also brings information on the product of the droplet relative velocity and its diameter. By concatenating the size and the velocity of a droplet, the latter quantity could be seen as a potential of fragmentation. The higher the product \(d\cdot|u_{p,x}-U_{g,x}|\) is, the more likely the droplet will fragment in multiple elements. It also enables to distinguish the droplet-flow interactions between droplets having the same size but different relative velocities or, equivalently, having the same relative Figure 15: Spatial evolution of the probabilities \(\mathcal{P}\left(d/d_{n}>0.075,u_{x}/U_{inj}<0.4\right)\) (blue) and \(\mathcal{P}\left(d/d_{n}>0.075,u_{x}/U_{inj}>0.4\right)\) (red) for \(We_{2}=99.8\) (DNS 8) at \(t/T_{a}=15\) in cylindrical coordinates. For each 2D graph, the probabilities are integrated on the third direction. On the \((x/d_{n},r/d_{n})\) graph, the gray boundary represents the mean jet interface and few droplets, see appendix C for the details. velocity and different sizes. 
In addition, the particulate Ohnesorge number characterises the ratio between the viscosity forces and the product of the inertial and surface tension forces. This dimensionless number is usually used to characterise droplet deformation in a given flow. The larger is \(Oh_{p}\), the less deformable is the droplet. Thus, even if they give global information on the droplet-scale flow, the combinations of \(Re_{p}\) and \(Oh_{p}\) could help to characterise the droplet behaviors depending on their possible deformation and potential of fragmentation. Figure 16 gives the normalised joint volume histogram of \(Re_{p}\) and \(Oh_{p}\) of the droplet population for \(We_{2}\in\{40,165\}\) (DNS 6 and 10) respectively at \(t/T_{a}=30\) and \(t/T_{a}=15\). Note that the quantity which is plotted is the total volume of the droplets contained in a bin \((\Delta Re_{p},\Delta Oh_{p})\), denoted \(V_{bin}\), and normalised by the maximum value of \(V_{bin}\), denoted \(V_{max}\). First of all, the \(Oh_{p}\) values larger than \(Oh_{\Delta min}\) correspond to the droplets smaller than the smallest cell size \(\Delta_{min}\) and are not physically relevant. Considering the pair \((Re_{p},Oh_{p})\) reshapes drastically the droplet data. Regarding \(We_{2}=165\) (DNS 10), the two peaks present for the large sizes as well as the peaks around large velocities and negative velocities for the small sizes in figure 14 do not appear anymore in the joint volume histogram of the particulate dimensionless numbers. In addition, while the trends of the size-velocity joint distributions are significantly different between the two fragmentation regimes, the limits of the joint volume histogram appear not only to be regular but also follow similar trends between the two fragmentation regimes, as shown by the comparison of the joint histogram for \(We_{2}=40\) (DNS 6) and the edge contour for \(We_{2}=165\) (DNS 10). Regarding the histogram values, different modes appear in the joint histogram of each DNS. For the DNS 6, in the swi regime, it is possible to denote the three size modes, observed in section 3.4, denoted from 1 to 3 and separated at \(Oh_{p}\in\{1.35\times 10^{-2},2.5\times 10^{-2}\}\) by the red vertical lines. Each droplet group shows some dispersion along the \(Re_{p}\) direction, dispersion which increases when the droplet size decreases. The population thus shows three subgroups whose dynamics seems to mainly be governed by their size. From those three size subgroups, only the modes 1 and 2 remain in the joint volume histogram of the DNS 10, in the atomisation regime. The mode of large sizes, mode 3, does not exist in the atomisation regime because Figure 16: Joint volume histogram of \(Re_{p}\) and \(Oh_{p}\) for the droplet population for \(We_{2}=40\) (DNS 6) at \(t/T_{a}=30\) (a) and for \(We_{2}=165\) (DNS 10) at \(t/T_{a}=15\) (b). The Ohnesorge number corresponding to the smallest grid cell, \(Oh_{\Delta min}\), is indicated by the vertical blue line. The vertical red lines indicate the limits between the 3 modes of the size PDF for \(We_{2}=40\) (DNS 6), figure 8(b). The orange line represents the isovalue \(V_{bin}/V_{max}=0.03\) for \(We_{2}=165\) (DNS 10). the corolla issued from the forcing cannot develop nor create rim leading to the generation of such droplet sizes. 
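The normalised joint volume histogram itself is straightforward to build once the droplet diameters and velocities are available. In the sketch below, the liquid properties, the domain-averaged gas velocity and the droplet arrays are assumed placeholder values, not the exact DNS parameters; only the definitions of equation (10) and the volume weighting follow the text.

```python
import numpy as np

# Placeholder liquid properties and domain-averaged gas axial velocity (assumed values).
rho_l, mu_l, sigma = 1000.0, 1.0e-3, 0.072
nu_l = mu_l / rho_l                     # liquid kinematic viscosity, as in equation (10)
U_gx = 0.1                              # x component of the gas velocity averaged over the domain (m/s)

rng = np.random.default_rng(6)
d    = rng.gamma(2.0, 5e-5, 20000)      # hypothetical droplet diameters (m)
u_px = rng.normal(1.0, 0.8, 20000)      # hypothetical droplet axial velocities (m/s)
vol  = np.pi / 6.0 * d**3

# Particulate dimensionless numbers, equation (10).
Re_p = np.abs(u_px - U_gx) * d / nu_l
Oh_p = mu_l / np.sqrt(rho_l * sigma * d)

# Joint histogram weighted by droplet volume, on log-spaced bins, normalised by its maximum.
re_bins = np.logspace(-1, 4, 60)
oh_bins = np.logspace(-3, 0, 40)
V_bin, _, _ = np.histogram2d(Re_p, Oh_p, bins=(re_bins, oh_bins), weights=vol)
V_norm = V_bin / V_bin.max()
```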
For the latter DNS, the mode 1, existing at large \(Oh_{p}\) and indicated by the red region in figure 15(b), gains in importance and is the main size mode in the atomisation regime, existing for \(Oh_{p}\in[2.5,5]\times 10^{-2}\) and \(Re_{p}\in[30,400]\). Additionally, the dispersion of the modes for moderate and small \(Oh_{p}\) increases between the two regimes while respecting the similar outer limits, as the droplet data spread over all the space delimited by the edge contour for \(We_{2}=165\) (DNS 10). Finally, it is worth noting the absence of droplets in the region of large particulate Reynolds and small particulate Ohnesorge, \((Re_{p},Oh_{p})\in([10^{3},10^{4}],[1,7]\times 10^{-3})\), i.e. droplets whose size and axial velocity are of the order of \(d_{n}\) and \(U_{inj}\). The comparison of the edge contour for \(We_{2}=165\) (DNS 10) with the joint volume histogram for \(We_{2}=40\) (DNS 6) given by figure 16 suggests that the joint histograms follow similar borders regardless of the fragmentation regime. Figure 17 dives in a more detailed comparison of the joint histogram borders by superposing the edges for all the DNS and proposes a normalisation of the two dimensionless numbers. The edges are obtained by sampling each joint histogram along the \(Oh_{p}\) direction and keeping for each sample the maximum of the ordinates and the ordinate of the percentile at 7 %. This technique enables to discard the outlier points existing at small \(Re_{p}\). From the edges of the non normalised joint histograms, it appears clearly that the joint histograms evolve in the same phase space for both fragmentation regimes and that the borders only show a slight evolution with the gaseous Weber number \(We_{2}\). Note that the isolated points in the top left corner are the \(Re_{p}\) and \(Oh_{p}\) values corresponding to the liquid core. Those points depart from \(Re_{1}\) and \(Oh_{1}\) because the liquid core has a volume larger than that of a sphere of diameter \(d_{n}\). As a reminder, the injection dimensionless numbers are given in Table 2. In order to normalise \(Re_{p}\) and \(Oh_{p}\), one can choose the Reynolds number on the jet axis \(Re_{axis}\), computed with the nozzle diameter \(d_{n}\) and \(u_{x,axis}\) the jet velocity on the \(x\)-axis, and the injection Ohnesorge number \(Oh_{1}\) computed with \(d_{n}\). In our simulation, the velocity of the jet along the \(x\)-axis does not show any diminution, thus \(u_{x,axis}=U_{inj}\) and \(Re_{axis}=Re_{1}\). Making the distinction between the injection velocity and the jet velocity on the axis might appear auxiliary here but is relevant for comparing the numerical data with experiments. Figure 17: Color on-line. Edges of (a) the joint volume histograms and of (b) the normalised joint volume histograms at \(t/T_{a}=15\). The blue and red colours denote the second wind-induced and atomisation regimes. The colour code denoting the DNS is the same as in Figure 3. Once normalised, all the upper and lower borders of the joint histograms collapse. Not only those borders collapse but also show a power law dependency. For \(Re_{p}/Re_{axis}\geqslant 10^{-2}\), the former scales such that \(Re_{p}/Re_{axis}=1.35(Oh_{p}/Oh_{1})^{-2}\) and the latter scales such that \(Re_{p}/Re_{axis}=0.37(Oh_{p}/Oh_{1})^{-3}\). 
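The edge-extraction procedure described just below (keeping, in each \(Oh_{p}\) bin, the maximum of \(Re_{p}\) and its 7% percentile) can be sketched as follows; the normalised arrays fed to the function are hypothetical stand-ins for \(Oh_{p}/Oh_{1}\) and \(Re_{p}/Re_{axis}\).

```python
import numpy as np

def histogram_edges(oh, re, n_bins=40, low_pct=7.0):
    """Upper/lower edges of the (Oh_p, Re_p) cloud: for each Oh_p bin, keep the
    maximum ordinate and the ordinate of the given percentile (7% in the text)."""
    bins = np.logspace(np.log10(oh.min()), np.log10(oh.max()), n_bins + 1)
    idx = np.digitize(oh, bins) - 1
    centres, upper, lower = [], [], []
    for b in range(n_bins):
        re_b = re[idx == b]
        if re_b.size == 0:
            continue
        centres.append(np.sqrt(bins[b] * bins[b + 1]))   # geometric bin centre
        upper.append(re_b.max())
        lower.append(np.percentile(re_b, low_pct))
    return np.array(centres), np.array(upper), np.array(lower)

rng = np.random.default_rng(7)
oh_norm = 10 ** rng.uniform(0, 1, 10000)                         # stand-in for Oh_p/Oh_1
re_norm = 1.35 * oh_norm**-2 * rng.uniform(0.3, 1.0, 10000)      # stand-in for Re_p/Re_axis
oh_c, re_up, re_low = histogram_edges(oh_norm, re_norm)
```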
The collapse however does not hold for the borders on the range \(Oh_{p}/Oh_{1}\in[4,10]\) and \(Re_{p}/Re_{axis}\leqslant 10^{-2}\), which could simply result from the difference in the relative velocity and the extreme values along the \(Re_{p}\)-axis. Additionally, not only the isolated points corresponding to the liquid core collapse, but also they lie within the space delimited by the two power laws. Thus, a spray developing after the pinching region, in the same configuration, could show a phase space delimited by the same power laws and spreading from the liquid core towards the smallest droplets. Finally, as all the contours collapse in figure 16(b), it is possible to add that this cascade in the phase space \((Re_{p}/Re_{axis},Oh_{p}/Oh_{1})\) is independent of the gaseous Weber number \(We_{2}\). Overall, the comparison of the joint volume histogram in the second wind-induced regime, \(We_{2}=40\) (DNS 6), and in the atomisation regime, \(We_{2}=165\) (DNS 10), indicates that the dominant modes of the droplet population evolve with \(We_{2}\). Particularly, the mode for small \(Oh_{p}\), i.e. large droplet sizes, does not exist in the atomisation regime. Also, the atomisation regime presents a larger dispersion in \(Oh_{p}\) and \(Re_{p}\). This is expected as the increase in \(We_{2}\) creates aerodynamic conditions in which large droplets are very unlikely to survive, or even be generated, and the increase of the relative velocity between the gas and the liquid induces an increase of the deviation of the size and the axial velocity, see section 3.4. This analysis highlights the possibility to reshape the size and velocity data of the droplets into a regular shape, even if the size-velocity joint distribution shows irregular boundaries and infrequent features. Additionally, it indicates that the joint histogram values of the particulate dimensionless numbers evolve with \(We_{2}\) while respecting outer borders which are largely independent of \(We_{2}\). Normalising the particulate dimensionless numbers by the injection dimensionless numbers shows that the droplets exist over a steady phase space, delimited by power and exponential laws. Finally, such joint volume histogram opens the way for qualifying the different flow regimes undergone by the droplets and the consequent fragmentation mechanisms. ### Droplet phase space: simulations and experiments Vallon _et al._ (2021) proposed, among others, a detailed analysis of the experimental joint distribution of the size and axial velocity of the droplets in the case of a water jet injected into quiescent air at \(We_{2}=24\) and lying in the second wind-induced regime. The experimental apparatus used to perform simultaneous measurements of the size and the velocity of the droplets is detailed by Felis _et al._ (2020). The originality of that experimental campaign lies in the simultaneity of the DTV size-velocity measurements and the distance where they were carried out: from 400 to 800 nozzle diameters along the jet axis. Complementarily, and following the insights of section 4.3, it is possible to look at the experimental joint volume histogram of the particulate Reynolds and Ohnesorge numbers. Figure 18 gives the joint volume histogram for \(Re_{p}/Re_{axis}\) and \(Oh_{p}/Oh_{1}\) derived from the experimental measurements of Felis _et al._ (2020). 
Note that the mean velocity of the liquid phase on the jet axis \(u_{x,axis}\) is no longer equal to the injection velocity, \(U_{inj}=35\) m/s, but has decreased by 20% at \(x/d_{n}=800\). Once again, the borders of the joint volume histogram are well-defined and can be easily modelled. The upper and lower borders split into two scalings. For \(Re_{p}/Re_{axis}\geqslant O(10^{-3})\), the borders follow a power law while they follow an exponential scaling for smaller values of \(Re_{p}/Re_{axis}\). The upper and lower borders are respectively denoted \(\mathcal{B}_{up}\) and \(\mathcal{B}_{low}\) and their scaling is such that: \[\mathcal{B}_{up}:\left\{\begin{array}{ll}\frac{Re_{p}}{Re_{axis}}=0.215\Big{(} \frac{Oh_{p}}{Oh_{1}}\Big{)}^{-2},&\forall\;Oh_{p}/Oh_{1}\in[1,10]\\ \frac{Re_{p}}{Re_{axis}}=\exp\bigg{(}-0.1\Big{(}\frac{Oh_{p}}{Oh_{1}}+45.1 \Big{)}\bigg{)}-1.90\times 10^{-3},&\forall\;Oh_{p}/Oh_{1}\in[10,20]\end{array}\right. \tag{10}\] \[\mathcal{B}_{low}:\left\{\begin{array}{ll}\frac{Re_{p}}{Re_{axis}}=0.215 \Big{(}\frac{Oh_{p}}{Oh_{1}}\Big{)}^{-2.61},&\forall\;Oh_{p}/Oh_{1}\in[1,3]\\ \frac{Re_{p}}{Re_{axis}}=\exp\bigg{(}-0.6\Big{(}\frac{Oh_{p}}{Oh_{1}}+3.65 \Big{)}\bigg{)}-6.5\times 10^{-3},&\forall\;Oh_{p}/Oh_{1}\in[3,5]\end{array}\right. \tag{11}\] Note that, for a given value of \(Oh_{p}\), the upper border describes the fastest droplets at a given size while, for a given value of \(Re_{p}\), it describes the smallest droplets at a given velocity. Thus, the upper border can be seen as the border describing the smallest and fastest droplets in a given region of the phase space, the reverse logic holds for the lower border. Additionally, two main "paths" can be distinguished in the joint histogram. The first one lies in the power law region and the second one in the exponential region, respectively denoted \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\), both of them follow a power law scaling such that: \[\mathcal{P}_{1}:\;\frac{Re_{p}}{Re_{axis}}=0.215\Big{(}\frac{Oh_ {p}}{Oh_{1}}\Big{)}^{-2.175}, \forall\;Oh_{p}/Oh_{1}\in[1,7] \tag{12}\] \[\mathcal{P}_{2}:\;\frac{Re_{p}}{Re_{axis}}=0.039\Big{(}\frac{Oh _{p}}{Oh_{1}}\Big{)}^{-2}, \forall\;Oh_{p}/Oh_{1}\in[7,14]\] Let us focus on the borders scaling as a power law. Starting from the expression of \(Oh_{p}\), it is possible to rewrite \(Re_{p}\) as \(Re_{p}=\sigma^{-1}\mu_{l}|u_{p,x}-U_{g,x}|Oh_{p}^{-2}\). Knowing that in this region \(Re_{p}=C\;Oh_{p}^{-2-\alpha}\) with \(C\in\mathbb{R}\), we then have \(|u_{p,x}-U_{g,x}|=\sigma C\;Oh_{p}^{-\alpha}\), which is equivalent Figure 18: Experimental joint volume histogram of \(Re_{p}/Re_{axis}\) and \(Oh_{p}/Oh_{1}\) at \(x/d_{n}=800\). The dash-dot lines represent the power law scalings while the solid lines represent the exponential scalings. The insert recalls the joint volume histogram without the modelled borders. to \(|u_{p,x}-U_{g,x}|\propto d^{\alpha/2}\). The droplets then show a velocity relative to the gas phase which increases with the droplet size. The coefficient \(\alpha\) necessarily lies in \(\mathbb{R}^{+}\). Indeed, a negative \(\alpha\) would mean that the relative velocity of a droplet decreases when its size increases and consequently that larger objects would be more sensitive to the gas phase flow, which goes against the observation of ballistic objects in fragmentation flows. Consequently, the borders scaling as \(Oh^{-2}\) seem to result from dynamical limits. 
Regarding the lower border scaling as a power law, we have \(\alpha\in\{0.4,0.45,0.5,0.56,0.61\}\) for \(x/d_{n}\in\{400,500,600,700,800\}\). Inferring a rule on the evolution of the upper bound of \(\alpha\) from the experimental data seems reckless. Regarding the exponential scaling of the borders, it is interesting to note the existence of an offset along \(Oh_{p}\) and \(Re_{p}\). The droplets being on the upper border preferentially have a smaller size and a larger relative velocity, while those on the lower border have a larger size and a smaller relative velocity. In order to have all the droplets lying in a stable configuration, which corresponds to the region where \(Re_{p}<O\left(10^{-3}\right)\), the difference in the droplet dynamics has to be accounted for. This is what the offsets along \(Oh_{p}\) and \(Re_{p}\) in the exponential scaling enable to do. Figure 19 compares the edge contour of the experimental and numerical joint volume histograms. When comparing the original experimental histograms and the numerical ones, it appears that the phase spaces in which the droplets evolve show ranges of existence being very similar between the experiments and the simulations. Even if an offset along the \(Re_{p}/Re_{axis}\)-axis exists, they lie in the same \(Oh_{p}/Oh_{1}\) range. By multiplying the edge contour of the experimental joint volume histogram by 3, the edges of the numerical and the experimental data collapse. Thus, we have \((Re_{p}/Re_{axis})_{num}=C\times(Re_{p}/Re_{axis})_{exp}\), where \(C\approx 3\), which leads to: \[\frac{|u_{p,x}-U_{g,x}|_{num}\ d_{num}}{|u_{p,x}-U_{g,x}|_{exp}\ d_{exp}}=C\ \frac{d_{n,num}\times u_{x,axis,num}}{d_{n,exp}\times u_{x,axis,exp}} \tag{6}\] Figure 19: Color on-line. Comparison of the borders of the joint volume histograms obtained from the DNS campaign and from the experimental data of Felis _et al._ (2020). The blue and red colours denote the second wind-induced and atomisation regimes. The color code denoting the DNS is the same as in Figure 3. The green triangles represent the experimental data and the orange bullets represent DNS with \(U_{inj}=2.216\)m/s and different density ratios, \(\rho_{1}/\rho_{2}=82.5,110,165\). In (b), the black solid line indicates the mean value of the numerical joint histogram edges for \(We_{2}\in[26,165]\) (blue and red). and results to \[|u_{p,x}-U_{g,x}|_{num}\ d_{num}\approx 1.8\ |u_{p,x}-U_{g,x}|_{exp}\ d_{exp} \tag{10}\] with \(u_{x,axis,exp}=0.8\times U_{inj,exp}=28\) m/s and \(u_{x,axis,num}=U_{inj,num}=4.5\) m/s. The numerical and experimental contours lie in the same range of \(Oh_{p}/Oh_{1}\). Thus, it can be assumed that \(d_{exp}/d_{n,exp}\approx d_{num}/d_{n,num}\) which implies: \[\frac{d_{num}}{d_{exp}}\approx 3.73,\ \ \ \ \frac{|u_{p,x}-U_{g,x}|_{num}}{|u_{p, x}-U_{g,x}|_{exp}}\approx 0.48. \tag{11}\] The experimental and numerical mean sizes are respectively \(\langle d\rangle_{exp}=95\ \mu\)m, averaged over the \(5\ x/d_{n}\) positions, and \(\langle d\rangle_{num}\approx 300\ \mu\)m, at \(t/T_{a}=34\) in the second wind-induced regime. The ratio of the means equals 3.16, thus \(\langle d\rangle_{num}/\langle d\rangle_{exp}\approx d_{num}/d_{exp}\) and verifies the previous result. Explaining why the last three ratios appear, Eqs. 10 and 11, must be made carefully. On the one hand, the way the measurements of the size and the velocity of the droplets is carried out greatly differs between the experiments and the simulations. 
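The chain of ratios leading from equation (6) to equations (10) and (11) can be checked with elementary arithmetic. The snippet below only uses the quantities quoted above (the factor \(C\approx 3\), the two axial velocities, the 1.8 factor and the mean diameters); the equality of \(d/d_{n}\) between the two data sets is the same assumption as in the text.

```python
# Quoted quantities: edge scaling factor, jet-axis velocities, and the Re_p collapse factor.
C = 3.0
u_axis_num, u_axis_exp = 4.5, 28.0       # m/s
red_factor = 1.8                         # |u - U|_num * d_num ~ 1.8 * |u - U|_exp * d_exp

# Assuming d/d_n is the same in both data sets, d_num/d_exp = d_{n,num}/d_{n,exp},
# so the velocity ratio follows directly from equation (6).
vel_ratio = C * u_axis_num / u_axis_exp   # |u - U|_num / |u - U|_exp
size_ratio = red_factor / vel_ratio       # d_num / d_exp

print(f"velocity ratio ~ {vel_ratio:.2f}  (text: 0.48)")
print(f"size ratio     ~ {size_ratio:.2f} (text: 3.73)")
print(f"mean-size ratio <d>_num/<d>_exp ~ {300/95:.2f}")
```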
In the simulations, once a droplet is detected thanks to the tag function of Basilisk, see section 2.2, its volume and velocity are computed as the volume average in 3D of the cell values contained in the droplet. In the experiments, the measurements of the droplet size and velocity are carried out with a 2D laser sheet thanks to DTV. In addition, the measurement of the mean gas phase velocity also differs. While numerically it results from the velocity average over all the cells in the gas phase, the experimental mean gas phase velocity is estimated from LDV measurements at different radial positions and then averaged along those positions. On the other hand, it is important to keep in mind that the experimental and numerical data correspond to two drastically different physical spaces. The former were measured for \(x/d_{n}=800\) and the latter for \(x/d_{n}\approx 20\). With these limits in mind, different hypotheses can be sketched. As the measurements are carried in two drastically different regions of the fragmenting jet, the three ratios could reveal some dynamics occurring at the overall jet scale, for instance the overall slowdown of the droplet population when the droplet spray moves towards larger \(x/d_{n}\). If it was the case, it could be expected that the edges of the experimental joint volume histograms would be translated towards smaller \(Re_{p}/Re_{axis}\), see Figure 19a, as they span over a distance of 400 \(d_{n}\). But, no such translation is noticeable for the experimental edges. The difference along the \(Re_{p}\)-axis between the experimental and numerical joint histograms could also raise from the difference of the density ratios considered in the simulations and experiments. Indeed, since gravity is not considered, the effects of density are accounted by the Reynolds and Ohnesorge numbers. Figure 19b gives the evolution of the joint histogram edges obtained for 3 density ratios, \(\rho_{l}/\rho_{g}\in\{82.5,110,165\}\), such that \(We_{2}=40\), see appendix H. These density ratios remain small compared to experimental values, \(O(1000)\) for water injected in air, but are already quite computationally expensive. No translation of the joint histogram edges towards smaller \(Re_{p}/Re_{axis}\) values is noticeable, thus discarding this hypothesis too. The choice of the normalisation for the Reynolds number could also be an explanation. Using the Reynolds number computed over \(d_{n}\) and the averaged velocity of the dispersed liquid phase, instead of \(u_{axis}\), could help to make the edges of the joint volume histograms collapse, both experimentally and numerically. Notwithstanding those limitations and differences, it still seems legitimate to conclude that the joint histogram edges are self similar. This conclusion only holds for the edges and not for the joint histogram values, which evolve very differently between the experiments and the simulations. Yet, the two jet flows differ in terms of fragmentation mechanisms. In the experimental flow, the bag breakup fragmentation plays an important role while it is totally absent in the numerical flows. In the experiments, the droplets undergoing bag breakup originate from the liquid core pinch and are characterised by a large size and axial velocity. As the liquid core is still developing in the simulations, the absence of such droplets is expected. 
Even if the joint histogram edges are self similar, they slightly differ at large values of the particulate Reynolds and Ohnesorge numbers, where the experimental joint histograms exhibit a well-defined tail. Besides, section 3.5 shows that ligament-mediated fragmentation describes well the droplet fragmentation in the numerical flows. Thus, it is tempting to conclude that the droplets are likely to undergo bag breakup fragmentation when \(Oh_{p}/Oh_{1}<2\) and ligament-mediated fragmentation when \(Oh_{p}/Oh_{1}\geq 2\).

## 5 Conclusion

In this work, the droplet population generated by the fragmentation of a round jet in a quiescent gas medium was studied numerically for different gaseous Weber numbers \(We_{2}\) spanning the second wind-induced regime and part of the atomisation regime. First, the statistical moments of the size, the axial velocity and the radial velocity were characterised and their evolution with \(We_{2}\) was detailed. The study of the distribution of the droplet size shows the existence of three modes in the second wind-induced regime, while only one mode exists in the atomisation regime. In addition, the size distribution shows two exponential decays connected by a transition region scaling as a power law. In this near-field (close to the nozzle) fragmentation, the size distribution is better modelled by the law derived by Kooij _et al._ (2018) in the context of ligament-mediated fragmentation than by the law derived by Novikov & Dommermuth (1997) in the framework of turbulence intermittency. This could, at first sight, arise from the difference in the fragmentation mechanisms occurring in the region close to the nozzle, studied here, and the region far away from the nozzle studied in Vallon (2021). Regarding the axial velocity distribution, in addition to elucidating the scaling of its tails, the origin of droplet velocities that are negative or larger than the injection velocity \(U_{inj}\) is explained thanks to the vortex ring theory of Saffman (1992), the vortex ring being what sustains the recirculation region on the downstream side of the jet head. The existence of a double tail along the size direction for the size-velocity joint distribution is also explained by spatially separating the droplets evolving in the boundary layer from those ejected from the jet head. The analysis was also brought down to the scale of the flow perceived by the droplets with the study of the droplet volume histogram over the phase space of the particulate Reynolds and Ohnesorge numbers. Properly scaled by the injection Ohnesorge number \(Oh_{1}\) and the Reynolds number computed on the jet axis \(Re_{axis}\), the boundaries of the joint volume histograms from the DNS collapse, thus indicating the weak dependence of the joint histogram boundaries on the gaseous Weber number. The collapse is also obtained between the numerical and the experimental joint volume histograms, with a slight correction along the Reynolds axis for the far-field experimental one. This highlights the existence of a properly bounded phase space which contains the whole droplet population as well as the jet liquid core. Advantage could be taken of this result for modelling the turbulent jet fragmentation in terms of particulate dimensionless numbers or for improving the model of mass transfer proposed by Vallon (2021).
Overall, the good agreement of the statistical properties of the droplet population with the theoretical models, as well as with the experimental data, validates the accuracy of the simulations within the numerical limitations. Further work could be done regarding the droplet dynamics and geometry. Now that the interface of the jet and the droplet population are described, it could be possible to focus on the size distribution resulting from specific fragmentation mechanisms, like the fragmentation of the rims in the second wind-induced regime. This could help to understand the origin of the 3 modes observed for the size distribution in this regime. Also, performing a Lagrangian tracking of the rims in the DNS lying in the second wind-induced regime would make it possible to verify the break-up of such toroidal ligaments and to compare the resulting size distribution with the \(\Gamma\) distribution from the ligament-mediated fragmentation theory. Finally, a statistical analysis of the ligament geometry in the atomisation regime, specifically in the DNS with the highest \(We_{2}\), could help to better describe the ligament size and corrugation distributions over the jet fragmentation.

This work was granted access to the HPC resources of CINES under the allocation 2019-A0072B11103 made by GENCI. This research received no specific grant from any funding agency, commercial or not-for-profit sectors. The authors report no conflict of interest. R. Vallon, [https://orcid.org/0000-0003-0770-787X](https://orcid.org/0000-0003-0770-787X); M. Abid, [https://orcid.org/0000-0002-0438-4182](https://orcid.org/0000-0002-0438-4182); F. Anselmet, [https://orcid.org/0000-0001-6443-7437](https://orcid.org/0000-0001-6443-7437). R.V. performed the direct numerical simulations, developed the data analysis scripts, carried out most of the data analysis and wrote the manuscript. M.A. provided scientific and technical supervision, including key insights regarding the fragmentation and vortex theories. F.A. provided scientific supervision. F.A. and M.A. contributed equally to designing the work and proofreading the manuscript. All authors contributed equally to reaching conclusions.

## Appendix A Computation of the most unstable mode

Following the work of Yang (1992) on the growth of waves in round jets, it is possible to characterise the most unstable axisymmetric mode. The author studied the stability of an infinitesimal perturbation on the surface of a round jet of radius \(a\). The configuration is the same as described in section 2.3. Additionally, the gas phase can be injected at a velocity \(U_{2}\) and the fluids are incompressible and inviscid. In this section, the injection velocity previously denoted \(U_{inj}\) is denoted \(U_{1}\). The velocity and pressure fields can be split into an averaged part and a fluctuation part: \(\mathbf{u}_{i}=\mathbf{U}_{i}+\mathbf{u}_{i}^{\prime}\) and \(p_{i}=P_{i}+p_{i}^{\prime}\), where \(i\in\{1,2\}\) respectively denotes the liquid and gaseous phases.
Injecting this decomposition into the governing equations, expressed in cylindrical coordinates \((r,\theta,z)\), and applying the divergence operator gives the pressure disturbance equation : \[\nabla^{2}p_{i}^{\prime}=0,\quad\nabla^{2}=\frac{1}{r}\frac{\partial}{ \partial r}r\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{ \partial\theta^{2}}+\frac{\partial^{2}}{\partial z^{2}} \tag{1}\] Assuming a 3D disturbance with a normalised wavelength number \(ka\) and \(m\) in the streamwise and azimuthal directions, the perturbed quantities are \(p_{i}^{\prime}=p_{i}^{\prime}(r)e^{i(kz+m\theta)+\alpha_{tg}t}\) and \(\mathbf{u}_{i}^{\prime}=\mathbf{u}_{i}^{\prime}(r)e^{i(kz+m\theta)+\alpha_{tg }t}\), where \(\alpha_{tg}\) is the temporal growth rate and \(m\) introduces the non axisymmetric variations of the disturbance. Eq. 1 then becomes: \[\left(\frac{1}{r}\frac{\partial}{\partial r}r\frac{\partial}{\partial r}- \frac{m^{2}}{r^{2}}-k^{2}\right)p_{i}(r)=0 \tag{2}\] Resolving this equation gives a solution for \(p_{i}(r)\), Eq. 3, depending on the first and second type modified Bessel functions of order m, respectively denoted \(I_{m}\) and \(K_{m}\). This solution can be used with the mass conservation equation for the linearised perturbation to derive a solution for \(\mathbf{u}_{i}^{\prime}\), Eq. 4. \[p_{i}(r)=C_{i,1}I_{m}(kr)+C_{i,2}K_{m}(kr) \tag{3}\] \[\mathbf{u}_{i}^{\prime}=-\frac{\nabla\left(p_{i}(r)e^{i(kz+m\theta)+\alpha_{tg }t}\right)}{\rho_{i}(\alpha_{tg}+ikU_{i})} \tag{4}\] where the four constants \(C_{i,1}\) and \(C_{i,2}\) have to be derived regarding the boundary conditions. The pressure is finite in the liquid at \(r=0\) and in the gas when \(r\rightarrow+\infty\), thus \(C_{1,2}=C_{2,1}=0\). Let \(\eta_{1}\) and \(\eta_{2}\) denote the perturbed displacements of the interface and \(\Delta p_{\sigma}\) the pressure jump due to the surface tension \(\sigma\). The pressure follows \(\Delta p_{\sigma}=\sigma(1/R_{1}+1/R_{2})\) with \(R_{1}\) and \(R_{2}\) the principal radii of curvature. The remaining two constants can be derived from the pressure jump, \(p_{1}-p_{2}=\Delta p_{\sigma}\), and the interface displacement, \(\eta_{1}=\eta_{2}\). The perturbed displacements satisfy: \[v_{i}=\frac{\partial\eta_{i}}{\partial t}+U_{i}\frac{\partial\eta_{i}}{ \partial x} \tag{5}\] with \(v_{i}\) the velocity component in the radial direction. By letting \(\eta=\eta_{1}=\eta_{2}=\eta_{0}e^{i(kz+m\theta)+\alpha_{tg}t}\), Yang (1992) showed that to the first order of \(\eta\), \(1/R_{1}+1/R_{2}=1/d_{n}-1/d_{n}^{2}\left[1-m^{2}-(ka)^{2}\right]\eta_{0}\). The continuity equations then become: \[C_{11}\left(I_{m}(ka)-\frac{\sigma[1-m^{2}-(ka)^{2}]}{a^{2}}\frac{I_{m}^{ \prime}(ka)}{\rho_{1}(\alpha_{tg}+ikU_{1})^{2}}\right)-C_{22}K_{m}(ka)=0 \tag{6}\] \[C_{11}\frac{I^{\prime}_{m}(ka)}{\rho_{1}(\alpha_{Ig}+ikU_{1})^{2}}-C_{22}\frac{K^ {\prime}_{m}(ka)}{\rho_{2}(\alpha_{Ig}+ikU_{2})^{2}}=0 \tag{10}\] The latter equation system admits a non-trivial solution when its determinant is zero. 
This condition gives the following dispersion equation: \[(\rho_{1m}+\rho_{2m})\alpha_{Ig}^{2}+2ik\alpha_{Ig}(\rho_{2m}U_{2}+\rho_{1m}U_ {1})-k^{2}(\rho_{2m}U_{2}^{2}+\rho_{1m}U_{1}^{2})-\frac{k\sigma}{a^{2}}[1-m^{2 }-(ka)^{2}]=0 \tag{11}\] with: \[\left\{\begin{array}{ll}\rho_{1m}&=\gamma_{m}\rho_{1}\\ \rho_{2m}&=\beta_{m}\rho_{2}\\ \gamma_{m}&=kI_{m}(ka)/I^{\prime}_{m}(ka)\\ \beta_{m}&=-kK_{m}(ka)/K^{\prime}_{m}(ka)\\ I^{\prime}_{m}(ka)&=\frac{dI_{m}(kr)}{dr}_{r=a}\\ K^{\prime}_{m}(ka)&=\frac{dK_{m}(kr)}{dr}|_{r=a}\end{array}\right. \tag{12}\] The dispersion equation, Eq. 11, is a quadratic equation in \(\alpha_{Ig}\) and the expression of the nondimensional temporal growth rate for the \(m\)-th transversal mode can be derived from it: \[(\alpha_{r}^{*})_{m}^{2}=\frac{\gamma_{m}\beta_{m}Q\cdot(ka)^{2}}{(\gamma_{m} +\beta_{m}Q)^{2}}+\frac{ka}{We}\frac{1-m^{2}-(ka)^{2}}{\gamma_{m}+\beta_{m}Q} \tag{13}\] where \((\alpha_{r}^{*})_{m}^{2}=(\alpha_{r})_{m}^{2}/[(U_{1}-U_{2})^{2}/d_{n}^{2}]\), \(We=[d_{n}(U_{1}-U_{2})\rho_{1}]/\sigma\) and \(Q=\rho_{2}/\rho_{1}\). ## Appendix B Numerical performances of the DNS The computational performances can be tracked by checking the total number of cells used for each DNS, \(\mathcal{C}_{tot}\), the mean numerical velocity, \(\overline{\mathcal{V}_{num}}\), the maximal physical time, \(t_{max}/T_{a}\), the maximum jet elongation, \(L_{j,max}/d_{n}\) and the total number of detected droplets, \(N_{tot}\). Table 6 summarizes the related numerical performances. All the DNS are split into 3 runs and were computed on the Occigen HPC (CINES, France). \begin{table} \begin{tabular}{c c c c c c} DNS & \(\mathcal{C}_{tot}\) (\(10^{6}\)) & \(\overline{\mathcal{V}_{num}}\) & \(\left(10^{6}\mathrm{cells/s}\right)\) & \(t_{max}/T_{a}\) & \(L_{j,max}/d_{n}\) & \(N_{tot}\) \\ 1 & 4.43 & 0.59 & 34 & 28 & 70 \\ 2 & 29.29 & 0.93 & 34 & 28 & 459 \\ 3 & 53.83 & 2.23 & 34 & 28 & 1949 \\ 4 & 42.15 & 1.96 & 34 & 28 & 2182 \\ 5 & 51.41 & 1.73 & 34 & 28 & 2448 \\ 6 & 34.13 & 1.92 & 34 & 28 & 3545 \\ 7 & 77.45 & 2.46 & 24.2 & 21.5 & 9725 \\ 8 & 105.6 & 2.59 & 20.0 & 17.0 & 18,478 \\ 9 & 141.7 & 2.59 & 17.5 & 14.4 & 32922 \\ 10 & 154.4 & 2.56 & 16.5 & 14.2 & 45046 \\ \end{tabular} \end{table} Table 6: Numerical performances. ## Appendix C Computation of the mean interface of the jet Computing the mean interface of the jet is not straightforward and requires some processing. To extract it, it is first necessary to compute the joint distribution of the interface points in the physical space \((x/d_{n},r/d_{n})\). Once computed, the mean interface can be extracted from the distribution by filtering out the most probable interface points. This extraction step however relies on the choice of a threshold. This threshold can be empirically chosen such that it enables to depict the mean interface while discarding most of the interface related to the droplets. Figures 19(a) and 19(b) respectively give the joint distribution of the interface points across the \((x/d_{n},r/d_{n})\) space and the interface filtered from the joint distribution with a threshold of 0.2, i.e. the interface points with a probability larger than 0.2. A way to refine the mean interface of the jet would be to consider the interface of the liquid core only, instead of considering all the interface points in the jet, i.e. the liquid core and all the droplets. The droplets would be then naturally discarded and the resulting interface would depict more precisely the mean interface around the jet head. 
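A minimal sketch of the histogram-threshold extraction described above is given below. The array names are hypothetical and the normalisation (scaling by the most probable bin) is one plausible choice, not necessarily the one used for Figure 20.

```python
import numpy as np

def mean_interface(x_over_dn, r_over_dn, bins=(400, 120), threshold=0.2):
    """Filter the most probable interface points from their joint distribution.

    x_over_dn, r_over_dn : 1D arrays of interface-point coordinates
    threshold            : keep bins whose normalised count exceeds this value
    """
    H, xe, re = np.histogram2d(x_over_dn, r_over_dn, bins=bins)
    H = H / H.max()                       # normalise so the most probable bin is 1
    keep = H > threshold                  # boolean mask of retained bins
    xc = 0.5 * (xe[:-1] + xe[1:])         # bin centres along x/d_n
    rc = 0.5 * (re[:-1] + re[1:])         # bin centres along r/d_n
    X, R = np.meshgrid(xc, rc, indexing="ij")
    return X[keep], R[keep]               # coordinates of the mean interface
```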
Even so, the method used here is satisfactory for the following analysis. ## Appendix D Temporal evolution of the mean values of the size, axial velocity and radial velocity Figure 21 gives the temporal evolution of the mean values of the size, the axial velocity and the radial velocity. Regarding the size and the axial velocity, after reaching a peak value for \(t/T_{a}\in[5,10]\), the mean values increase relatively steadily within the time scope under consideration. The time evolution of the mean of each DNS can be rescaled with \(We_{2}\). On one side, the mean size scaled by \(We_{2}^{0.6}\) seems to evolve linearly with \(t/T_{a}\). On the other side, it is possible to collapse the time evolution of the mean axial velocity for each regime by considering \(\langle u_{x}\rangle\)\(We_{2}^{-1}\) for the second wind-induced regime and \(\langle u_{x}\rangle\)\(We_{2}^{-0.3}\) for the atomisation regime. The evolution of \(u_{y}\) is specific in the sense that the flow is statistically axisymmetric and \(\langle u_{y}\rangle\) should naturally be set to zero, which is verified here asymptotically. Due to the flow symmetry, the mean of \(u_{z}\) behaves the same as the one of \(u_{y}\). The standard deviations, not shown here, reach a steady state faster than the mean values for the size and the velocities. Figure 20: Joint distribution of the interface points in the \((x/d_{n},r/d_{n})\) space for \(We_{2}=99.8\) (DNS 8) at \(t/T_{a}=15\) (a) and the interface filtered with a threshold of 0.2 (b). ## Appendix E Evolution of the velocity statistical moments with \(We_{2}\) Regarding the distribution of \(u_{x}\), all the four statistical moments increase with \(We_{2}\). The increase in the mean and standard deviation indicates that the droplets are accelerated with \(We_{2}\), which is obvious as \(U_{inj}\) increases meanwhile, and that the dispersion in terms of Figure 21: Temporal evolution of the mean \(\langle\cdot\rangle\), unscaled (left) and scaled by \(We_{2}\) (right) of the droplet size \(d\) (a,b), the axial velocity \(u_{x}\) (c,d) and the transverse velocity \(u_{y}\) (e). The units of the variables are the SI base units. velocity is larger, which also seems natural as the relative velocity between the injection and the gas phase velocity increases too. The same observation holds to explain the evolution of \(u_{x}^{max}\) and \(u_{x}^{min}\). Concurrently, the skewness is positive and increases with \(We_{2}\), thus the axial velocity distribution is right-tailed with an increasing asymmetry. Compared to the skewness of the size distribution, the skewness of the distribution of \(u_{x}\) is much smaller and the distribution should be moderately skewed. Finally, the excess kurtosis not only increases but also changes sign for \(We_{2}\in[40,70]\). The second wind-induced regime is then characterised by a negative excess kurtosis, which indicates tails being shorter and a peak being flatter than the ones of the Normal distribution. Conversely, the excess kurtosis in the atomisation regime, for the values of \(We_{2}\) under consideration, is positive, indicating larger tails and a sharper peak compared to the Normal distribution. Furthermore, the excess kurtosis is smaller than 3 and the distribution has tails shorter than the ones of the Gaussian distribution. Thus, each fragmentation regime shows a characteristic tail spanning for the distribution of \(u_{x}\). 
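For reference, the four moments discussed in this appendix can be estimated with standard estimators. The minimal sketch below assumes the per-droplet axial velocities have been gathered in a plain array; the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def moments(sample):
    """Mean, standard deviation, skewness and excess kurtosis of a sample."""
    sample = np.asarray(sample, dtype=float)
    return {
        "mean": sample.mean(),
        "std": sample.std(ddof=1),
        "skewness": stats.skew(sample, bias=False),
        # fisher=True returns the excess kurtosis (kurtosis minus 3)
        "excess_kurtosis": stats.kurtosis(sample, fisher=True, bias=False),
    }
```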
The interpretation of the evolution of the statistical moments for the distribution of \(u_{y}\) is straightforward. As discussed previously, the statistical axisymmetry of the flow enforces a zero mean value as well as a symmetric distribution of \(u_{y}\) around its mean, i.e. a zero skewness. Those two consequences of the flow symmetry are verified for each \(We_{2}\) value and highlighted by the evolution of \(u_{y}^{min}\) and \(u_{y}^{max}\). Similarly to the distribution of \(u_{x}\), the standard deviation increases with the gaseous Weber number because of the increasing relative velocity between the liquid injection and the gas phase, and thus the increasing shear. The excess kurtosis, i.e. the kurtosis minus 3, remains stable and positive. This indicates a steady behaviour and tails larger than those of the Gaussian distribution.

Figure 22: Evolution of \(\langle\cdot\rangle\) (red), \(\sigma\) (blue), \(S\) (green), \(\kappa\) (orange), the minimum (purple) and maximum (brown) against \(We_{2}\) for the axial velocity \(u_{x}\) (a) and the transversal velocity \(u_{y}\) (b). The pluses (+) correspond to \(t/T_{a}=15\) and the bullets (\(\bullet\)) to \(t/T_{a}=25\). Note that \(S\) and \(\kappa\) are both dimensionless and that the dimensional variables are expressed with the SI base units.

## Appendix F Systematic fit campaign for testing the theoretical size distributions

The systematic fit campaign carried out to test the three distributions uses the fitting algorithm of the Ezyfit toolbox developed by Moisy (2020) in Matlab. This algorithm is said to be able to capture a given signal with a reference function when the parameters are set with initial values of the same order as the final values. Thus, the space of initial values has to be explored sufficiently to ensure that the optimum set of parameter values is captured for each theoretical distribution. To do so, the fit campaign is performed in two phases. In the first phase, 23 combinations of initial values are explored in linear and logarithmic modes, i.e. fitting the signal or its logarithmic transform. In the second phase, the best fits in each fitting mode and at the two time instants \(t/T_{a}=\{15,25\}\) are selected and tested a second time in order to improve the fit quality. Even if \(\mathcal{P}_{d/\langle d\rangle}\) shows several modes in the second wind-induced regime, the fit of the size distribution is carried out for the main mode only, i.e. with only one theoretical distribution at a time. Finally, each theoretical PDF is weighted by a coefficient \(C\) which is left free in the fitting algorithm. Generally speaking, a fit shows good agreement with a given signal when the Pearson coefficient \(r\) is close to 1. One can also use \(r^{2}\) as a more discriminating criterion.

## Appendix G Vortex ring dynamics, deriving the velocity at the edge of the vortex core

Assuming that the recirculation observed behind the jet head behaves as a vortex ring behind a plate, it is possible to use the developments of Saffman (1992), which describe the dynamics of such unsteady objects.
Let us consider a disc of radius \(a\) moving at a velocity \(U_{d}\) in the direction normal to the disc surface, denoted \(x\) hereafter. A vortex ring can develop on the downstream face and the velocity potential \(\phi\) on the upstream face follows: \[\phi=\mp\frac{2U_{d}}{\pi}\sqrt{a^{2}-r^{2}},\ \ x=\pm 0,\ \ y^{2}+z^{2}=r^{2}<a^{2} \tag{1}\] If the disc dissolves, the vortex ring remains with a strength \(\kappa(r)=4U_{d}/\pi\times r/\sqrt{a^{2}-r^{2}}\) and a vorticity \(\omega=\kappa\theta\delta(x)\). The amplitudes of the hydrodynamic impulse\({}^{1}\), \(I\), in the \(x\) direction and the kinetic energy \(E\) are thus: \[I=\frac{1}{2}\int\,(\mathbf{x}\times\omega)_{x}\ dV=\frac{1}{2}\int_{0}^{a}2\pi r^{2}\kappa dr=8U_{d}a^{3}/3 \tag{2}\] \[E=\frac{1}{2}\int\,\phi\frac{\partial\phi}{\partial n}dS=4U_{d}^{2}a^{3}/3 \tag{3}\]

Footnote 1: The concept of hydrodynamic impulse has a long history in theoretical hydrodynamics, having been described by Lamb (1932). The advantage of the theory of hydrodynamic impulse is that it describes the physical origin of hydrodynamic forces and moments in terms of the vorticity generated at the body surface and its subsequent position in the fluid volume (Holloway & Jeans, 2020).

In addition, the circulation \(\Gamma\) around a circuit containing the disc, starting and ending at the disc centre, is such that: \[\Gamma=\int_{0}^{a}\kappa dr=[\phi]_{r=0}=4U_{d}a/\pi \tag{4}\] Let us denote the vortex radius and the vortex core radius \(R\) and \(c\) and assume the conservation of the ring circulation and the hydrodynamic impulse. Knowing that the hydrodynamic impulse equals \(\Gamma\pi R^{2}\) (Taylor, 1953), the combination of Eqs. 2 and 4 results in \(R=\sqrt{2/3}\,a\). Further calculations give the expression of the vortex ring velocity \(U_{vr}\) and of its energy depending on \(\Gamma\), \(R\) and \(c\): \[U_{vr}=\frac{\Gamma}{4\pi R}\left[\log\left(\frac{8R}{c}\right)-\frac{1}{2}+\int_{0}^{c}\left(\frac{\Gamma(s)}{\Gamma}\right)^{2}\frac{ds}{s}+o\left(\frac{c}{R}\right)\right] \tag{11}\] \[E=\frac{1}{2}\Gamma^{2}R\left[\log\left(\frac{8R}{c}\right)-2+\int_{0}^{c}\left(\frac{\Gamma(s)}{\Gamma}\right)^{2}\frac{ds}{s}+o\left(\frac{c}{R}\right)\right] \tag{12}\] Combining the latter two equations with Eqs. 2, 3 and 4 makes it possible to express the ratio of the vortex ring velocity \(U_{vr}\) along \(x\) to the disc velocity \(U_{d}\): \[\frac{U_{vr}}{U_{d}}=\frac{1}{4}+\frac{1}{\pi^{2}}\left(\frac{3}{2}\right)^{3/2}=0.44 \tag{13}\] The question of the velocity at the edge of the vortex core remains and is of utmost importance, as it sets the droplet motion in the recirculation region. For a uniform core, \(c/R\) equals \(0.19\) while it equals \(0.14\) in the case of a hollow core. The velocity at the core edge, denoted \(u_{c}\), can be expressed as a function of the circulation \(\Gamma\), \(u_{c}=\Gamma/(2\pi c)\). Using the expression of \(\Gamma\) given in Eq. 4 and \(R=\sqrt{2/3}\,a\), \(u_{c}\) rewrites as: \[u_{c}=\left(\frac{c}{R}\right)^{-1}\frac{2}{\pi^{2}\sqrt{2/3}}\;U_{d} \tag{14}\]

## Appendix H Large density-ratio DNS

The comparison of the edges of the experimental and numerical joint volume histograms over the space \((Oh_{p}/Oh_{1},\,Re_{p}/Re_{axis})\), given in Figure 19a, indicates a discrepancy along \(Re_{p}/Re_{axis}\). In order to figure out if this discrepancy is due to the difference in the density ratio, several DNS with larger density ratios were run. The gaseous Weber number is set to \(40\), as for the DNS \(6\).
Except for \(\rho_{2}\), the parameters given by Table 1 are kept the same. Table 7 gives the injection velocity, \(\rho_{2}\), the corresponding \(We_{2}\) and \(Re_{1}\) along with the frequency \(f\) of the most unstable mode and the forcing Strouhal number \(St\). \begin{table} \begin{tabular}{r c c c c c c} DNS & \(U_{inj}\) & \(\rho_{2}\) & \(We_{2}\) & \(Re_{1}\) & \(f\) & \(St\) \\ & (m/s) & (kg/m\({}^{3}\)) & & & (kHz) & \\ 11 & 2.714 & 1/82.5 & 40 & 12159 & 0.918 & 1.52 \\ 12 & 3.134 & 1/110 & 40 & 14040 & 0.915 & 1.31 \\ 13 & 3.838 & 1/165 & 40 & 17195 & 0.911 & 1.06 \\ \end{tabular} \end{table} Table 7: Injection velocities, gaseous density and corresponding gas Weber and liquid Reynolds numbers along with the frequency \(f\) of the most unstable mode and the corresponding forcing Strouhal number \(St\).
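The frequencies \(f\) of the most unstable mode listed in Table 7 follow from the analysis of Appendix A. As an illustration of how the dispersion relation can be scanned numerically, here is a minimal sketch of evaluating Eq. 13 for the axisymmetric mode; the values of \(We\) and \(Q\) below are placeholders rather than the parameters of a specific DNS (the actual injection parameters would be taken from Tables 1 and 7), and the conversion of the most amplified wavenumber into a physical forcing frequency is not included.

```python
import numpy as np
from scipy.special import iv, ivp, kv, kvp

def growth_rate_sq(ka, We, Q, m=0):
    """Nondimensional squared temporal growth rate (Eq. 13 of Appendix A)."""
    gamma = iv(m, ka) / ivp(m, ka)      # gamma_m = I_m(ka) / I_m'(ka)
    beta = -kv(m, ka) / kvp(m, ka)      # beta_m  = -K_m(ka) / K_m'(ka)
    kh = gamma * beta * Q * ka**2 / (gamma + beta * Q) ** 2
    cap = (ka / We) * (1.0 - m**2 - ka**2) / (gamma + beta * Q)
    return kh + cap

# Illustrative scan for the axisymmetric mode m = 0; We and Q are placeholders.
We, Q = 1000.0, 1.0 / 82.5
ka = np.linspace(0.05, 20.0, 2000)
alpha2 = growth_rate_sq(ka, We, Q)
ka_star = ka[np.argmax(alpha2)]
print(f"most amplified normalised wavenumber: ka = {ka_star:.2f}")
```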
2303.00808
Atmosphere and Greenhouse Gas Primer
We discuss how greenhouse gases affect radiation transfer in Earth's atmosphere. We explain how greenhouse gases like water vapor or carbon dioxide, differ from non-greenhouse gases like nitrogen or oxygen. Using simple thermodynamics and fluid mechanics, we show that a planet with sufficiently high concentrations of greenhouse gases must develop a convecting troposphere. The planet must also develop a non-convecting stratosphere above the tropopause. In the simplest approximation of an atmosphere that is transparent to sunlight and has frequency-independent opacity for thermal radiation, one can find simple formulas for the tropopause altitude, and for the altitude profiles of pressure and temperature. The troposphere is nearly isentropic and the stratosphere is nearly isothermal. Earth's real atmosphere is much more complicated but it does have a troposphere and a stratosphere. Between the surface and the tropopause the entropy per kilogram of real tropospheric air increases slowly with altitude. The entropy increases much more rapidly with altitude in the stratosphere. The stratosphere has a nearly isothermal lower part and a hotter upper part due to absorption of solar ultraviolet radiation by ozone. The thermal opacity of the real atmosphere has a complicated frequency dependence due to the hundreds of thousands of vibration-rotation transitions of its greenhouse molecules. Unlike the simple model where nearly all radiation to space originates at the tropopause altitude, radiation to space from Earth's real atmosphere originates from both the surface and all altitudes in the troposphere. A small additional amount of radiation originates in the stratosphere. When these complications are taken into account, model calculations of the thermal radiation spectrum at the top of the atmosphere can hardly be distinguished from satellite observations.
W. A. van Wijngaarden, W. Happer
2023-03-01T20:21:29Z
http://arxiv.org/abs/2303.00808v1
# Atmosphere and Greenhouse Gas Primer ###### Abstract We discuss the basic ways greenhouse gases affect radiation transfer in Earth's atmosphere. We explain how greenhouse gases like water vapor, H\({}_{2}\)O, or carbon dioxide, CO\({}_{2}\), differ from non-greenhouse gases like nitrogen, N\({}_{2}\), or oxygen, O\({}_{2}\). Using simple thermodynamics and fluid mechanics, we show that the atmosphere of a planet with sufficiently high concentrations of greenhouse gases must develop a convecting troposphere between the surface and the tropopause altitude. The planet must also develop a non-convecting stratosphere for altitudes above the tropopause. In the simplest approximation of an atmosphere that is transparent to sunlight and has frequency-independent opacity for thermal radiation (an infrared gray atmosphere), one can find simple formulas for the tropopause altitude, and for the altitude profiles of pressure and temperature. The troposphere is nearly isentropic and the stratosphere is nearly isothermal. The real atmosphere of the Earth is much more complicated than the simple model, but it does have a troposphere and a stratosphere. Between the surface and the tropopause the entropy per kilogram of real tropospheric air increases slowly with altitude. The entropy increases much more rapidly with altitude in the stratosphere. The stratosphere has a nearly isothermal lower part and a hotter upper part due to absorption of solar ultraviolet radiation by ozone. The thermal opacity of the real atmosphere has a complicated frequency dependence due to the hundreds of thousands of vibration-rotation transitions of its greenhouse molecules. Unlike the simple model where nearly all radiation to space originates at the tropopause altitude, radiation to space from Earth's real atmosphere originates from both the surface and all altitudes in the troposphere. A small additional amount of radiation originates in the stratosphere. When these complications are taken into account, model calculations of the thermal radiation spectrum at the top of the atmosphere can hardly be distinguished from those observed from satellites. Introduction Worldwide industrialization and the associated combustion of fossil fuels have increased the concentrations of carbon dioxide (CO\({}_{2}\)) and methane (CH\({}_{4}\)) since 1750. These gases along with nitrous oxide (N\({}_{2}\)O) and assorted lesser players like halocarbon refrigerants are examples of "greenhouse gases". It should be noted that by far the most abundant greenhouse gas in the atmosphere is water vapor. There is little that one can do about water vapor on our water planet Earth, with 70% of its surface covered by oceans. Greenhouse gases were first discovered by John Tyndall in the course of brilliant experimental work in the 1850's [1]. Tyndall recognized that greenhouse gases warm Earth's surface. Some 50 years later Svante Arrhenius made the first theoretical estimates of how much surface warming would result if atmospheric concentrations of carbon dioxide were doubled [2]. The atmosphere and oceans are so complicated that to this day no one knows what the exact warming will be. But basic physics and the geological record indicate that the warming will be small and probably good for life on Earth. The additional carbon dioxide of the past century has already benefitted agriculture, forestry and photosynthetic life in general [3, 4]. Both Tyndall and Arrhenius thought that greenhouse warming was a good thing. 
## 2 Earth's Atmosphere We begin with the basic physics of heat transfer in Earth's atmosphere. Recall that the largest part of Earth's atmosphere is a gas, a state of matter where molecules or atoms spend most of their time hurtling through free space. The molecules have occasional collisions with others. At sea level each molecule experiences a collision about once a nanosecond (a billion collisions per second). A collision sends a colliding pair of molecules careening off in different directions, like the collisions of billiard balls on a pool table. If the molecule consists of two or more atoms, like most molecules of Earth's atmosphere, some of the energy of internal vibrations and rotations can be exchanged. But the sum of the translational, rotational and vibrational energies of a colliding pair of molecules is the same before and after the collision. The total energy of the colliding pair is conserved, even though the energy of one of the colliding molecules may be bigger and that of the other smaller after the collision. If one were to follow an individual molecule over the billion or so collisions it experiences every second, it would turn out that the molecule has a well-defined probability to be found in any vibration-rotation state of quantized energy \(E_{j}\). This is an example of the famous ergodic theorem of statistical physics, and it describes a system in local thermodynamic equilibrium. The probability of finding the molecule in a state of energy \(E_{j}\) is proportional to the Boltzmann factor, \(e^{-E_{j}/k_{B}T}\). Here \(T\) is the absolute temperature of the gas, and Boltzmann's constant has the value \[k_{B}=1.38\times 10^{-23}\mbox{J/K}. \tag{1}\] Austrian Ludwig Boltzmann, for whom Boltzmann's constant \(k_{B}\) is named, and his thesis advisor, Joseph Stefan, played very important roles in our understanding of thermal radiation, statistical mechanics and thermodynamics. In addition to molecules, Earth's atmosphere also contains radiation. Sunlight is present during the day. But thermal radiation is present both day and night. Like a hot coal from a campfire, Earth glows in the dark with thermal infrared radiation. There is much more thermal radiation near the warm equator than near the colder poles or near the surface than at the cold tropopause, around 11 km in altitude for temperate latitudes. It is often convenient to think of the radiation as photons, which are a bit like air molecules, and which have quantized energies \(E=h\nu\), where \(h\) is Planck's constant and \(\nu\) is the photon frequency. Inside a dense, nighttime cloud, the radiation can be very close to thermal equilibrium with the air molecules and cloud particulates, all of which will have nearly the same temperature. Then the radiation is well described by Planck's formula for blackbody radiation, the very first formula of quantum physics. But most of the time, atmospheric radiation is far from thermodynamic equilibrium because the photon mean free paths are much longer than the length scales for atmospheric temperature changes. The energy density of radiation in Earth's atmosphere is many orders of magnitude smaller than the translational, rotational and vibrational energy densities of the molecules. Radiation has relatively little heat capacity. It is ludicrous to hear about "heat trapped" as photons. Molecules account for almost all of atmospheric heat. 
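To put this last point in rough numbers, here is a minimal back-of-the-envelope sketch. It assumes a representative near-surface state of 288 K and 1000 hPa, and it uses the blackbody value, which is an upper bound, for the radiation energy density.

```python
k_B   = 1.380649e-23      # Boltzmann constant, J/K
sigma = 5.670374e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
c     = 2.998e8           # speed of light, m/s

T = 288.0                 # assumed representative near-surface temperature, K
p = 1.0e5                 # surface pressure, Pa

n = p / (k_B * T)                       # molecular number density, m^-3
u_molecules = 2.5 * n * k_B * T         # translational plus rotational energy density, J/m^3
u_radiation = 4.0 * sigma * T**4 / c    # blackbody radiation energy density, J/m^3

print(f"molecular thermal energy density:   {u_molecules:.1e} J/m^3")
print(f"blackbody radiation energy density: {u_radiation:.1e} J/m^3")
print(f"ratio: {u_molecules / u_radiation:.1e}")
```

The ratio comes out around ten orders of magnitude, which is the sense in which radiation carries negligible heat capacity compared with the molecules.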
A substantial fraction of thermal-radiation frequencies is in the "infrared window" where there is negligible clear-sky opacity between the surface and outer space. Here, there is extremely efficient "ballistic" heat transfer by radiation. Greenhouse gases can efficiently absorb thermal radiation with frequencies in opaque spectral regions. But greenhouse gases almost never scatter thermal radiation. This is unlike radiation transport of visible light through clouds or the Rayleigh scattering of blue and ultraviolet light that makes the blue sky of a sunny day. Visible photons can be scattered hundreds of times with little absorption. Although greenhouse gases do not scatter thermal radiation, they do emit it spontaneously. So thermal radiation in opaque spectral regions is absorbed and independently reemitted as though it were being isotropically scattered. The kinetic theory of gases tells us that the diffusion coefficient \(D\) of a gas of molecules (or of a gas of photons) of mean free path \(l\) and velocity \(v\), is \(D=vl/3\). For quasi-diffusional transport of thermal radiation in opaque spectral regions, the mean free paths, \(l\), of photons between absorptions and reemissions are much longer than those of molecules. This greatly increases the heat-transport efficiency of "diffusing" photons, compared to that of diffusing molecules. Photons also move at the speed of light \(c\), about a million times faster than the fluctuating molecular speed \(v\), with an average value close to the speed of sound in air. So both \(l\) and \(v\) are orders of magnitude larger for photons than for molecules. In summary heat transport by thermal radiation in Earth's atmosphere is orders of magnitude faster than heat transfer by molecular diffusion. Heat transfer by conduction in air (that is, by molecular diffusion) is so small that it is normally irrelevant compared to heat transfer by radiation or heat transport by convection. Heat convection by moist air, which can carry lots of latent heat, as well as sensible heat, is especially important. ### Greenhouse Gases What is a greenhouse gas? It is a gas that is almost transparent to sunlight, and allows the Sun to heat the ground on a cloud-free day. But greenhouse gases have significant opacity for the thermal radiation that is constantly emitted by Earth's surface and by the warm greenhouse gases themselves. The nitrogen molecules, N\({}_{2}\), and oxygen molecules, O\({}_{2}\), which make up 99% of Earth's atmosphere, are not greenhouse gases, since they are nearly transparent to both sunlight and thermal radiation. Vibrating or rotating electric dipole moments are the most efficient molecular "antennas" for emitting or absorbing electromagnetic radiation. The electric dipole moment of a molecule is the product of the spatial separation between the center of negative electron charge and the center of positive nuclear charge, and the magnitude of the positive charge. By symmetry, the centers of both positive and negative charges of N\({}_{2}\) and O\({}_{2}\) are at the center of the molecule (and at the center of mass). There is no separation of the charge centers, so the dipole moment is zero, no matter how the molecules vibrate or rotate. The diatomic molecule carbon monoxide, CO, has much the same electronic structure as the nitrogen molecule N\({}_{2}\), but because of the lack of symmetry, the O end of the CO molecule has a negative charge and the C end has a positive charge. 
Therefore, CO is a greenhouse gas, but isoelectronic N\({}_{2}\) is not. The greenhouse effects of CO are so much smaller than those of H\({}_{2}\)O and CO\({}_{2}\) that CO gets little attention in discussions of climate. The situation with polyatomic molecules is more complicated and interesting. To illustrate the basic ideas, we will discuss the triatomic CO\({}_{2}\) molecule in some detail. Fig. 1 shows the main features of the CO\({}_{2}\) molecule, as they were sketched by Enrico Fermi in his classic paper on mode mixing and "Fermi resonances," published in the year 1931 [5]. Similar considerations apply to other polyatomic molecules like methane or nitrous oxide. The unexcited CO\({}_{2}\) molecule, with neither rotational nor vibrational energy, is linear. The C atom in the center has a slight positive charge because of the strongly electronegative O atoms on each end. The CO\({}_{2}\) molecule has no electric dipole moment because the centers of positive and negative charge both coincide with the center of symmetry. A CO\({}_{2}\) molecule can bend and vibrate after being hit by another molecule during a collision, much like a xylophone bar will vibrate after being hammered by a musician. Just as the vibrating xylophone bar emits a sound wave into the air, the vibrating CO\({}_{2}\) molecule emits thermal radiation. A bent CO\({}_{2}\) molecule has an electric dipole moment that points from the center of mass of the two, slightly negative O atoms toward the slightly positive C atom. The vibrations of this bending mode make the biggest contribution to the absorption and emission of thermal radiation by CO\({}_{2}\) molecules. As sketched in Fig. 1, the CO\({}_{2}\) molecule also has two modes of vibration that keep the straight, linear alignment of the three atoms in the unexcited molecule. In the symmetric stretch mode, the C atom remains fixed as the O atoms vibrate with equal and opposite displacements along the molecular axis. For this mode of vibration, there is no vibrating dipole moment, since the centers of positive and negative charge remain at the center of the molecule. So the symmetric stretch mode of CO\({}_{2}\) does not contribute directly to the greenhouse properties of the molecule, although it can have an indirect influence through "mode mixing." For the asymmetric stretch mode of CO\({}_{2}\), the carbon atom moves in one direction along the symmetry axis, and the two O atoms on either side move in the opposite direction. The vibration frequency of the asymmetric stretch mode is higher than the frequencies of most thermal photons, so it has little direct effect on the greenhouse properties of the molecule. However, the resulting strong absorption band, centered at a wavelength of 4.3 micrometers, is used for most non-dispersive infrared monitors that measure CO\({}_{2}\) concentrations in ambient air. Molecules in Earth's atmosphere not only vibrate but rotate. This adds rotational sidebands to the vibration-frequencies of molecules. Because of the transverse nature of electromagnetic waves, vibrating molecules do not easily emit nor absorb radiation propagating along the direction of the vibrational axis. The vibrating electric dipole produces an anisotropic radiation pattern with most radiation emitted at right angles to the axis of vibration. Rotating molecules produce a rotating radiation pattern, much like the light beam of a rotating searchlight or the radar beam from a rotating airport antenna. 
This causes the observed intensity to blink on and off at the rotation frequency. As in amplitude modulation of "carrier" frequencies of radio transmitters, upper and lower sidebands, displaced by the rotational frequency, are added to the vibrational carrier frequency of the radiation. The low-frequency sideband is called the P branch and the high-frequency side band is called the Figure 1: The three vibrational modes of the CO\({}_{2}\) from Fermi’s classic paper on mode mixing [5]. For the most abundant isotopolgue, \({}^{16}\)O \({}^{12}\)C \({}^{16}\)O, the mode frequencies are: symmetric stretch (b) \(\nu_{1}=1388\) cm\({}^{-1}\); (c) bending \(\nu_{2}=667\) cm\({}^{-1}\); and (d) asymmetric stretch, \(\nu_{3}=2349\) cm\({}^{-1}\). Only the bending mode has a large direct effect on global warming of Earth, but the other two modes have indirect effects since they have “Fermi resonances” with overtones of the bending mode. R branch. Electronic or nuclear excitations are needed to permit linear molecules like N\({}_{2}\), O\({}_{2}\), CO or CO\({}_{2}\) to rotate around their axial symmetry axis. The energies are much too high to be thermally excited in Earth's atmosphere. So the rotation axis for linear molecules is always perpendicular to the symmetry axis. This causes the radiation patterns from axial vibrations of CO or CO\({}_{2}\) to split into the lower and upper sidebands that make up the P and R branches. There is no radiation at the vibration frequency of the non-rotating molecule. A CO\({}_{2}\) molecule vibrating in the asymmetric stretch mode radiates in much the same way as a "double-sideband carrier-suppressed" radio transmitter, sometimes used by ham radio operators. Rotations of CO\({}_{2}\) molecules with bending-mode vibrations are more interesting. The rotation axis must still be perpendicular to the molecular symmetry axis. But the molecule is vibrating at right angles to the symmetry axis, so the vibration axis may be perpendicular or parallel to the rotation axis. If the rotation and vibration axes happen to be perpendicular, as is always the case for asymmetric stretch vibrations, P and R sidebands are produced by the spinning antenna pattern. But if the vibrations happen to be parallel to the rotation axis, the radiation pattern will be unaffected by rotations, and the molecule will emit radiation at the unshifted vibrational frequency. This is called the Q branch of the bending-mode band, and it is at the frequency of the bending mode. Slightly displaced sidebands are produced by centrifugal stretching of the rotating molecule, but the strongly displaced P and R branches are missing. The Q branch of the bending mode of CO\({}_{2}\) can be seen clearly in high-resolution, spectral measurements of thermal radiation from Earth's atmosphere. The vibration-rotation spectrum of nitrous oxide, N\({}_{2}\)O, another linear molecule like CO\({}_{2}\) has properties similar to those discussed above for CO\({}_{2}\). But the analog of CO\({}_{2}\)'s symmetric stretch mode for the less symmetric molecule N\({}_{2}\)O can emit and absorb radiation, unlike CO\({}_{2}\) where this mode is "infrared inactive". The tetrahedral molecule methane, CH\({}_{4}\), is somewhat more complicated but like CO\({}_{2}\) it has several vibrational modes, some of which can absorb and emit radiation, and some of which cannot. Like the CO\({}_{2}\) bending mode, the methane bands have P, Q and R branches. 
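As a toy illustration of how the P and R sidebands build up a band, one can sketch a rigid-rotor line pattern. The rotational constant below is an assumed round value for CO\({}_{2}\), line-strength factors and the nuclear-spin statistics that suppress alternate rotational levels are ignored, and a quantitative spectrum requires the line-by-line data of Figure 2.

```python
import numpy as np

# Toy rigid-rotor model of a vibration-rotation band. nu0 is the CO2 bending-mode
# frequency quoted in the text; B is an assumed rotational constant (~0.39 cm^-1).
nu0 = 667.0          # band centre, cm^-1
B   = 0.39           # assumed rotational constant, cm^-1
kT  = 207.0          # k_B T / (h c) near room temperature, cm^-1

J = np.arange(0, 60)
weight = (2 * J + 1) * np.exp(-B * J * (J + 1) / kT)   # rough Boltzmann weights

nu_R = nu0 + 2 * B * (J + 1)     # R branch: J -> J+1, lines above the band centre
nu_P = nu0 - 2 * B * J[1:]       # P branch: J -> J-1, lines below the band centre

# In this simple picture there is no line at nu0 itself; the Q branch of the
# bending mode discussed above requires the vibration to be parallel to the
# rotation axis and is not captured by this sketch.
J_star = J[np.argmax(weight)]
print(f"most populated rotational level: J = {J_star}")
print(f"R-branch lines span {nu_R.min():.1f} to {nu_R.max():.1f} cm^-1")
print(f"P-branch lines span {nu_P.min():.1f} to {nu_P.max():.1f} cm^-1")
```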
### A Model Atmosphere How greenhouse gases affect Earth's climate is a complicated issue, where atmospheric thermodynamics and convection are intimately involved. We will simplify the discussion as much as possible, but we will also try to adhere to Einstein's admonition: "Everything should be made as simple as possible, but not simpler". We hope our readers will take the trouble to understand the basic thermodynamics that we review below. Solar heating drives Earth's climate. At the mean distance of Earth from the Sun, sunlight carries an energy flux of about 1,360 Watts per square meter (W/m\({}^{2}\)). We are familiar with this flux, part of which warms us when we sunbathe at the beach on a cloud-free summer day. The flux at the top of Earth's atmosphere actually varies a little bit over the year, since Earth's orbit around the Sun is slightly elliptical. Earth is about 3.3% closer to the Sun in early January than in early July. Since solar flux decreases as the square of the distance from the Sun, the solar flux at the top of the atmosphere is about 6.7% or 91 W/m\({}^{2}\) greater in January than in July. As we will discuss in more detail below, for cloud-free temperate latitudes, doubling the concentration of CO\({}_{2}\) would decrease thermal radiation to space by about 3 W/m\({}^{2}\). Note 3 W/m\({}^{2}\) is much less than the planet-wide, winter-summer difference of 91 W/m\({}^{2}\). And we are a long way from doubling CO\({}_{2}\). Launching into our maximally simplified discussion of the greenhouse effect, we consider a hypothetical Earth with a transparent atmosphere that is 80% nitrogen and 20% oxygen, and with the same mass as today's atmosphere. But we assume no greenhouse gases at all, no CO\({}_{2}\), no H\({}_{2}\)O and no clouds. To be consistent with no clouds, this hypothetical Earth must have no oceans, from which water vapor could evaporate. Oxygen, O\({}_{2}\), actually does absorb a very small amount of sunlight and also thermal radiation, but we will ignore that absorption and assume the atmosphere is completely transparent. To further simplify the problem, we assume that the Sun shines steadily with equal intensity on every part of Earth's surface, from the tropics to the poles. Earth's rotation provides daily averaging of insolation over longitude but not latitude. We will neglect the Coriolis force which is caused by the Earth's rotation, that is so important for dynamics of the real atmosphere. With its radius, \(r=6378\) km, the Earth intercepts the solar flux with an equivalent disk area of \(\pi r^{2}\). But the area of the spherical surface of the Earth is \(4\pi r^{2}\). So the average flux per unit area of Earth's surface is one quarter of that at the top of the atmosphere or \[F=1360/4\ {\rm W/m}^{2}=340\ {\rm W/m}^{2}. \tag{2}\] Figure 2: To calculate greenhouse effects in detail, one must include the opacity of hundreds of thousands of individual line intensities, shown here as colored dots from the HITRAN data base. More details of how line intensities are used can be found in references [6, 7]. We assume the Earth is a perfect "blackbody" for both sunlight and thermal radiation, in the sense that the surface absorbs all radiation incident on it, irrespective of the frequency or direction of the incident light. We assume negligible heat exchange between the surface and the interior of the Earth. 
Then heat can leave the surface in two ways: \(\bullet\) Heat can leave by thermal radiation through the transparent atmosphere to space; \(\bullet\) Heat can flow to or from the atmosphere, either by conduction or convection, but not by radiation since we assume the atmosphere has no greenhouse gases and is completely transparent. ### An Isothermal Atmosphere We assume that the Earth has come to complete thermal equilibrium with an absolute surface temperature \(T_{0}\), which is also equal to the temperature of the hypothetical, completely transparent atmosphere above. The flux of thermal radiation by the Earth to space is equal to the absorbed solar flux (2) so the surface neither heats nor cools. Then the Stefan-Boltzmann law of thermal radiation by blackbodies requires the thermal radiation emitted to space be \[F=340\ {\rm W/m^{2}}=\sigma T_{0}^{4}. \tag{3}\] Here the Stefan Boltzmann constant is \[\sigma=5.67\times 10^{-8}\ {\rm W/(m^{2}K^{4})}. \tag{4}\] The temperature that solves equation (3) is \[T_{0}=278.3\ {\rm K}. \tag{5}\] Smaller temperatures for a greenhouse-free planet Earth are often quoted because clouds of condensed water reflect about 30% of the solar radiation in an atmosphere paradoxically devoid of Earth's most important greenhouse gas, water vapor. In discussions of the most basic physics of greenhouse warming, clouds are an unnecessary distraction. But clouds are very important for a detailed understanding of greenhouse effects. Once the surface and the atmosphere have reached the same temperature, the temperature (5) of the surface will be the same value the Earth would have if there were no atmosphere at all. But the Earth's atmosphere is massive, with a surface pressure of about 10 tons per square meter, the weight of air molecules in a one square meter column of air rising from the surface to infinity. For convenience, we assume the hypothetical atmosphere produces the "standard" surface pressure \[p_{0}=10^{5}\ {\rm Pa}=1000\ {\rm hPa}. \tag{6}\] The MKS unit of pressure, Pa = Pascale, is one Newton per square meter. Besides mass, the atmosphere also contains lots of energy, both thermal energy and gravitational potential energy. For computational convenience and physical clarity we will consider the energy per atmospheric molecule. The average thermal energy of a molecule of temperature \(T_{0}\) is \[u=c_{v}k_{B}T_{0}. \tag{7}\] Here the per-molecule heat capacity at constant volume, in units of Boltzmann's constant (1), is very nearly \[c_{v}=5/2. \tag{8}\] Recall from the kinetic theory of gases that the 5 in the numerator of (8) is the sum of 3, the number of translational degrees freedom, and 2, the number of rotational degrees of freedom for linear molecules like O\({}_{2}\) or N\({}_{2}\). Atmospheric temperatures are not high enough for vibrational degrees of freedom to make a significant contribution to the heat capacity. For a static atmosphere, the air pressure at any altitude \(z\) is simply the weight, per unit area, of air at higher altitudes. The differential expression of this fact is the equation of hydrostatic equilibrium, \[dp=-\ \frac{mg}{v}dz, \tag{9}\] where the per-molecule gas volume is \(v\) and \(m\) is the mass per molecule. According to (9), the decrease \(dp\) in pressure for an incremental increase \(dz\) of altitude is the weight per unit area of an atmospheric slab of width \(dz\). 
Using the per-molecule ideal gas formula \[pv=k_{B}T, \tag{10}\] we can integrate (9) at constant temperature, \(T=T_{0}\), to find the barometric formula for an isothermal atmosphere, \[p=p_{0}e^{-z/z_{T}}. \tag{11}\] The average altitude of an atmospheric molecule in an isothermal atmosphere is \[z_{T}=\frac{k_{B}T_{0}}{mg}. \tag{12}\] The mass per molecule of our model atmosphere of 80% N\({}_{2}\) and 20% O\({}_{2}\) by mole fraction is \[m=4.81\times 10^{-26}\ \mathrm{kg}. \tag{13}\] The average acceleration of gravity at Earth's surface is \[g=9.8\ \mathrm{ms}^{-2}. \tag{14}\] So the average gravitational potential energy of a molecule in an isothermal atmosphere is \[\epsilon_{g}=mgz_{T}=k_{B}T_{0}. \tag{15}\] Summing the thermal energy (7) and the gravitational potential energy (15), we find the total, per-molecule energy of an isothermal atmosphere, \[\epsilon=u+\epsilon_{g}=(c_{v}+1)k_{B}T_{0}=c_{p}k_{B}T_{0}. \tag{16}\] Here the per-molecule heat capacity at constant pressure is \[c_{p}=c_{v}+1=7/2. \tag{17}\] The per-molecule enthalpy is \[h=u+k_{B}T_{0}=(c_{v}+1)k_{B}T_{0}=c_{p}k_{B}T_{0}, \tag{18}\] the same as the formula (16) for the sum of the kinetic energy and gravitational potential energy. The identity of (16) and (18) is not accidental. The enthalpy increase \(dH=Ndh\) of a gas of \(N\) molecules due to an increment \(dQ\) of absorbed heat is the sum of the increase of thermal energy, \(dU=Ndu\) and the work, \(dW=pdV=Nk_{B}dT\), done when the gas expands by a volume increment \(dV=Ndv\), at constant pressure \(p\), into the surrounding environment. A fraction \(c_{v}/c_{p}=5/7\) of the heat added to an isothermal atmosphere goes into increasing the thermal energy of the molecules. A fraction \(1/c_{p}=2/7\) goes into lifting the molecules to higher altitudes. ### Non-Isothermal Atmospheres For many reasons, the Earth's atmosphere is not isothermal. Here we want to focus on the most important reason: greenhouse gases force Earth's lower atmosphere, the troposphere, to convect. The convection drives the troposphere toward an adiabatic atmosphere, for which the dry adiabatic lapse rate is a temperature drop of \(dT/dz=-9.8\) K/km. To understand the essence of convection we use the concept of a fluid "parcel." A parcel is a fixed mass \(M\) of the fluid in a volume \(V\). Noting that the number of air molecules in a mass \(M\) is \(N=M/m\), we can use the mass per molecule (13) and the ideal gas law (10) to show that a parcel with a mass \(M=1\) kg at the temperature (5) and pressure (6) would occupy a volume of \(V=0.78\) cubic meters (m\({}^{3}\)). We assume that the parcel is small enough that all parts of it have nearly the same temperature \(T\) and pressure \(p\). The parcel is surrounded by other gas at nearly the same pressure \(p\). If the parcel volume \(V\) changes reversibly by the increment \(dV\), the parcel will do an increment of work \[dW=pdV \tag{19}\] on the surroundings. The parcel can also reversibly exchange increments of heat, \[dQ=TdS, \tag{20}\] with its surroundings. Heat is absorbed, \(dQ>0\) if the entropy increment of the parcel is positive, \(dS>0\), and heat is released, with \(dQ<0\) if the entropy change of the parcel is negative, \(dS<0\). The first law of thermodynamics is a statement of the conservation of energy. It can be written as \[dU=dQ\text{ -- }dW=TdS-pdV, \tag{21}\] The expression after the first equal sign is true in general, even for a non-reversible process where entropy increases inside the parcel with no addition of heat. 
This often involves explosive processes like the ignition of the gas-air mixture of an internal combustion engine. The expression after the second equal sign is true for a reversible process like those of (19) and (20), where the "entropy of the universe" remains constant.

### Entropy

Entropy is one of the most profound and poorly understood concepts of physics. It is also very helpful in understanding how greenhouse gases work, so we briefly review it here. The most fundamental definition of the thermodynamic temperature follows from (20) and is, \[\frac{1}{T}=\frac{dS}{dQ}. \tag{22}\] The ratio of the small entropy increase \(dS\) of a parcel in thermal equilibrium to the small quantity of heat, \(dQ\), that was absorbed to cause the entropy increase, is the inverse of the thermodynamic temperature. Adding heat to a system can increase its temperature, so the heat increment \(dQ\) must be small enough to cause a negligible increase of the temperature \(T\). In (22) we have assumed that no work contributes to the energy change of the parcel, \(dW=0\). Like energy, entropy is additive, so we can compute the total entropy of the atmosphere by adding up the entropies of all of its parcels. But unlike energy, which is rigorously conserved, "the entropy of the universe" is a steadily increasing quantity. This is the essence of the Second Law of Thermodynamics. The German physicist Clausius, who is credited, along with the Scottish physicist William Thomson (Lord Kelvin), with discovering the significance of entropy, summarized all of thermodynamics in the laconic words: _Die Energie der Welt ist konstant. Die Entropie der Welt strebt einem Maximum zu._ The energy of the universe is constant. Its entropy tends to a maximum [8]. For example, suppose two air parcels are in contact, the first with absolute temperature \(T_{1}\) and the second with a different temperature \(T_{2}\). Transferring a small, positive quantity \(dQ\) of heat from parcel 1 to parcel 2 will change the entropy of parcel 2 by \(dS_{2}=dQ/T_{2}\) and will change the entropy of parcel 1 by \(dS_{1}=-dQ/T_{1}\). Since energy is conserved, the increment of heat \(dQ\) gained by parcel 2 is equal to the increment of heat lost by parcel 1. As a result of this exchange, the entropy of the universe changes by \[dS=dS_{1}+dS_{2}=dQ\bigg{(}\frac{-1}{T_{1}}+\frac{1}{T_{2}}\bigg{)}. \tag{23}\] We have assumed a positive heat increment, \(dQ>0\), so to ensure that \(dS>0\), (23) implies that \(1/T_{2}>1/T_{1}\); that is, \(T_{1}>T_{2}\). Heat can flow spontaneously from a hot parcel to a cold parcel, but never in the opposite direction. An exception occurs for exceptionally small parcels, where "fluctuations" allow heat to flow temporarily from cool surroundings into a warmer, small parcel. An extreme example is a "parcel" consisting of a single CO\({}_{2}\) molecule which regularly gets so much hotter than the surrounding gas that it can emit a photon at the frequency of a vibrational mode, even though the photon energy is several times larger than \(k_{B}T\). The concept of entropy originated in classical thermodynamics, in the mid-19th century. But it played a key role in Boltzmann's development of classical statistical physics, and in Planck's invention of quantum mechanics toward the end of the century. It was Boltzmann who first recognized the close connection between entropy \(S\) and the thermodynamic probability, \(W\). Here \(W\) stands for Wahrscheinlichkeit = probability in German, not for work.
The concept of entropy originated in classical thermodynamics, in the mid-19th century. But it played a key role in Boltzmann's development of classical statistical physics, and in Planck's invention of quantum mechanics toward the end of the century. It was Boltzmann who first recognized the close connection between entropy \(S\) and the thermodynamic probability, \(W\). Here \(W\) stands for Wahrscheinlichkeit = probability in German, not for work. Boltzmann considered the connection so important, and rightly so, that engraved on his tombstone in Vienna's Zentralfriedhof is the equation \[S=k\log W. \tag{24}\] Here \(k=k_{B}\) is Boltzmann's constant of (1), the natural unit of entropy. In statistical mechanics, entropy is a measure of disorder of the physical system. For example, one kg of boiling water at a temperature of 100 C, a pressure of 1 atmosphere and volume of 1 liter, has far less entropy or disorder than one kg of steam, which has the much larger volume of 1244 liters at the same temperature and pressure. Entropy is an extensive state function of thermodynamic systems, much like the volume \(V\) or the internal energy \(U\). The units of entropy are J/K, Joules per degree Kelvin, the same units as Boltzmann's constant (1). For an air parcel with \(N\) molecules and total entropy \(S\), we will define the entropy per molecule as \[s=\frac{S}{N}. \tag{25}\] Then the first law of thermodynamics (21) for an ideal gas can be written in the per-molecule form \[c_{v}k_{B}dT=Tds-pdv. \tag{26}\] Integrating (26) with the aid of the per-molecule ideal gas law (10), we find the per-molecule entropy \[s=s_{o}+c_{p}k_{B}\ln(T/T_{0})-k_{B}\ln(p/p_{0}). \tag{27}\] The per-molecule entropy at the Earth's surface is \(s_{o}\). The per-molecule heat capacity at constant pressure \(c_{p}\) was given by (17). The pressure and temperature at the surface are \(p_{0}\) and \(T_{0}\). For an isothermal atmosphere, \(T=T_{0}\) at all altitudes \(z\), the pressure is given by the barometric formula (11). So the per-molecule entropy (27) for an isothermal atmosphere increases linearly with altitude and is given by \[s=s_{o}+\frac{mgz}{T_{0}}. \tag{28}\] For a general, non-isothermal atmosphere we can differentiate (27) with respect to altitude \(z\), and use the equation of hydrostatic equilibrium (9) for \(dp/dz\), to find \[\frac{ds}{dz}=\frac{c_{p}k_{B}}{T}\bigg{(}\frac{dT}{dz}+L\bigg{)}. \tag{29}\] In (29) we have introduced the "adiabatic lapse rate" for dry air \[L=\frac{mg}{c_{p}k_{B}}=9.8\text{ K/km}. \tag{30}\] In atmospheric physics the entropy per molecule is often defined in terms of a function of temperature and pressure called the "potential temperature," \[\theta=T\biggl{(}\frac{p_{0}}{p}\biggr{)}^{1/c_{p}}. \tag{31}\] In terms of the potential temperature (31) the expression (27) of the per-molecule entropy simplifies to \[s=s_{o}+k_{B}\ln\biggl{(}\frac{\theta}{T_{0}}\biggr{)}^{c_{p}}. \tag{32}\] Molecules with the same potential temperature have the same entropy \(s\).
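A short numerical check of (27), (31) and (32): two parcels at different pressures but with the same potential temperature should have the same per-molecule entropy. The sketch below uses the surface values \(p_{0}=1000\) hPa of (6) and \(T_{0}=278.3\) K of (5); placing the second parcel at half the surface pressure is an arbitrary choice for illustration.

```python
# Sketch: equal potential temperature (31) implies equal per-molecule entropy (27), (32).
import math

k_B = 1.380649e-23     # J/K
c_p = 3.5              # per-molecule heat capacity at constant pressure, eq. (17)
p0, T0 = 1.0e5, 278.3  # surface pressure (Pa) and temperature (K), eqs. (6) and (5)

def theta(T, p):
    """Potential temperature, eq. (31)."""
    return T * (p0 / p) ** (1.0 / c_p)

def s_per_molecule(T, p, s0=0.0):
    """Per-molecule entropy relative to the surface, eq. (27)."""
    return s0 + c_p * k_B * math.log(T / T0) - k_B * math.log(p / p0)

# parcel 1 sits at the surface; parcel 2 sits at half the pressure on the same adiabat
T1, p1 = T0, p0
p2 = 0.5 * p0
T2 = T1 * (p2 / p1) ** (1.0 / c_p)     # adiabatic relation, cf. eq. (49)

print(f"theta_1 = {theta(T1, p1):.3f} K,  theta_2 = {theta(T2, p2):.3f} K")   # equal
print(f"(s_2 - s_1)/k_B = {(s_per_molecule(T2, p2) - s_per_molecule(T1, p1)) / k_B:.2e}")  # ~0
```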
### Atmospheric Stability

The linear increase of entropy with altitude of an isothermal atmosphere, which is described by (28), gives the atmosphere "stability," in the sense that air parcels try to remain at their original altitudes. To get a more quantitative understanding of atmospheric stability we note that the mass density of environmental air is \[\rho=\frac{m}{v}=\frac{mp}{k_{B}T}. \tag{33}\] According to the Archimedes Principle, the per-molecule force \(f\) on a parcel p of mass density \(\rho_{\rm p}=m/v_{\rm p}\) and per-molecule volume \(v_{\rm p}\), that has displaced an equal volume \(v=v_{\rm p}\) of environmental air of mass density \(\rho\), is \[f=-gv_{\rm p}(\rho_{\rm p}-\rho). \tag{34}\] The first term of (34), \(-gv_{\rm p}\rho_{\rm p}=-gm\), is the negative force of gravity per parcel molecule. The second term, \(gv_{\rm p}\rho\), is the upward buoyant force of displaced environmental air. The buoyant force will not be big enough to support the parcel's weight if \(\rho<\rho_{\rm p}\), and the parcel will sink down. Or the buoyant force may exceed the parcel's weight if \(\rho>\rho_{\rm p}\), and the parcel will float up. Suppose that at altitude \(z\), the parcel is a sample volume of environmental air, so that \(\rho_{\rm p}(z)=\rho(z)\). Then from (34), \(f(z)=0\) and the parcel will be in mechanical equilibrium with no net force acting on it. Now imagine lifting a parcel of environmental air by an altitude increment \(dz\), where the environmental air density is \(\rho(z+dz)=\rho(z)+d\rho\). Since the parcel is neutrally buoyant, negligible work is needed to lift it by a small amount. The lifted parcel will expand and do an increment \(dW\) of work on the lower-pressure surrounding air. From the first law of thermodynamics the energy for the work comes from heat \(dQ\) that flows into the parcel, and from \(-dU\), the decrease of internal energy. Because of the very small thermal conductivity of air, it takes a very long time for appreciable heat to flow into or out of a parcel of reasonable size. For example, temperature equilibration by molecular heat conductivity alone in a parcel with a diameter \(d=1\) meter would require a time of order \(t=d^{2}/D\), or about a day, for a sea-level molecular diffusion coefficient of \(D=0.1\) cm\({}^{2}\) s\({}^{-1}\). But the pressure equilibration times are on the order of \(d/v\), the parcel diameter \(d\) divided by the speed of sound, \(v\), around 300 m/s. The pressure equilibration time for the parcel would only be a few milliseconds. We will therefore consider lifting processes where there is not enough time for appreciable heat flow into or out of a parcel, and we can set \(dQ=0\). Then the lifted parcel retains the same per-molecule entropy as it had at the altitude \(z\), and the lifting will be adiabatic. At the altitude \(z+dz\) the force may no longer be zero since the lifted parcel may have a different density from the environmental air it displaced. The net per-molecule force at the altitude \(z+dz\) will be \[df=-gv(d\rho_{\rm p}-d\rho). \tag{35}\] The density increment of the adiabatically lifted air is \(d\rho_{\rm p}\) and the density increment of the displaced air is \(d\rho\). Differentiating the expression (33) and using the condition (9) for hydrostatic equilibrium as well as the ideal gas law (10), we find that the density increment for the environmental air is \[d\rho=-\frac{m}{Tv}\bigg{(}\frac{mg}{k_{B}}+\frac{dT}{dz}\bigg{)}dz. \tag{36}\] To order \(dz\) we get the density increment for the adiabatically lifted parcel from (36) by replacing \(dT/dz\) by the negative of the adiabatic lapse rate (30), \[d\rho_{\rm p}=-\frac{m}{Tv}\bigg{(}\frac{mg}{k_{B}}-L\bigg{)}dz. \tag{37}\] Substituting (36) and (37) into (35), as well as using (29) and (30), we find the force per parcel molecule \[df=-\frac{mg}{T}\bigg{(}L+\frac{dT}{dz}\bigg{)}dz=-\frac{mg}{c_{p}k_{B}}\ \frac{ds}{dz}\ dz=-Lds. \tag{38}\] If a parcel is adiabatically lifted by an increment \(dz\) in altitude, the increment \(df\) of the buoyant force per molecule is simply the negative product of the adiabatic lapse rate \(L\) of (30) and the per-molecule entropy increment, \(ds\), of environmental air between altitudes \(z+dz\) and \(z\). If the per-molecule entropy increases with increasing altitude, \(ds/dz>0\), there will be a net restoring force that will try to push the parcel back down to its original altitude, or push it back up if \(dz<0\).
For a stable atmosphere, with \(ds/dz>0\), the molecules behave as if they were bound to their initial altitudes by springs of force constants \(k=Lds/dz\). We recall from elementary mechanics that a particle of mass \(m\) held by a spring of force constant \(k\) will oscillate with a frequency \(\Omega=(k/m)^{1/2}\). So a stable atmosphere has a buoyancy frequency \(\Omega\) given by \[\Omega^{2}=\frac{L}{m}\ \frac{ds}{dz}=\frac{g}{c_{p}k_{B}}\ \frac{ds}{dz}. \tag{39}\] The buoyancy frequency, \(\Omega\) defined by (39), is called the Brunt-Vaisala frequency [9]. For an isothermal atmosphere, with \(dT/dz=0\), we can use the expression (29) for \(ds/dz\) to write the square of the buoyant frequency as \[\Omega^{2}=\frac{mg^{2}}{c_{p}k_{B}T_{0}}=\frac{gL}{T_{0}}. \tag{40}\] Using the acceleration of gravity \(g\) of (14), the adiabatic lapse rate \(L\) and the surface temperature \(T_{0}\) of (5) in (40), we find that the buoyancy frequency of a hypothetical, isothermal atmosphere is \[\Omega=0.0186\ \mathrm{s}^{-1}=\frac{2\pi}{338\ \mathrm{s}} \tag{41}\] The oscillation period is 338 s, or between 5 and 6 minutes. This period is comparable to the fluctuation times of the internal structure of clouds, drifting through a blue sky. We recall that the squared frequency of a pendulum of length \(l\) is \[\Omega^{2}=\frac{g}{l}. \tag{42}\] Comparing (40) and (42), we see that the buoyancy oscillations of an isothermal atmosphere have the same frequency as a pendulum of length \[l=\frac{T_{0}}{L}=z_{a}=28.4\ \mathrm{km}. \tag{43}\] As we will discuss in the following section, the length \(z_{a}\) defined by (43) is the height of an adiabatic atmosphere of ideal gas, with a surface temperature \(T_{0}\). The numerical value, 28.4 km, would be in the mid stratosphere of Earth's atmosphere. ### An Adiabatic Atmosphere In addition to the isothermal atmosphere discussed above, another important limit is the adiabatic atmosphere, where the per-molecule entropy \(s\) is independent of altitude \(z\). For parcels in Earth's atmosphere, we will use the adjectives adiabatic and isentropic interchangeably, but under some conditions, adiabatic processes can differ dramatically from isentropic processes. An isentropic change in the volume of a parcel is one where the parcel entropy is the same before and after the change. An adiabatic change of the volume of a parcel is one in which no heat is exchanged between the parcel and the environment. But the moving boundaries of the parcel can do positive or negative work on the parcel. Adiabatic volume changes cannot be too fast or they will increase the parcel entropy. For example, if the containing boundaries move at supersonic speeds, converging boundaries can produce shock waves, and expanding boundaries can produce expansion fans in the parcel gas. Both phenomena increase the parcel entropy, even though no heat has been added. This is an important issue for laser-fusion work, where the parcel is the gas in a small pellet that is imploded by intense laser pulses. Most natural atmospheric changes are slow enough that isentropic and adiabatic mean the same thing. From inspection of (29) we see that for an adiabatic atmosphere, with \(ds/dz=0\), the rate of change of temperature with altitude must be equal and opposite to the adiabatic lapse rate, \[\frac{dT}{dz}=-L=-9.8\ \mathrm{K/km}. \tag{44}\] The temperature profile of an adiabatic atmosphere must therefore be \[T=T_{a}-zL. 
\tag{45}\] The temperature, \(T_{a}\), at the bottom of the adiabatic atmosphere need not be the same as the equilibrium surface temperature \(T_{0}\) of (5) for an atmosphere without greenhouse gases. The thickness, \(z_{a}\), of an adiabatic atmosphere, \[z_{a}=\frac{T_{a}}{L}=\frac{c_{p}k_{B}T_{a}}{mg} \tag{46}\] is the altitude at which the absolute temperature (45) goes to zero, \(T=0\). We already mentioned in connection with (43) that a representative thickness is \(z_{a}=28.4\) km. Using the adiabatic temperature profile (45) with the formula (27) for the per-molecule entropy and replacing the surface temperature \(T_{0}\) with \(T_{a}\), we find that the pressure in an isentropic atmosphere must be given by the formula \[p=p_{0}(1-z/z_{a})^{c_{p}}. \tag{47}\] The fraction of atmospheric molecules between altitudes with pressure \(p\), and altitudes with pressure \(p+dp\), is \(dp/p_{0}\). So the average temperature of a molecule in an adiabatic atmosphere is \[<T>=\frac{1}{p_{0}}\int_{0}^{p_{0}}Tdp. \tag{48}\] Noting from (45) and (47) that \[T=T_{a}\biggl{(}\frac{p}{p_{0}}\biggr{)}^{1/c_{p}}, \tag{49}\] we can integrate (48) to find \[<T>=T_{a}\frac{c_{p}}{c_{p}+1}. \tag{50}\] We can replace \(T_{0}\) with \(<T>\) in (7) to find that the per-molecule thermal energy of an adiabatic atmosphere is \[u=c_{v}k_{B}T_{a}\frac{c_{p}}{c_{p}+1}. \tag{51}\] In like manner, one can show that the mean altitude of a molecule is \[<z>=\frac{1}{p_{0}}\int_{0}^{p_{0}}zdp=\frac{z_{a}}{c_{p}+1}. \tag{52}\] Then the mean per-molecule gravitational energy becomes \[\epsilon_{g}=mg<z>=k_{B}T_{a}\frac{c_{p}}{c_{p}+1}. \tag{53}\] For an adiabatic atmosphere with a surface temperature \(T_{a}\) equal to the surface temperature \(T_{0}\) of an isothermal atmosphere, both the thermal and gravitational energies of the adiabatic atmosphere are smaller than those of the isothermal atmosphere by the factor \(c_{p}/(c_{p}+1)\) or \(7/9\) for the hypothetical atmosphere of diatomic molecules with negligible vibrational excitation. For the same surface temperature, molecules of an adiabatic atmosphere have a lower average altitude and a lower average temperature than those of an isothermal atmosphere. Finally, we note that since \(ds/dz=0\) for an adiabatic atmosphere, the buoyancy frequency for the atmosphere, given by (39), is zero, \[\Omega=0. \tag{54}\] Air parcels in an adiabatic atmosphere have no tendency to return to their original altitude if they are displaced upward or downward. There is no restoring force, as there is for an isothermal atmosphere. In the absence of greenhouse gases, the isothermal atmosphere will not change with time, since there is no thermal gradient to drive heat flow. However, without greenhouse gases, a thermally isolated adiabatic atmosphere would slowly evolve to an isothermal atmosphere because of conductive heat flow from the warmer lower atmosphere to the colder upper atmosphere. As we will discuss in the next section, atmospheres with sufficient concentrations of greenhouse gases can maintain an approximately adiabatic troposphere and an approximately isothermal stratosphere indefinitely, because greenhouse gases facilitate convective transfer of heat from the solar heated surface through the troposphere. ### Atmospheres with Greenhouse Gases The physics we have outlined above changes dramatically if we add greenhouse gases to our model. For simplicity, we will consider a well-mixed greenhouse gas, like CO\({}_{2}\) in Earth's atmosphere. 
In keeping with our earlier discussion of greenhouse gases, our model atmosphere with greenhouse gas lets the Sun continue to heat the Earth's surface with the flux (3), but the greenhouse gases attenuate thermal radiation such that the fraction of monochromatic vertical radiation that reaches outer space without absorption is \[f=e^{-\tau_{o}}. \tag{55}\] Here \(\tau_{o}\), the vertical optical depth between the surface and outer space, will be proportional to the partial pressure of the greenhouse gases. For Earth and other planets of the Sun, the optical depth can greatly exceed unity at the centers of strong absorption bands. Negligible surface radiation reaches outer space for these strongly absorbed frequencies, and most escaping thermal radiation is emitted by greenhouse gases at altitudes well above the surface. The radiation to space by greenhouse gases cools air parcels. The cooling is most pronounced at higher altitudes where the radiation has a good chance to reach outer space without being absorbed by still higher-altitude greenhouse gases. Therefore, greenhouse gases cause high-altitude air parcels to cool. As the parcels cool, contract and become heavier than the surrounding air, they sink and are replaced by rising parcels of warmer air that float up from below. Since greenhouse gases do not absorb solar radiation, the Sun continues to heat the surface at the rate (3). The rising and sinking air parcels exchange little heat with the surrounding air, so they behave like an adiabatic conveyor belt that carries the solar energy that has been absorbed by the surface to sufficiently high altitudes where radiating greenhouse gases can release the energy to space. The most dramatic effect of sufficiently high concentrations of greenhouse gases is to create a nearly adiabatic troposphere of convecting air. Because of the complicated vibrations and rotations of real greenhouse molecules, the optical depth \(\tau_{o}\) of (55) depends strongly on the frequency of the thermal radiation. There are infrared "window" frequencies where there is almost no attenuation and one can set \(\tau_{o}=0\). But for frequencies in the center of the Q branch of CO\({}_{2}\), one finds optical depths of order \(\tau_{o}=500,000\). ## 3 A Gray Atmosphere A model for the frequency dependence of atmospheric opacity (which pushes the limits of Einstein's admonition - that models should not be simpler than possible) is a gray atmosphere, with no frequency dependence at all. Then the optical depth is the same for all thermal radiation frequencies. We will also assume that as one descends into the atmosphere from outer space, where the pressure is \(p=0\), to an altitude of pressure \(p\), the optical depth grows to the value \[\tau=\tau_{o}\frac{p}{p_{0}}. \tag{56}\] The surface pressure \(p_{0}\) was given by (6). For an adiabatic atmosphere, we can use (47) to write the optical depth (56) as \[\tau=\tau_{o}\biggl{(}1-\frac{z}{z_{a}}\biggr{)}^{c_{p}}. \tag{57}\] For optically thick atmospheres, with \(\tau_{o}>>1\), most of the monochromatic radiation from the atmosphere to space comes from altitudes \(z\), or pressures \(p\), near those for which the optical thickness to space is \(\tau=1\). So we can use (56) to define emission pressure as \[p_{e}=\frac{p_{0}}{\tau_{o}} \tag{58}\] and we can use (57) to define an emission altitude as \[z_{e}=z_{a}\biggl{(}1-\frac{1}{{\tau_{o}}^{1/c_{p}}}\biggr{)}. 
\tag{59}\] At altitudes above (59), so little greenhouse gas remains that the radiative flux remains constant versus increasing altitude. The radiation flux below the emission altitude rapidly drops to nearly zero because the intensity becomes isotropic, with nearly as much downward radiative flux from the atmosphere above as upward flux from the atmosphere below. The emission level is a site of intense radiative cooling of the atmosphere. The greenhouse molecules at the emission altitude supply the model planet's thermal radiation to space. Energy to replace that radiated away at the emission altitude is provided by convection of warm air parcels floating up from below. Having cooled, the parcels sink back to the surface to complete the convection cycle. For the model of a gray atmosphere, the emission altitude (58) can be considered the tropopause, below which there is a convective troposphere, and above which there is a nearly isothermal stratosphere, with negligible convection. For this maximally simplified model, heat transport from the surface to the tropopause is almost all convective and heat transport above the troposphere is all radiative. According to (45) and (46), the temperature at the emission altitude is \[T_{e}=T_{a}\bigg{(}1-\frac{z_{e}}{z_{a}}\bigg{)}. \tag{60}\] Here \(T_{a}\) is the surface temperature of an adiabatic atmosphere. The emission altitude \(z_{e}\) of (59) is where the gray adiabatic atmosphere sends the absorbed solar flux (3) back to space as thermal radiation. So the emission temperature \(T_{e}\) must be the same as the surface temperature \(T_{0}\) of (5) for an Earth with no greenhouse gases. Thus (60) implies that \[T_{0}=T_{a}\bigg{(}1-\frac{z_{e}}{z_{a}}\bigg{)}. \tag{61}\] From (61) and (59) we find the surface temperature of the gray atmosphere is \[T_{a}=T_{0}\tau_{o}{}^{1/c_{p}}. \tag{62}\] Differentiating this equation yields \[\frac{dT_{a}}{T_{a}}=\frac{d\tau_{o}}{c_{p}\tau_{o}}. \tag{63}\] All but 1% of Earth's atmosphere is composed of the diatomic gases N\({}_{2}\) and O\({}_{2}\), for which \(c_{p}=3.5\). Hence, the relative change in surface temperature is 29% of the relative change of the atmosphere's optical depth. From (46), (61) and (62) we see that the emission (or tropopause) altitude of the gray atmosphere is \[z_{e}=(\tau_{o}{}^{1/c_{p}}-1)\frac{c_{p}k_{B}T_{0}}{mg}. \tag{64}\] The pressure at Earth's tropopause varies from around 100 hPa in the tropics to 300 hPa at the poles. Taking a midlatitude pressure of \(p_{e}=200\) hPa with the surface pressure \(p_{0}=1000\) hPa of (6), we see from (58) that the optical depth of our gray atmosphere would have to be \[\tau_{o}=5. \tag{65}\] According to (62), for the heat capacity \(c_{p}=3.5\) of (17) the optical depth (65) would require the surface to warm by a factor \[\tau_{o}{}^{1/c_{p}}=5^{1/3.5}=1.58. \tag{66}\] For the surface temperature \(T_{0}=278.3\) K of a model planet with no greenhouse gases, the temperature increase would then be \[\Delta T=T_{a}-T_{0}=({\tau_{o}}^{1/c_{p}}-1)T_{0}=162\ \mbox{K}. \tag{67}\] Using (66) in (64) we find a tropopause altitude for the gray atmosphere \[z_{e}=16.6\ \mbox{km}. \tag{68}\] The height of the troposphere in the real atmosphere of the Earth ranges from as low as 6 km over the midwinter poles to almost 20 km over the equator.
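The quoted values in (40)-(43) and (62)-(68) are easy to re-derive. The sketch below is only a numerical cross-check of those numbers; it uses no inputs beyond the constants already given in the text.

```python
# Sketch: reproduce the quoted values of eqs. (40)-(43) and (62)-(68).
import math

g, L, c_p = 9.8, 9.8e-3, 3.5   # gravity (m s^-2), dry lapse rate (30) in K/m, heat capacity (17)
T0 = 278.3                     # surface temperature without greenhouse gases, K, eq. (5)
tau_o = 5.0                    # gray-atmosphere optical depth, eq. (65)

# buoyancy oscillations of an isothermal atmosphere
Omega = math.sqrt(g * L / T0)                                   # eq. (40)
print(f"Omega = {Omega:.4f} s^-1, period = {2 * math.pi / Omega:.0f} s")   # ~0.0186, ~338 s
print(f"equivalent pendulum length T0/L = {T0 / L / 1e3:.1f} km")          # ~28.4 km, eq. (43)

# gray-atmosphere greenhouse warming
factor = tau_o ** (1.0 / c_p)                                   # eq. (66); ~1.58
T_a = T0 * factor                                               # eq. (62)
z_a = T_a / L                                                   # eq. (46)
z_e = z_a * (1.0 - 1.0 / factor)                                # eq. (59), equivalently (64)
print(f"warming factor = {factor:.2f}, Delta T = {T_a - T0:.0f} K, z_e = {z_e / 1e3:.1f} km")
```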
The results of this section are summarized in Fig. 3.

Figure 3: The simplest example of greenhouse warming. Greenhouse gases in a gray atmosphere, which is uniform in the horizontal direction, produce a convecting, adiabatic troposphere below the tropopause, at the emission altitude \(z_{e}\) of (68). The emission temperature \(T_{0}\) is the same as the surface temperature (5) if there were no greenhouse gases. In the adiabatic troposphere, the heat flux \(Z\) is convective and is equal to the solar heating flux \(F=\sigma T_{0}^{4}\) of (3). The heat flux in the troposphere is entirely convective while in the isothermal stratosphere the flux is entirely radiative.

The simple "one-dimensional" model of a gray atmosphere is only semi-quantitative, but it leads us to a number of important insights on how greenhouse gases work.

\(\bullet\) Greenhouse gases cool the upper atmosphere by radiating heat to space.

\(\bullet\) For a gray atmosphere, surface radiation to space is attenuated by the factor \(e^{-\tau_{o}}\), where \(\tau_{o}\) is the vertical optical depth from the surface to outer space. Absorbed surface radiation is replaced by radiation emitted by greenhouse gases higher in the atmosphere.

\(\bullet\) For optical depths \(\tau_{o}>>1\), atmospheres develop a lower troposphere where convection by parcels of warm air floating upward and parcels of cold air sinking downward transports most of the solar energy absorbed by the surface. Greenhouse gases radiate heat to space from altitudes close to the tropopause. Radiation to space from the surface and from greenhouse molecules in the lower troposphere is negligible.

\(\bullet\) Radiative heat transport is negligible compared to convective heat transport below the tropopause.

\(\bullet\) Convective heat transport is negligible compared to radiative heat transport above the tropopause.

\(\bullet\) Without greenhouse gases, the adiabatic temperature profile of the troposphere would evolve into an isothermal profile with the same temperature as the surface.

\(\bullet\) Greenhouse gases increase the temperature of the surface, compared to the surface temperature in an atmosphere without greenhouse gases.

## 4 Realistic Model of Atmosphere

Earth's real atmosphere responds to greenhouse gases in a more complicated way than the simple model discussed above. But as the model predicts, the Earth's atmosphere is partitioned into a convecting troposphere between the surface and the tropopause, and a stratosphere above the tropopause where there is very little vertical convection. There is greenhouse warming of the real Earth's surface, but much less than for the model discussed above. We briefly discuss some of the most important details of the atmosphere that must be accounted for to make a realistic model. The Earth is not uniformly heated by the Sun, as was assumed in the model. The midday heating is greatest at the subsolar latitudes, where the Sun is directly overhead at noon. The subsolar latitude reaches its most northern bound, the Tropic of Cancer at \(23.5^{\circ}\) north, around 21 June, the northern summer solstice. The most southern bound, the Tropic of Capricorn at \(23.5^{\circ}\) south, is attained on the southern summer solstice, about 21 December. As shown in Fig. 4, the yearly averaged solar insolation at the top of the atmosphere is about six times larger at the equator than at either pole. A significant part of the solar energy absorbed in the tropics is convected by the atmosphere and oceans toward the poles before it is radiated to space by Earth's surface and greenhouse gases. In the tropics the Earth radiates less energy to space than the solar energy it receives, since much of the heat is exported north and south.
Near the poles, the Earth radiates more heat to space than the solar energy it receives. Heat convected toward the poles from the tropics makes up the difference. The intertropical convergence zone is where trade winds from the north and south converge. There, they lift huge quantities of moist air up to the stratosphere and produce heavy rainfall. The cloud tops in the tropopause of the intertropical convergence zone can be up to 18 km high, with correspondingly low temperatures, down to 200 K or less. The weak thermal emission to space from the high, cold cloud tops of the tropics is responsible for the dip in the thermal flux at the top of the atmosphere, which can be seen in Fig. 4. The intertropical convergence zone migrates north and south along with the subsolar latitude. This leads to the dramatic monsoons of tropical and subtropical latitudes. The complicated distribution of land and oceans on Earth's surface means that for a given longitude, the well-defined subsolar latitude may be greater or less than the latitude of the intertropical convergence zone. Earth's winds are strongly affected by the rotation of the Earth, which was also neglected in the simple model. The Coriolis forces for the rotating Earth push moving currents of atmospheric air (or ocean water) to the right in the northern hemisphere and to the left in the southern hemisphere. A dramatic result is the formation of jet streams in the upper atmosphere. The preferential heating of the tropics causes atmospheric isobars, surfaces of constant pressure, to slope downward from the tropics to the poles. The resulting "baroclinicity" is a key driving factor for the Rossby waves of temperate latitude weather fronts and for tropical cyclones like hurricanes. Unlike the model, which assumed dry air, the real atmosphere of the Earth is moist. The evaporation of water at the surface, the condensation of water to clouds in the troposphere, and the precipitation of rain and snow, transport large amounts of heat both vertically and horizontally.

Figure 4: The continuous blue curve is the yearly average of incoming short-wave solar flux (net visible, near infrared and ultraviolet) absorbed by the Earth. The dashed red curve is the yearly average of the outgoing thermal flux (net longwave infrared) radiated to space by the Earth. Excess solar energy absorbed in the tropics is transported to the poles by mass flux in the atmosphere and oceans. The data are from satellite observations [10].

The condensation of moisture in upwardly convecting air parcels releases large amounts of heat to the surrounding N\({}_{2}\) and O\({}_{2}\) molecules that make up most of the atmosphere. As a result, the dry adiabatic lapse rate of (30), \(L=9.8\) K/km, is only observed near the top of the troposphere, where little water vapor remains to condense. The lapse rate can be half the dry rate or less for very moist air near the surface. A typical average lapse rate between the surface and the tropopause is about 6.5 K/km, as shown in the "standard" temperature profile of the troposphere, shown in the left panel of Fig. 5. Unlike the simple model, the per-molecule entropy, \(s\), of the real troposphere increases slightly with increasing altitude, so the troposphere is not fully adiabatic and is marginally stable to vertical convection. The model stratosphere is isothermal, but the temperature of the real stratosphere shown in Fig. 5 increases due to heating of the ozone layer by solar ultraviolet radiation.
The rising temperature increases the restoring force \(df\) of (38) for displaced air parcels. This increases the stability of the stratosphere to vertical convection.

Figure 5: **Left.** A standard midlatitude atmospheric temperature profile [11]. The altitudes of the tropopause, stratopause and mesopause are shown. Higher parts of the atmosphere have negligible effects on radiative transfer. The Earth's mean surface temperature is 288.7 K. **Right.** Observed concentrations, \(C_{\rm sd}^{\{i\}}\), for greenhouse molecules versus altitude [12]. The sea level concentrations are 7750 ppm of H\({}_{2}\)O, 1.8 ppm of CH\({}_{4}\) and 0.32 ppm of N\({}_{2}\)O. The O\({}_{3}\) concentration peaks at 7.8 ppm at an altitude of 35 km. The CO\({}_{2}\) concentration is 400 ppm at all altitudes.

### Clear Skies

One can accurately calculate thermal radiation transfer for skies without clouds using the Schwarzschild equation \[\cos\theta\ \frac{\partial\tilde{I}}{\partial\tau}=\tilde{B}-\tilde{I}. \tag{69}\] A detailed discussion of how to solve this equation is given in reference [13], to which we refer readers who are interested in more details. We give only a brief summary of the meaning of (69) here. The monochromatic intensity of radiation with a spatial frequency (wave peaks per cm) \(\nu\) and making an angle \(\theta\) to the vertical, is \(\tilde{I}=\tilde{I}(\nu,\tau,\theta)\). The vertical optical depth \(\tau\) is the number of e-foldings by which vertically propagating surface radiation of frequency \(\nu\) is attenuated when it reaches the altitude \(z\). Note that in (69) the optical depth is measured from the bottom of the atmosphere, but in (57) it was measured from the top. The Planck intensity for isotropic radiation of frequency \(\nu\) in thermal equilibrium at an absolute temperature \(T\) is \[\tilde{B}(\nu,T)=\frac{2hc^{2}\nu^{3}}{e^{x}-1}. \tag{70}\] Here \(c\) is the speed of light, \(h\) is Planck's constant and the photon energy, in units of \(k_{B}T\), is \(x=\nu ch/(k_{B}T)\). The net upwards flux, \(\tilde{Z}\), the "upwelling" minus "downwelling," is the sum of the projections of the intensities \(\tilde{I}\) onto the vertical directions \[\tilde{Z}(\nu,z)=2\pi\int_{-1}^{1}\tilde{I}(\nu,\tau,\theta)\ \cos\theta\ d\cos\theta. \tag{71}\] Calculations of \(\tilde{Z}\) at various frequencies as a function of altitude are shown in Fig. 6. These were done for the temperature profile and greenhouse altitudinal dependences shown in Fig. 5. The model included the 5 most important naturally occurring greenhouse gases: H\({}_{2}\)O, CO\({}_{2}\), O\({}_{3}\), N\({}_{2}\)O and CH\({}_{4}\). For the gray atmosphere, there is a single emission altitude \(z_{e}\), given by (64), where most radiation is released to space.

Figure 7: Effects of changing concentrations of carbon dioxide, CO\({}_{2}\), on the thermal radiative flux to space from the top of a midlatitude standard atmosphere at altitude \(z_{\rm mp}=86\) km, with the temperature profile of Fig. 5. The smooth blue line is the spectral flux from a surface at the temperature \(T_{0}=288.7\) K for a transparent atmosphere with no greenhouse gases. The green line is the flux if all CO\({}_{2}\) were removed but with all the other greenhouse gases at their standard concentrations. The black line is for all greenhouse gases at their standard concentrations. The red line is for twice the standard concentration of CO\({}_{2}\) but with all the other greenhouse gases at their standard concentrations. Doubling the standard concentration of CO\({}_{2}\) from 400 to 800 ppm increases the forcing (the area between the black and red lines) by 3.0 W m\({}^{-2}\).
The complicated line intensities of Fig. 2 were replaced by a cross section that is independent of the thermal radiation frequencies, and independent of temperature and pressure. All thermal radiation frequencies are attenuated equally for the gray atmosphere. For the real atmosphere, Fig. 2 implies an extremely complicated dependence of the emission altitude on frequency, due to the hundreds of thousands of vibrational-rotational lines of the greenhouse molecules. The emission altitude varies dramatically across the thermal radiation spectrum. Emission altitudes are indicated by dashed-horizontal red lines in Fig. 6. The green lines show the heat flux carried by frequencies given at the tops of the panels. Fig. 6a shows that for a frequency \(\nu=500.5\) cm\({}^{-1}\), in the pure rotational band of water vapor, the spectral flux \(\tilde{Z}\) "breaks out" near the emission altitude \(z_{e}=2.9\) km, in the lower troposphere. The energy for the flux is radiated by H\({}_{2}\)O molecules, which extract heat from convecting air parcels. The atmosphere is cooler at an altitude of 2.9 km than at the surface, so the spectral flux \(\tilde{Z}\) that is radiated to space, the value of the green line at the top of the atmosphere, is only about 80% of the surface Planck value \(\pi\tilde{B}_{o}\). Fig. 6b shows that for a frequency \(\nu=667.4\) cm\({}^{-1}\), in the middle of the strong Q branch of the CO\({}_{2}\) bending-mode band, the emission altitude is near the top of the atmosphere, \(z_{e}=85.3\) km. Here the energy for the emission is not from convecting air parcels. Most of the emitted energy is supplied by the absorption of solar ultraviolet radiation. Heavily absorbed frequencies, like \(\nu=667.4\) cm\({}^{-1}\), make a negligible contribution to radiative heat transport in the troposphere. Fig. 6c shows that for a frequency \(\nu=971\) cm\({}^{-1}\), in the middle of the atmospheric window, greenhouse gases are so nearly transparent that the surface radiates directly to space. This provides extremely efficient, ballistic heat transfer from the surface to cold space. It is one of the factors that leads to dew or frost on calm, dry, cloud-free nights. The spectral flux \(\tilde{Z}\) is equal to the spectral Planck flux \(\pi\tilde{B}_{o}\) at the surface. Fig. 6d shows that for a frequency of \(\nu=1016.2\) cm\({}^{-1}\), in the middle of the O\({}_{3}\) absorption band, the emission altitude \(z_{e}=34.0\) km is at the altitude where the O\({}_{3}\) concentration is a maximum, as shown in Fig. 5. There is little ozone in the troposphere, so the spectral flux \(\tilde{Z}\) in the troposphere is not much smaller than the surface Planck flux \(\pi\tilde{B}_{o}\). Most of the flux from the surface and troposphere is absorbed by O\({}_{3}\) in the lower stratosphere, to be replaced by emission from the warmer, upper parts of the stratosphere, where the temperature maximizes because of absorption of solar ultraviolet light by ozone. The absorbed ultraviolet light provides the energy that is radiated back to space at the frequency \(\nu=1016.2\) cm\({}^{-1}\). The effect of the greenhouse gases on the spectral flux \(\tilde{Z}\) emitted at the top of the atmosphere, located at \(z_{\rm mp}=86\) km, is shown in Fig. 7.
Absorption by pure rotational transitions of H\({}_{2}\)O at frequencies below 550 cm\({}^{-1}\), absorption and emission by CO\({}_{2}\) near its bending mode frequency of 667 cm\({}^{-1}\), absorption and emission by O\({}_{3}\) at 1016 cm\({}^{-1}\), and absorption and emission by bending modes of H\({}_{2}\)O at frequencies above 1200 cm\({}^{-1}\) dominate the spectrum. Integrating the monochromatic flux over all frequencies gives the total infrared flux \[Z(z)=\int_{0}^{\infty}d\nu\ \tilde{Z}(\nu,z). \tag{72}\] One can define the forcing as the difference between the infrared flux emitted by the surface at temperature \(T_{0}\) through a transparent atmosphere and the actual infrared flux \(Z\), which includes the effects of emission and absorption by greenhouse gases, \[F(z)=\sigma{T_{0}}^{4}-Z(z). \tag{73}\] The upward flux \(Z\) and the forcing \(F\) are plotted in Fig. 8 as a function of altitude for three different CO\({}_{2}\) concentrations, while maintaining the other greenhouse gases at their standard concentrations as given in Fig. 5.

Figure 8: Upward flux \(Z\) or forcing \(F\) calculated from the Schwarzschild equation for a standard atmosphere having 400 ppm CO\({}_{2}\), double that amount, 800 ppm, and half that amount, 200 ppm. The other greenhouse gas concentrations were kept fixed at their standard values shown in Fig. 5. Values for \(Z\) and \(F\) are given at the surface, the tropopause altitude of 11 km and the top of the atmosphere altitude, \(z_{\text{mp}}=86\ \text{km}\).

Fig. 8 illustrates the radiative flux, modeled with the Schwarzschild equation (69), for the Earth's real atmosphere. This should be compared to the flux for a gray atmosphere shown in the left panel of Fig. 3. Unlike the gray atmosphere, which has only convective heat flux in the troposphere, the real atmosphere has both radiative flux \(Z(z)\) at the altitude \(z\), which is shown explicitly, and convective flux, which is not shown explicitly, as it was in Fig. 3. The convective flux is implicitly given by \(Z(11\ \text{km})-Z(z)\), since it vanishes at the tropopause altitude \(z=11\ \text{km}\). To the extent that absorption of solar radiation in the troposphere can be neglected, the sum of the steady-state radiative and convective fluxes must be constant to conserve energy. Unlike the gray atmosphere of Fig. 3, where there is negligible radiative flux in the troposphere, Fig. 8 shows that there is a significant amount of radiative flux in the troposphere, even at the surface, because of the atmospheric window, which allows ballistic radiative heat transport to space. For the gray atmosphere of Fig. 3, there is an abrupt increase of radiative flux at the tropopause, where all of the heat convected from the surface is released. As shown in Fig. 8, the varied emission altitudes of the real atmosphere ensure that convected heat continues to be released at all altitudes between the surface and the tropopause. For the real atmosphere, the radiative flux increases roughly linearly, as shown in Fig. 8, as convective heat is released at increasing altitudes \(z\). As shown in Fig. 7 and Fig. 8, the decrease in upwards flux at the top of the atmosphere, \(Z(z_{\rm mp})\), due to doubling the CO\({}_{2}\) concentration, 3 W m\({}^{-2}\), is about a 1% decrease of \(Z(z_{\rm mp})\). Doubling the other anthropogenic greenhouse gases, N\({}_{2}\)O or CH\({}_{4}\), results in even smaller changes to the upwards flux. These flux changes are called the instantaneous forcings.
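As a consistency check on the transparent-atmosphere limit of (72) and (73), integrating the Planck intensity (70) over frequency for a surface at \(T_{0}=288.7\) K should reproduce \(\sigma T_{0}^{4}\), so that the forcing (73) vanishes when no greenhouse gases are present. The sketch below does this numerically; it is illustrative only and is not the radiation-transfer code used for Figs. 6-8.

```python
# Sketch: integrate the Planck flux pi*B~(nu, T) of eq. (70) over wavenumber and
# compare with sigma*T0^4; for a transparent atmosphere Z = sigma*T0^4 and F = 0 in (73).
import numpy as np

h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23
sigma = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T0 = 288.7                      # mean surface temperature, K (Fig. 5)

def planck_B(nu_cm, T):
    """Planck intensity of eq. (70), per unit spatial frequency in cm^-1."""
    nu = nu_cm * 100.0                                     # cm^-1 -> m^-1
    x = h * c * nu / (k_B * T)                             # photon energy in units of k_B*T
    return 2.0 * h * c**2 * nu**3 / np.expm1(x) * 100.0    # W m^-2 sr^-1 per cm^-1

nu_cm = np.linspace(1.0, 3000.0, 30000)                    # thermal infrared range, cm^-1
dnu = nu_cm[1] - nu_cm[0]
Z_surface = np.sum(np.pi * planck_B(nu_cm, T0)) * dnu      # eq. (72) with Z~ = pi*B~

print(f"integrated Planck flux = {Z_surface:6.1f} W/m^2")
print(f"sigma * T0^4           = {sigma * T0**4:6.1f} W/m^2")   # ~394 W/m^2, so F of (73) ~ 0
```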
If there were no change in absorbed solar radiation, the instantaneous forcing would cause heat to build up in the atmosphere and Earth below. This is where the reliability of climate modeling ends, since no one knows just how the complicated climate system of Earth's atmosphere and oceans will respond to the small forcings. As shown in reference [6], if one assumes no change in solar heating, and that convective heat transfer keeps the troposphere approximately adiabatic, the surface temperature will warm by about 1 C from doubling CO\({}_{2}\), and the upper stratosphere will cool by about 8 C.

Figure 9: The “blue marble,” a photograph taken by the crew of the Apollo 17 mission to the Moon on December 7, 1972, close to midsummer of the southern hemisphere. The clouds of the intertropical convergence zone can be seen over Africa, as well as heavier, low cloud cover further south and over Antarctica [15].

A surface warming of 1 C is too small a prediction for those who claim that increasing concentrations of greenhouse gases have already led to a climate emergency. Various strongly positive feedbacks have been proposed to predict more surface warming. One of the favorite feedback mechanisms is lofting water vapor, the most important greenhouse gas, to higher, colder emission altitudes. The decreased emission would have to be compensated for by warmer surface temperatures at the bottom of a nearly adiabatic atmosphere, just as for the model gray atmosphere of Fig. 3. Like greenhouse gases, emission and absorption of thermal radiation by clouds can modify the release of thermal radiation to space. High clouds with cold tops are very poor thermal radiators. Increases or decreases in high clouds could decrease or increase thermal radiation to space. The iris-effect of Lindzen and Chou [14] is a possible negative feedback due to changes in high cirrus clouds. The tops of low clouds are not much cooler than Earth's surface, so low clouds decrease thermal radiation to space much less than high clouds. Thick low clouds also reflect sunlight back to space before it is absorbed and converted to atmospheric heat.

Figure 10: Vertical intensities \(I(0)\) at the top of the atmosphere observed with a Michelson interferometer in a satellite [17], and modeled with radiation transfer theory for the Sahara desert, the Mediterranean and Antarctica. The intensity unit is 1 i.u. = 1 mW m\({}^{-2}\) cm sr\({}^{-1}\). Radiative forcing is negative over wintertime Antarctica since the relatively warm greenhouse gases in the troposphere, mostly CO\({}_{2}\), O\({}_{3}\) and H\({}_{2}\)O, radiate more flux to space than the cold ice surface at a temperature of \(T=190\) K could radiate through a transparent atmosphere.

Unlike greenhouse gases, which are nearly transparent to sunlight, clouds can be very non-transparent due to scattering and weak absorption of visible light. The beautiful images of Earth taken from space, "the blue marble", show the large amount of sunlight reflected from clouds, land and oceans [15]. Any sunlight you can see has not heated the Earth. Averaged over its surface, Earth reflects about 30% of sunlight back to space; in other words, the "Bond albedo" of Earth is about 0.3. Another positive feedback that has been proposed is a change in the albedo in polar regions due to melting ice.
However, a reduction in summer Arctic ice of about 2 million km\({}^{2}\), as has been observed primarily from 1990 to 2007 [16], represents less than 0.4% of the Earth's surface and therefore has primarily a regional as opposed to a global impact. An important check on any calculation is to compare observations to modeled results. Fig. 10 shows intensities emitted by the top of the atmosphere as measured by an interferometer in a satellite. The modeled results were obtained by solving the Schwarzschild equation and using the appropriate temperature profiles. The surface water vapor concentration varied significantly, from 31,000 ppm for the Sahara and 12,000 ppm for the Mediterranean to only 2,000 ppm for Antarctica. The excellent quantitative agreement with observations at very different latitudes gives confidence that the calculations of infrared flux are correct.

## 5 Summary

Greenhouse gases are responsible for the most striking feature of Earth's atmosphere, a lower troposphere and an upper stratosphere. In the troposphere, below the tropopause boundary, a large fraction of the energy flux from the solar heated surface is carried by convection, and not by thermal radiation. Convection maintains average temperature lapse rates in the troposphere that are close to adiabatic. In the stratosphere, most of the upward heat flux is carried by radiation. Greenhouse gases warm the surface because they increase the "thermal resistance" of the atmosphere to the vertical flow of energy from the solar-heated surface to space. The larger the thermal resistance between the surface and the emission altitude, the larger the temperature difference needed to drive the solar energy absorbed by the surface back to space. Without the thermal resistance induced by greenhouse gases, Earth's surface would be much colder and life as we know it would not be possible. Increasing carbon dioxide will cause a small additional surface warming. It is difficult to calculate exactly how much, but our best estimate is that it is about 1 C for every doubling of CO\({}_{2}\) concentration, when all feedbacks are correctly accounted for. Alarming predictions of dangerous warming require large positive feedbacks. The most commonly invoked feedback is an increase in the concentration of water vapor in the upper troposphere. But most climate models have predicted much more warming than has been observed, so there is no observational support for strong positive feedbacks [18, 19]. Indeed, most feedbacks in nature are negative, as expressed by Le Chatelier's Principle [20]: _When any system at equilibrium for a long period of time is subjected to a change in concentration, temperature, volume or pressure, the system changes to a new equilibrium, and this change partly counteracts the applied change._ We have barely touched atmospheric dynamics, perhaps the most interesting part of the grand drama of weather and climate. We are all familiar with manifestations of atmospheric dynamics: warm fronts, cold fronts, droughts, floods, hurricanes, tornados, etc. Equally fascinating ocean dynamics, like the El Niño cycles of the tropical Pacific Ocean, also contributes to weather and climate. Earth's atmosphere works like an extremely complicated engine that transforms heat from the Sun into the work that drives the winds, the weather and ocean dynamics. Greenhouse gases are the heat exchanger which allows the atmospheric heat engine to dump waste heat into cold space.
## Acknowledgements

The Canadian Natural Science and Engineering Research Council provided financial support for one of us.
2302.06188
Enhancing SMT-based Weighted Model Integration by Structure Awareness
The development of efficient exact and approximate algorithms for probabilistic inference is a long-standing goal of artificial intelligence research. Whereas substantial progress has been made in dealing with purely discrete or purely continuous domains, adapting the developed solutions to tackle hybrid domains, characterised by discrete and continuous variables and their relationships, is highly non-trivial. Weighted Model Integration (WMI) recently emerged as a unifying formalism for probabilistic inference in hybrid domains. Despite a considerable amount of recent work, allowing WMI algorithms to scale with the complexity of the hybrid problem is still a challenge. In this paper we highlight some substantial limitations of existing state-of-the-art solutions, and develop an algorithm that combines SMT-based enumeration, an efficient technique in formal verification, with an effective encoding of the problem structure. This allows our algorithm to avoid generating redundant models, resulting in drastic computational savings. Additionally, we show how SMT-based approaches can seamlessly deal with different integration techniques, both exact and approximate, significantly expanding the set of problems that can be tackled by WMI technology. An extensive experimental evaluation on both synthetic and real-world datasets confirms the substantial advantage of the proposed solution over existing alternatives. The application potential of this technology is further showcased on a prototypical task aimed at verifying the fairness of probabilistic programs.
Giuseppe Spallitta, Gabriele Masina, Paolo Morettin, Andrea Passerini, Roberto Sebastiani
2023-02-13T08:55:12Z
http://arxiv.org/abs/2302.06188v2
# Enhancing SMT-based Weighted Model Integration ###### Abstract The development of efficient exact and approximate algorithms for probabilistic inference is a long-standing goal of artificial intelligence research. Whereas substantial progress has been made in dealing with purely discrete or purely continuous domains, adapting the developed solutions to tackle hybrid domains, characterised by discrete and continuous variables and their relationships, is highly non-trivial. Weighted Model Integration (WMI) recently emerged as a unifying formalism for probabilistic inference in hybrid domains. Despite a considerable amount of recent work, allowing WMI algorithms to scale with the complexity of the hybrid problem is still a challenge. In this paper we highlight some substantial limitations of existing state-of-the-art solutions, and develop an algorithm that combines SMT-based enumeration, an efficient technique in formal verification, with an effective encoding of the problem structure. This allows our algorithm to avoid generating redundant models, resulting in drastic computational savings. Additionally, we show how SMT-based approaches can seamlessly deal with different integration techniques, both exact and approximate, significantly expanding the set of problems that can be tackled by WMI technology. An extensive experimental evaluation on both synthetic and real-world datasets confirms the substantial advantage of the proposed solution over existing alternatives. The application potential of this technology is further showcased on a prototypical task aimed at verifying the fairness of probabilistic programs. keywords: Hybrid Probabilistic Inference, Weighted Model Integration, + ## 1 Introduction There is a growing interest in the artificial intelligence community in extending probabilistic reasoning approaches to deal with hybrid domains, characterized by both continuous and discrete variables and their relationships. Indeed, hybrid domains are extremely common in real-world scenarios, from transport modelling [2] to probabilistic robotics [3] and cyber-physical systems [4]. Weighted Model Integration (WMI) [5; 6; 7] recently emerged as a unifying formalism for probabilistic inference in hybrid domains. The paradigm extends Weighted Model Counting (WMC) [8], which is the task of computing the weighted sum of the set of satisfying assignments of a propositional formula, to deal with SMT formulas (e.g., [9]) consisting of combinations of Boolean atoms and connectives with symbols from a background theory, like linear arithmetic over rationals (\(\mathcal{LRA}\)). Although WMC can be made extremely efficient by leveraging component caching techniques [10; 11], these strategies are difficult to apply for WMI due to the tight coupling induced by arithmetic constraints. Indeed, existing component caching approaches for WMI are restricted to fully factorized densities with few dependencies among continuous variables [12]. Another direction specifically targets acyclic [13; 14] or loopy [15] pairwise models. Approximations with guarantees can be computed for problems in disjunctive normal form [16] or, in restricted cases, conjunctive normal form [17]. Exact solutions for more general classes of densities and constraints mainly take advantage of advances in SMT technology [9] or knowledge compilation (KC) [18]. 
WMI-PA [6; 19] relies on SMT-based Predicate Abstraction (PA) [20] to both reduce the number of models to be generated and integrated over and efficiently enumerate them, and was shown to achieve substantial improvements over previous solutions. Nevertheless, in this paper we show how WMI-PA has the major drawback of ignoring the conditional structure of the weight function, which prevents from pruning away lots of redundant models. The use of KC for hybrid probabilistic inference was pioneered by [21] and further refined in a series of later works [22; 23; 24; 25]. By compiling a formula into an algebraic circuit, KC techniques can exploit the structure of the problem to reduce the size of the resulting circuit, and are at the core of many state-of-the-art approaches for WMC [8]. Nevertheless, in this paper we show that even the most recent KC solutions for WMI [23; 24] have serious troubles in dealing with densely coupled problems, resulting in exponentially large circuits that are impossible to store or prohibitively expensive to evaluate. To address these problems, in this paper we introduce SA-WMI-PA-SK, a novel algorithm for WMI that aims to combine the best of both worlds, by introducing weight-structure awareness into PA-based WMI. The main idea is to build a formula, which we call the _conditional skeleton_, which mimics the conditional structure of the weight function, to drive the SMT-based enumeration algorithm preventing it from generating redundant models. An extensive experimental evaluation on synthetic and real-world datasets shows substantial computational advantages of the proposed solution over existing alternatives for the most challenging settings. Parallel to the introduction of weight-structure awareness, we highlight how PA-based algorithms are _agnostic_ to the integration algorithm chosen to compute the volume of each enumerated assignment. To this extent, we extend SA-WMI-PA-SK to support both _exact numerical integration_ (the one implicitly embedded in [6; 19]) and _approximate integration_. The advantages of using approximate integration are twofold: (i) it positively affects scalability of SA-WMI-PA-SK with complex instances when integration is the major bottleneck; (ii) it allows applying SA-WMI-PA-SK to problems with non-polynomial weight functions (e.g., Gaussian densities), increasing the applicability of WMI to real-world scenarios. Our main contributions can be summarized as follows: * We analyse existing state-of-the-art of WMI and we identify major efficiency issues for both PA and KC based approaches. * We introduce SA-WMI-PA-SK, a novel WMI algorithm that combines PA with weight-structure awareness by generating explicitly a _conditional skeleton_ of the weight function to drive the search of assignments. * We show how SA-WMI-PA-SK achieves substantial improvements over existing solutions in both synthetic and real-world settings. * We demonstrate how PA-based approaches can deal with different integration techniques, both exact and approximate, substantially expanding the set of problems that can be tackled by WMI technology. * We present a prototypical application of SA-WMI-PA-SK to verifying the fairness of probabilistic programs. The rest of the paper is organized as follows. We start by presenting the related work in SS2. In SS3 we introduce the background, focusing on the formulation of Weighted Model Integration. 
In SS4 we analyse the WMI approaches based on Knowledge-Compilation and on Predicate Abstraction, identifying some weaknesses for either of them. In SS5 we address the gaps mentioned in the previous section, showing theoretical ideas to make WMI-PA structure aware and their implementation into our novel algorithm, SA-WMI-PA-SK. SS6 is devoted to an empirical evaluation of SA-WMI-PA-SK with respect to the other existing approaches, considering both synthetic and real-world settings. In SS7, we highlight how PA-based approaches are agnostic to the integrator chosen to compute the volume of the polytopes and propose an adaptation of the algorithm to deal with approximated integration. Our conclusion and final considerations are presented in SS8. ## 2 Related work Traditionally, inference algorithms in hybrid graphical models either dissolved complex algebraic and logical constraints [26; 27; 28], or solved the problem approximately [29; 30]. The use of piecewise polynomial densities when dealing with algebraic constraints was pioneered by Sanner and Abbasnejad [21]. The algorithm, called Symbolic Variable Elimination (SVE), first used a compact representation called eXtended Algebraic Decision Diagrams (XADD) [31] in probabilistic inference tasks. As described in detail in SS4, these representations lie at the core of modern compilation-based WMI techniques. Probabilistic inference modulo theories [32] used similar representations together with a modified DPLL procedure. The resulting algorithm, called PRAiSE, showed superior performance with respect to both SVE and the original WMI [5] solver, which didn't exploit any SMT enumeration technique. Follow-up works on SMT-based WMI capitalized on the advanced features of SMT solvers, obtaining state-of-the-art results in many benchmarks [6; 19]. The substantial efficiency gains obtained by caching partial results in WMC [10; 11] motivated their applications in the hybrid case. When the dependencies induced by the algebraic constraints are limited, similar ideas can be applied to WMI [12]. Alternatively to SMT- or compilation-based approaches, WMI can be reduced to Belief Propagation when the algebraic dependencies involve at most two variables. Then, an efficient message passing algorithm on an acyclic pairwise factor graph can solve the problem exactly [14]. This idea was later extended for approximating problems with circular dependencies [15]. In contrast with the work above, this paper focuses on the general WMI problem, where the above assumptions cannot be leveraged. Approximate WMI algorithms exploit both SMT-based [17] and knowledge compilation-based [23] approaches. While generally applicable, the latter does not provide any guarantee on the approximation error. The former provides practical statistical guarantees in limited scenarios when constraints are expressed in conjunctive form. Instead, if the constraints can be expressed in disjunctive form, WMI admits a fully polynomial randomized approximation scheme (FPRAS) under mild assumptions [16]. ## 3 Background In this section, we introduce the theoretical background which is needed for a fully-detailed comprehension of this paper. Notice that this section contains several SMT-related technicalities which are needed only to understand some technicalities in the encodings, and as such could be skipped in a first reading. ### SMT, AllSMT and Projected AllSMT We assume the reader is familiar with the basic syntax, semantics, and results of propositional and first-order logic. 
We adopt the notation and definitions in [19] --including some terminology and concepts from Satisfiability Modulo Theories (SMT)-- which we summarize below.

_Notation and terminology._ Satisfiability Modulo Theories (SMT) (see [9] for details) consists in deciding the satisfiability of first-order formulas over some given theory. For the context of this paper, we refer to quantifier-free SMT formulas over linear real arithmetic (\(\mathcal{LRA}\)), possibly combined with uninterpreted function symbols (\(\mathcal{LRA}\cup\mathcal{EUF}\)). We use \(\mathbb{B}\stackrel{{\mathrm{def}}}{{=}}\{\top,\bot\}\) to indicate the set of Boolean values and \(\mathbb{R}\) to indicate the set of real values. We use the sets \(\mathbf{A}\stackrel{{\mathrm{def}}}{{=}}\{A_{i}\}_{i}\), \(\mathbf{B}\stackrel{{\mathrm{def}}}{{=}}\{B_{i}\}_{i}\) to denote Boolean atoms and the sets \(\mathbf{x}\stackrel{{\mathrm{def}}}{{=}}\{x_{i}\}_{i}\), \(\mathbf{y}\stackrel{{\mathrm{def}}}{{=}}\{y_{i}\}_{i}\) to denote real variables. \(Atoms(\varphi)\) denotes the set of atomic formulas occurring in \(\varphi\), both Boolean and \(\mathcal{LRA}\cup\mathcal{EUF}\) ones. SMT(\(\mathcal{LRA}\)) formulas combine Boolean atoms \(A_{i}\in\mathbb{B}\) and \(\mathcal{LRA}\) atoms in the form \((\sum_{i}c_{i}x_{i}\ \bowtie\ c)\) (where the \(c_{i}\), \(c\) are rational values, the \(x_{i}\) are real variables in \(\mathbf{x}\), and \(\bowtie\) is one of the standard arithmetic operators \(\{=,\neq,<,>,\leq,\geq\}\)) by using the standard Boolean operators \(\{\neg,\wedge,\vee,\rightarrow,\leftrightarrow\}\). In SMT(\(\mathcal{LRA}\cup\mathcal{EUF}\)), \(\mathcal{LRA}\) terms can be interleaved with uninterpreted function symbols. Hereafter, unless otherwise stated, we implicitly refer to a background theory \(\mathcal{T}\in\{\mathcal{LRA},\mathcal{LRA}\cup\mathcal{EUF}\}\). A _literal_ is either an atom (a _positive literal_) or its negation (a _negative literal_). A _clause_ \(\bigvee_{j=1}^{K}l_{j}\) is a disjunction of literals. A formula is in _Conjunctive Normal Form (CNF)_ if it is written as a conjunction of clauses \(\bigwedge_{i=1}^{L}\bigvee_{j_{i}=1}^{K_{i}}l_{j_{i}}\). Some shortcuts for frequently-used expressions (marked as "\([\![\ldots]\!]\)") are provided to simplify the reading: the formula \((x_{i}\geq l)\wedge(x_{i}\leq u)\) is shortened into \([\![x_{i}\in[l,u]]\!]\); if \(\phi\stackrel{{\mbox{\tiny def}}}{{=}}\bigwedge_{i}C_{i}\) is a CNF formula and \(C\stackrel{{\mbox{\tiny def}}}{{=}}\bigvee_{j}l_{j}\) is a clause, then the formula \(\bigwedge_{i}(C\lor C_{i})\) is shortened into \([\![C\lor\phi]\!]\).

_Semantics._ "\(\models_{\mathcal{T}}\)" denotes entailment in \(\mathcal{T}\) (e.g., \((x\geq 2)\models_{\mathcal{LRA}}(x\geq 1)\)), whereas "\(\models_{\mbox{\tiny B}}\)" denotes tautological entailment (e.g., \(A_{1}\wedge(x\geq 2)\models_{\mbox{\tiny B}}(A_{1}\vee(x\leq 1))\wedge(\neg A_{1}\vee(x\geq 2))\)). Note that \(\models_{\mbox{\tiny B}}\) is strictly stronger than \(\models_{\mathcal{T}}\), that is, if \(\varphi_{1}\models_{\mbox{\tiny B}}\varphi_{2}\) then \(\varphi_{1}\models_{\mathcal{T}}\varphi_{2}\), but not vice versa.
We say that \(\varphi_{1},\varphi_{2}\) are \(\mathcal{T}\)-equivalent, written \(\varphi_{1}\Leftrightarrow_{\mathcal{T}}\varphi_{2}\), iff both \(\varphi_{1}\models_{\mathcal{T}}\varphi_{2}\) and \(\varphi_{2}\models_{\mbox{\tiny T}}\varphi_{1}\), and _tautologically equivalent_, written \(\varphi_{1}\Leftrightarrow_{\mbox{\tiny B}}\varphi_{2}\), iff both \(\varphi_{1}\models_{\mbox{\tiny B}}\varphi_{2}\) and \(\varphi_{2}\models_{\mbox{\tiny B}}\varphi_{1}\). Given a set of \(\mathcal{T}\) formulas \(\mathbf{\Psi}\stackrel{{\mbox{\tiny def}}}{{=}}\{\psi_{ 1},\ldots,\psi_{K}\}\), we call a _total [resp. partial] truth assignment_\(\mu\) for \(\Psi\) any total [resp. partial] map from \(\Psi\) to B. With a little abuse of notation, we represent \(\mu\) either as a set or a conjunction of literals: \(\mu\stackrel{{\mbox{\tiny def}}}{{=}}\{\psi\mid\psi\in\mathbf{\Psi},\ \mu(\psi)=\top\}\cup\{\neg\psi\mid\psi\in\mathbf{\Psi},\ \mu(\psi)=\bot\}\), or \(\mu\stackrel{{\mbox{\tiny def}}}{{=}}\bigwedge_{\psi\in\mathbf{\Psi},\mu(\psi)=\top}\ \ \psi\wedge\bigwedge_{\psi\in\mathbf{\Psi},\mu(\psi)=\bot}\neg\psi\), and we write "\(\psi_{i}\in\mu_{1}\)" and "\(\mu_{1}\subseteq\mu_{2}\)" as if \(\mu_{1},\mu_{2}\) were represented as sets (i.e., we write "\(\psi_{1}\wedge\neg\psi_{2}\subseteq\psi_{1}\wedge\neg\psi_{2}\wedge\psi_{3}\)" meaning "\(\{\psi_{1},\neg\psi_{2}\}\subseteq\{\psi_{1},\neg\psi_{2},\psi_{3}\}\)"). In the latter case, we say that \(\mu_{1}\) is a _sub-assignment_ of \(\mu_{2}\), and that \(\mu_{2}\) is a _super-assignment_ of \(\mu_{1}\). We denote by \(\mathbb{B}^{K}\) the set of all total truth assignments over \(\Psi\). Given a (partial) truth assignment \(\mu\) to \(Atoms(\varphi)\), we call the _residual_ of \(\varphi\) w.r.t. \(\mu\), written "\(\varphi_{[\mu]}\)", the formula obtained from \(\varphi\) by substituting all the atoms assigned in \(\mu\) with their respective truth value, and by recursively propagating truth values through Boolean operators in the usual way. 1 (Notice that if \(\varphi_{[\mu]}=\top\), then \(\mu\models_{\mbox{\tiny B}}\varphi\), but not vice versa, see [33, 34] for details.) Footnote 1: E.g., \(\neg\top\Longrightarrow\bot\), \(\varphi\wedge\top\Longrightarrow\varphi\), \(\varphi\vee\top\Longrightarrow\top\), \(\varphi\leftrightarrow\top\Longrightarrow\varphi\), \(\varphi\leftrightarrow\bot\Longrightarrow\neg\varphi\), \(\ldots\) Let \(\mathbf{x}\stackrel{{\mbox{\tiny def}}}{{=}}\{x_{1},\ldots,x_{N}\} \in\mathbb{R}^{N}\) and \(\mathbf{A}\stackrel{{\mbox{\tiny def}}}{{=}}\{A_{1},\ldots,A_{M}\} \in\mathbb{B}^{M}\) for some \(N\) and \(M\). Consider a generic \(\mathcal{T}\) formula \(\varphi\) on (subsets of)\(\mathbf{x}\) and \(\mathbf{A}\), and let \(\mathbf{\Psi}\stackrel{{\mbox{\tiny def}}}{{=}}Atoms(\varphi)\). Given a truth assignment \(\mu\) for \(Atoms(\varphi)\), we denote by \(\mu^{\mathbf{A}}\) and \(\mu^{\mathcal{T}}\) its two components on the Boolean atoms in \(\mathbf{A}\) and on the \(\mathcal{T}\) atoms, respectively, so that \(\mu\stackrel{{\mbox{\tiny def}}}{{=}}\mu^{\mathbf{A}}\wedge\mu^{ \mathcal{T}}\). (For example, if \(\mu\stackrel{{\mbox{\tiny def}}}{{=}}A_{1}\wedge\neg A_{2}\wedge( x\geq 1)\wedge\neg(x\geq 3)\), then \(\mu^{\mathbf{A}}\stackrel{{\mbox{\tiny def}}}{{=}}A_{1}\wedge \neg A_{2}\) and \(\mu^{\mathcal{LRA}}\stackrel{{\mbox{\tiny def}}}{{=}}(x\geq 1) \wedge\neg(x\geq 3)\)). 
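To make the notions of partial assignment and residual concrete, the following self-contained Python sketch (our own illustrative encoding, not the implementation used in the paper; the tuple-based formula representation and atom names are ours) computes \(\varphi_{[\mu]}\) by substituting the atoms assigned in \(\mu\) and propagating truth values through the Boolean operators.

```python
# A minimal sketch (ours): formulas are nested tuples
#   ("atom", name) | ("not", f) | ("and", f1, f2, ...) | ("or", f1, f2, ...)
# and a partial assignment mu maps atom names to True/False.

def residual(f, mu):
    """Return the residual f_[mu]: substitute assigned atoms, propagate truth values."""
    tag = f[0]
    if tag == "atom":
        return mu.get(f[1], f)                 # unassigned atoms are left untouched
    if tag == "not":
        r = residual(f[1], mu)
        return (not r) if isinstance(r, bool) else ("not", r)
    args = [residual(g, mu) for g in f[1:]]
    if tag == "and":
        if any(a is False for a in args):
            return False
        rest = [a for a in args if a is not True]
        return True if not rest else rest[0] if len(rest) == 1 else ("and", *rest)
    if tag == "or":
        if any(a is True for a in args):
            return True
        rest = [a for a in args if a is not False]
        return False if not rest else rest[0] if len(rest) == 1 else ("or", *rest)
    raise ValueError(tag)

# phi = (A1 or (x >= 1)) and (not A1 or (x >= 2)); the LRA atoms are opaque propositions here.
phi = ("and", ("or", ("atom", "A1"), ("atom", "x>=1")),
              ("or", ("not", ("atom", "A1")), ("atom", "x>=2")))
print(residual(phi, {"A1": True}))                  # -> ('atom', 'x>=2')
print(residual(phi, {"A1": True, "x>=2": True}))    # -> True, hence mu |=_B phi
```

Note that, at this purely Boolean level, theory atoms such as \((x\geq 1)\) are treated as opaque propositions, and \(\varphi_{[\mu]}=\top\) implies \(\mu\models_{\mbox{\tiny B}}\varphi\) but not vice versa, as remarked above.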
Importantly, and unlike pure propositional logic, \(\mu\) can be \(\mathcal{T}\)-unsatisfiable due to its \(\mu^{\mathcal{T}}\) component (e.g., \(\mu\stackrel{{\mbox{\tiny def}}}{{=}}\neg A_{1}\wedge(x_{1}+x_{2} =3)\wedge\neg(x_{1}+x_{2}\geq 2)\)). A (partial) truth assignment \(\mu\)_propositionally satisfies \(\varphi\)_ iff \(\mu\models_{\mathbb{B}}\varphi\). Thus, SMT(\(\mathcal{T}\)) reduces to checking the existence of a \(\mathcal{T}\)-satisfiable assignment \(\mu\) s.t. \(\mu\models_{\mathbb{B}}\varphi\). (Hereafter, we use "\(\models\)" rather than "\(\models_{\mathbb{B}}\)" and "\(\models_{\mathcal{T}}\)" when the distinction is clear from the context.) Given \(\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\), we say that a Boolean (partial) assignment \(\mu^{\mathbf{A}}\) satisfies \(\exists\mathbf{x}.\exists\mathbf{B}.\varphi\) (written "\(\mu^{\mathbf{A}}\models\exists\mathbf{x}.\exists\mathbf{B}.\varphi\)") iff \(\mu^{\mathbf{A}}\wedge\mu^{\mathbf{B}}\wedge\mu^{\mathcal{T}}\models_{\mathbb{B}}\varphi\) for some assignment \(\mu^{\mathbf{B}}\) on \(\mathbf{B}\) and some \(\mathcal{T}\)-satisfiable assignment \(\mu^{\mathcal{T}}\) on the \(\mathcal{T}\)-atoms of \(\varphi\). Notice that, if \(\mu^{\mathbf{A}}\) is partial, then \(\mu^{\mathbf{A}}\models\exists\mathbf{x}.\exists\mathbf{B}.\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\) iff \(\eta_{\mathbf{A}}\models\exists\mathbf{x}.\exists\mathbf{B}.\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\) for every total assignment \(\eta_{\mathbf{A}}\) extending \(\mu^{\mathbf{A}}\). (The definitions of "\(\mu^{\mathbf{A}}\models\exists\mathbf{x}.\varphi(\mathbf{x},\mathbf{A})\)" and of "\(\mu^{\mathbf{A}}\models\exists\mathbf{B}.\varphi(\mathbf{A}\cup\mathbf{B})\)" follow with \(\mathbf{B}=\emptyset,\mu^{\mathbf{B}}=\top\) and \(\mathbf{x}=\emptyset,\mu^{\mathcal{T}}=\top\) respectively.) _CNF-ization._\(\varphi(\mathbf{x},\mathbf{A})\) can be converted into CNF as follows: implications and bi-implications are rewritten by applying the rewrite rules \((\alpha\rightarrow\beta)\Longrightarrow(\neg\alpha\vee\beta)\) and \((\alpha\leftrightarrow\beta)\Longrightarrow(\neg\alpha\vee\beta)\wedge( \neg\beta\vee\alpha)\); negations are pushed down to the literal level by recursively applying the rewrite rules \(\neg(\alpha\wedge\beta)\Longrightarrow(\neg\alpha\vee\neg\beta)\), \(\neg(\alpha\vee\beta)\Longrightarrow(\neg\alpha\wedge\neg\beta)\), and \(\neg\neg\alpha\Longrightarrow\alpha\). Then we have a few alternatives: _Classic CNF-ization_ ("\(\mathsf{CNF}(\varphi)\)") consists in applying recursively the DeMorgan rewrite rule \(\alpha\vee(\beta\wedge\gamma)\Longrightarrow(\alpha\vee\gamma)\wedge(\beta \vee\gamma)\) until the result is in CNF. The resulting formula \(\phi(\mathbf{x},\mathbf{A})\) is tautologically equivalent to \(\varphi\), but its size can be exponential w.r.t. that of \(\varphi\); _Tseitin CNF-ization_[35] ("\(\mathsf{CNF}_{\mathsf{ts}}(\varphi)\)") consists in applying recursively the "labeling" rewrite rule \(\varphi\Longrightarrow\varphi[\psi|B]\wedge(B\leftrightarrow\psi)\) --\(\varphi[\psi|B]\) being the results of substituting all occurrences of a subformula \(\psi\) with a fresh Boolean atom \(B\)-- until all conjuncts can be CNF-ized classically without space blow-up. Alternatively, one can apply the rule \(\varphi\Longrightarrow\varphi[\psi|B]\wedge(B\rightarrow\psi)\)[36] ("\(\mathsf{CNF}_{\mathsf{pg}}(\varphi)\)"). With both cases, the resulting formula \(\phi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\) is s.t. 
\(\varphi(\mathbf{x},\mathbf{A})\) is tautologically equivalent to \(\exists\mathbf{B}.\phi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\)2, \(\mathbf{B}\) being the set of fresh atoms introduced, and the size of \(\phi\) is linear w.r.t. that of \(\varphi\). Footnote 2: that is, \(\mathcal{M}\models\varphi\) iff there exists a total truth assignment \(\eta\) on \(\mathbf{B}\) s.t. \(\mathcal{M}\cup\eta\models\phi\). _Assignment Enumeration._ Given a theory \(\mathcal{T}\in\{\mathcal{LRA},\mathcal{LRA}\cup\mathcal{EUF}\}\), \(\mathcal{TIA}(\varphi)\stackrel{{\mathrm{def}}}{{=}}\{\mu_{1}, \ldots,\mu_{j}\}\) denotes the set of \(\mathcal{T}\)-satisfiable _total_ assignments over both propositional and \(\mathcal{T}\)-atoms that propositionally satisfy \(\varphi\); \(\mathcal{TA}(\varphi)\stackrel{{\mathrm{def}}}{{=}}\{\mu_{1}, \ldots,\mu_{j}\}\) represents one set of \(\mathcal{T}\)-satisfiable _partial_ assignments over both propositional and \(\mathcal{T}\) atoms that propositionally satisfy \(\varphi\), s.t. (i) every total assignment in \(\mathcal{TIA}(\varphi)\) is a super-assignment of some partial ones in \(\mathcal{TA}(\varphi)\) and (ii) every pair \(\mu_{i},\mu_{j}\in\mathcal{TA}(\varphi)\) assigns opposite truth value to at least one atom. We remark that \(\mathcal{TIA}(\varphi)\) is unique (modulo reordering), whereas multiple \(\mathcal{TA}(\varphi)\)s are admissible for the same formula \(\varphi\) (including \(\mathcal{TIA}(\varphi)\)). The disjunction of the truth assignments in \(\mathcal{TIA}(\varphi)\), and that of \(\mathcal{TA}(\varphi)\), are \(\mathcal{T}\)-equivalent to \(\varphi\). Thus, given \(\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\), \(\mathcal{TIA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi)\) denotes the set of all total truth assignment \(\mu^{\mathbf{A}}\) on \(\mathbf{A}\) s.t. \(\mu^{\mathbf{A}}\models\exists\mathbf{x}.\exists\mathbf{B}.\varphi\), and \(\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi)\) denotes one set of partial assignments \(\mu^{\mathbf{A}}\) on \(\mathbf{A}\) s.t. \(\mu^{\mathbf{A}}\models\exists\mathbf{x}.\exists\mathbf{B}.\varphi\) complying with conditions (i) and (ii) above where \(\varphi\) is replaced by \(\exists\mathbf{x}.\exists\mathbf{B}.\varphi\). The disjunction of the assignments in \(\mathcal{TIA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi)\) and that of \(\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi)\) are \(\mathcal{T}\)-equivalent to \(\exists\mathbf{x}.\exists\mathbf{B}.\varphi\). (As above, the definitions of \(\mathcal{TIA}(\exists\mathbf{x}.\varphi(\mathbf{x},\mathbf{A}))/\mathcal{TA}( \exists\mathbf{x}.\varphi(\mathbf{x},\mathbf{A}))\) and of \(\mathcal{TIA}(\exists\mathbf{B}.\varphi(\mathbf{A}\cup\mathbf{B}))/\mathcal{TA }(\exists\mathbf{B}.\varphi(\mathbf{A}\cup\mathbf{B}))\) follow with \(\mathbf{B}=\emptyset\) and \(\mathbf{x}=\emptyset\) respectively.) Notice that, if \(\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B})\) is the result of applying one of the "labelling" CNF-izations [35, 36] to \(\phi(\mathbf{x},\mathbf{A})\), then \(\mathcal{TIA}(\phi)\), \(\mathcal{TA}(\phi)\), \(\mathcal{TIA}(\exists\mathbf{x}.\phi)\) and \(\mathcal{TA}(\exists\mathbf{x}.\phi)\) can be computed as \(\mathcal{TIA}(\exists\mathbf{B}.\varphi)\), \(\mathcal{TA}(\exists\mathbf{B}.\varphi)\), \(\mathcal{TIA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi)\) and \(\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi)\) respectively. 
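The following toy check (a sketch using our own naming, not taken from the paper) illustrates the labelling CNF-izations on the formula \(\varphi\stackrel{{\mathrm{def}}}{{=}}A_{1}\vee((x\geq 1)\wedge A_{2})\): the subformula \((x\geq 1)\wedge A_{2}\) is labelled by a fresh atom \(B\), and the resulting CNF \(\phi\) satisfies \(\varphi\Leftrightarrow_{\mbox{\tiny B}}\exists B.\phi\), which is why the enumeration can be projected over the original atoms only, leaving the fresh labels \(\mathbf{B}\) implicitly existentially quantified.

```python
from itertools import product

# Toy check (ours): phi = A1 or ((x>=1) and A2); CNF_ts labels psi = (x>=1) and A2 with B.
def phi(a1, a2, x_ge_1):
    return a1 or (x_ge_1 and a2)

def phi_cnf(a1, a2, x_ge_1, b):
    # (A1 | B) & (~B | X) & (~B | A2) & (B | ~X | ~A2), where X stands for the atom (x>=1)
    return ((a1 or b) and ((not b) or x_ge_1)
            and ((not b) or a2) and (b or (not x_ge_1) or (not a2)))

# phi is tautologically equivalent to Exists B. phi_cnf: check all 8 assignments.
for a1, a2, x in product([True, False], repeat=3):
    assert phi(a1, a2, x) == any(phi_cnf(a1, a2, x, b) for b in [True, False])
print("phi  <=>  Exists B. CNF_ts(phi)  holds on all assignments")
```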
\(\mathcal{TIA}(\ldots)/\mathcal{TA}(\ldots)\) can be computed efficiently by means of _Projected AllSMT_, a technique used in formal verification to compute _Predicate Abstraction_ [20]. All these functionalities are provided by the SMT solver MathSAT [37]. In a nutshell, MathSAT works as follows. Given \(\varphi\) and a subset of "relevant" atoms \(\mathbf{\Psi}\subseteq Atoms(\varphi)\), \(\mathcal{TA}(\ldots)\) generates one-by-one \(\mathcal{T}\)-satisfiable partial assignments \(\mu_{i}\stackrel{{\mathrm{def}}}{{=}}\mu_{i}^{\overline{\mathbf{\Psi}}}\wedge\mu_{i}^{\mathbf{\Psi}}\), s.t. \(\mu_{i}^{\overline{\mathbf{\Psi}}}\) is a total assignment on \(Atoms(\varphi)\!\setminus\!\mathbf{\Psi}\) and \(\mu_{i}^{\mathbf{\Psi}}\) is a _minimal_3 partial assignment on \(\mathbf{\Psi}\) s.t. (i) \(\varphi_{[\mu_{i}]}=\top\), and (ii) for every \(j\in\{1,\ldots,i-1\}\), \(\mu_{j}^{\mathbf{\Psi}},\mu_{i}^{\mathbf{\Psi}}\) assign opposite truth values to at least one atom in \(\mathbf{\Psi}\). Finally, the set \(\{\mu_{i}^{\mathbf{\Psi}}\}_{i}\) is returned. (We say that the enumeration is _projected over \(\mathbf{\Psi}\)_.) Thus, \(\mathcal{TA}(\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B}))\) and \(\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi(\mathbf{x},\mathbf{A}\cup\mathbf{B}))\), possibly with \(\mathbf{x}=\emptyset\) and/or \(\mathbf{B}=\emptyset\), are computed by setting \(\mathbf{\Psi}\stackrel{{\mathrm{def}}}{{=}}Atoms(\varphi)\) and \(\mathbf{\Psi}\stackrel{{\mathrm{def}}}{{=}}\mathbf{A}\) respectively. \(\mathcal{TIA}(\ldots)\) works in the same way, but forcing the \(\mu_{i}\)s to be total.

Footnote 3: i.e., no literal can be further dropped from \(\mu_{i}^{\mathbf{\Psi}}\) without losing properties (i) and (ii).

### Weighted Model Integration (WMI)

Let \(\mathbf{x}\stackrel{{\mathrm{\tiny def}}}{{=}}\{x_{1},\dots,x_{N}\}\in\mathbb{R}^{N}\) and \(\mathbf{A}\stackrel{{\mathrm{\tiny def}}}{{=}}\{A_{1},\dots,A_{M}\}\in\mathbb{B}^{M}\) for some integers \(N\) and \(M\), and let \(w(\mathbf{x},\mathbf{A})\) denote a non-negative weight function s.t. \(w(\mathbf{x},\mathbf{A}):\mathbb{R}^{N}\times\mathbb{B}^{M}\longmapsto\mathbb{R}^{+}\). Intuitively, \(w\) encodes a (possibly unnormalized) density function over \(\mathbf{A}\cup\mathbf{x}\). Given a total assignment \(\mu^{\mathbf{A}}\) over \(\mathbf{A}\), \(w_{[\mu^{\mathbf{A}}]}(\mathbf{x})\stackrel{{\mathrm{\tiny def}}}{{=}}w(\mathbf{x},\mu^{\mathbf{A}})\) is \(w\) restricted to the truth values of \(\mu^{\mathbf{A}}\). The **Weighted Model Integral** of \(w(\mathbf{x},\mathbf{A})\) over \(\varphi(\mathbf{x},\mathbf{A})\) is defined as [19]:

\[\mathsf{WMI}(\varphi,w|\mathbf{x},\mathbf{A})\stackrel{{\mathrm{\tiny def}}}{{=}}\sum_{\mu^{\mathbf{A}}\in\mathcal{TIA}(\exists\mathbf{x}.\varphi)}\mathsf{WMI}_{\mathsf{nb}}(\varphi_{[\mu^{\mathbf{A}}]},w_{[\mu^{\mathbf{A}}]}|\mathbf{x}) \tag{1}\]
\[\mathsf{WMI}_{\mathsf{nb}}(\varphi,w|\mathbf{x})\stackrel{{\mathrm{\tiny def}}}{{=}}\int_{\varphi(\mathbf{x})}w(\mathbf{x})\ \mathrm{d}\mathbf{x} \tag{2}\]
\[=\sum_{\mu^{\mathcal{LRA}}\in\mathcal{TA}(\varphi)}\int_{\mu^{\mathcal{LRA}}}w(\mathbf{x})\ \mathrm{d}\mathbf{x}, \tag{3}\]

where the \(\mu^{\mathbf{A}}\)s are total truth assignments on \(\mathbf{A}\), and \(\mathsf{WMI}_{\mathsf{nb}}(\varphi,w|\mathbf{x})\) is the integral of \(w(\mathbf{x})\) over the set \(\{\mathbf{x}\ \mid\ \varphi(\mathbf{x})\ \text{is true}\}\) ("nb" means "no-Booleans").
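As a concrete toy instance of definitions (1)-(2), the following self-contained Python sketch computes a weighted model integral by brute force; the instance, the helper names and the list-of-coefficients encoding of polynomials are ours, chosen only for illustration.

```python
from fractions import Fraction

# Toy instance (ours): A = {A}, x real,
#   phi = [[x in [0,2]]] ∧ (A -> (x >= 1)),   w(x, A) = x^2 if A else 1.
# Polynomials are lists of coefficients [c0, c1, c2, ...].
def integrate_poly(coeffs, lo, hi):
    """Exact integral of sum_k coeffs[k]*x^k over [lo, hi]."""
    return sum(Fraction(c, 1) * (Fraction(hi)**(k + 1) - Fraction(lo)**(k + 1)) / (k + 1)
               for k, c in enumerate(coeffs))

def wmi_toy():
    total = Fraction(0)
    for A in (True, False):                     # sum over total assignments mu^A, as in (1)
        lo, hi = (1, 2) if A else (0, 2)        # phi_[mu^A] reduces to a single interval here
        w_restricted = [0, 0, 1] if A else [1]  # w_[mu^A](x): x^2 or the constant 1
        total += integrate_poly(w_restricted, lo, hi)   # WMI_nb, as in (2)
    return total

print(wmi_toy())   # 13/3
```

In this toy case each \(\varphi_{[\mu^{\mathbf{A}}]}\) reduces to a single interval, so each \(\mathsf{WMI}_{\mathsf{nb}}\) is one one-dimensional integral; in general it is a sum of integrals over the regions identified by \(\mathcal{TA}(\varphi)\), as in (3).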
We call a _support_ of a weight function \(w(\mathbf{x},\mathbf{A})\) any subset of \(\mathbb{R}^{N}\times\mathbb{B}^{M}\) out of which \(w(\mathbf{x},\mathbf{A})=0\) and we represent it as a \(\mathcal{LRA}\)-formula \(\chi(\mathbf{x},\mathbf{A})\). We recall that, consequently, \(\mathsf{WMI}(\varphi\wedge\chi,w|\mathbf{x},\mathbf{A})=\mathsf{WMI}(\varphi, w|\mathbf{x},\mathbf{A})\). We consider the class of _feasibly integrable on \(\mathcal{LRA}\) (\(\mathsf{FI}^{\mathcal{ERA}}\))_ functions \(w(\mathbf{x})\), which contain no conditional component and for which there exists some procedure able to compute \(\mathsf{WMI}_{\mathsf{nb}}(\mu^{\mathcal{CRA}},w|\mathbf{x})\) for every set of \(\mathcal{LRA}\) literals on \(\mathbf{x}\). (E.g., polynomials are \(\mathsf{FI}^{\mathcal{ERA}}\).) Then we call a weight function \(w(\mathbf{x},\mathbf{A})\), _feasibly integrable under \(\mathcal{LRA}\) conditions (\(\mathsf{FIUC}^{\mathcal{ERA}}\))_ iff it can be described in terms of a support \(\mathcal{LRA}\)-formula \(\chi(\mathbf{x},\mathbf{A})\) (\(\top\) if not provided) and a set \(\boldsymbol{\Psi}\stackrel{{\mathrm{\tiny def}}}{{=}}\{\psi_{i}( \mathbf{x},\mathbf{A})\}_{i=1}^{K}\) of \(\mathcal{LRA}\)_conditions_, in such a way that, for every total truth assignment \(\mu^{\boldsymbol{\Psi}}\) to \(\boldsymbol{\Psi}\), \(w_{[\mu^{\boldsymbol{\Psi}}]}\) (\(\mathbf{x}\)) is total and \(\mathsf{FI}^{\mathcal{ERA}}\) in the domain given by the values of \(\langle\mathbf{x},\mathbf{A}\rangle\) which satisfy \((\chi\wedge\mu^{\boldsymbol{\Psi}})\).4 Intuitively, each \(\mu^{\boldsymbol{\Psi}}\) describes a portion of the domain of \(w\) inside which \(w_{[\mu^{\boldsymbol{\Psi}}]}\) (\(\mathbf{x}\)) is \(\mathsf{FI}^{\mathcal{ERA}}\), and we say that \(\mu^{\boldsymbol{\Psi}}\)_identifies_\(w_{[\mu^{\boldsymbol{\Psi}}]}\) in \(w\). In practice, we assume w.l.o.g. that \(\mathsf{FIUC}^{\mathcal{ERA}}\) functions are described as combinations of constants, variables, standard arithmetic operators \(+,-,\cdot\), condition-less real-valued functions (e.g., \(\exp(.),\dots\)), conditional expressions in the form (\(\mathsf{If}\ \psi_{i}\ \mathsf{Then}\ t_{1i}\ \mathsf{Else}\ t_{2i}\)) whose conditions \(\psi_{i}\) are \(\mathcal{LRA}\) formulas and terms \(t_{1i},t_{2i}\) are \(\mathsf{FIUC}^{\mathcal{ERA}}\). Footnote 4: Notice that the conditions in \(\boldsymbol{\Psi}\) can be non-atomic, and can contain atoms in \(\mathbf{A}\). ### WMI via Projected AllSMT WMI-PA is an efficient WMI algorithm presented in [6; 19] which exploits SMT-based predicate abstraction. Let \(w(\mathbf{x},\mathbf{A})\) be a \(\mathsf{FUC}^{\mathcal{LRA}}\) function as above. WMI-PA is based on the fact that \[\mathsf{WMI}(\varphi,w|\mathbf{x},\mathbf{A})= \sum_{\mu^{\mathbf{A}^{*}}\in\mathcal{TIA}(\exists\mathbf{x}. \varphi^{*})}\mathsf{WMI}_{\mathsf{nb}}(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]},w^ {*}_{[\mu^{\mathbf{A}^{*}}]}|\mathbf{x}) \tag{4}\] \[\varphi^{*} \stackrel{{\mathrm{def}}}{{=}} \varphi\wedge\chi\wedge\bigwedge_{k=1}^{K}(B_{k}\leftrightarrow \psi_{k}) \tag{5}\] s.t. \(\mathbf{A}^{*}\stackrel{{\mathrm{def}}}{{=}}\mathbf{A}\cup \mathbf{B}\) s.t. 
\(\mathbf{B}\stackrel{{\mathrm{def}}}{{=}}\{B_{1},\ldots,B_{K}\}\) are fresh propositional atoms and \(w^{*}(\mathbf{x},\mathbf{A}\cup\mathbf{B})\) is the weight function obtained by substituting in \(w(\mathbf{x},\mathbf{A})\) each condition \(\psi_{k}\) in \(\mathbf{\Psi}\) with \(B_{k}\).5 Footnote 5: except for conditions in \(\mathbf{\Psi}\) which are already Boolean literals on \(\mathbf{A}\). The pseudocode of WMI-PA is reported in Algorithm 1. First, the problem is transformed (if needed) by labelling all conditions in \(\mathbf{\Psi}\) occurring in \(w(\mathbf{x},\mathbf{A})\) with fresh Boolean atoms \(\mathbf{B}\), as above. After this preprocessing stage, the set \(\mathcal{M}^{\mathbf{A}^{*}}\stackrel{{\mathrm{def}}}{{=}} \mathcal{TIA}(\exists\mathbf{x}.\varphi^{*})\) is computed by projected AllSMT [20]. Then, the algorithm iterates over each Boolean assignment \(\mu^{\mathbf{A}^{*}}\) in \(\mathcal{M}^{\mathbf{A}^{*}}\). \(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]}\) is simplified by the Simplify procedure, which propagates truth values and applies logical and arithmetical simplifications. Then, if \(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]}\) is already a conjunction of literals, the algorithm directly computes its contribution to the volume by calling \(\mathsf{WMI}_{\mathsf{nb}}(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]},w^{*}_{[\mu^{ \mathbf{A}^{*}}]}|\mathbf{x})\). Otherwise, \(\mathcal{TA}(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]})\) is computed by AllSMT to produce partial assignments, and the algorithm iteratively computes contributions to the volume for each \(\mu^{\mathcal{LRA}}\). See [19] for more details. ``` 1:\(\mathcal{M}^{\mathbf{A}^{*}}\), \(\mathcal{M}^{\mathbf{A}^{*}}\), \(\mathcal{M}^{\mathcal{ ``` 1:\(\langle\varphi^{*},w^{*},\mathbf{A}^{*}\rangle\leftarrow\mathsf{LabelConditions}( \varphi,w,\mathbf{x},\mathbf{A})\) 2:\(\mathcal{M}^{\mathbf{A}^{*}}\leftarrow\mathcal{TTA}(\exists\mathbf{x}.\varphi^{*})\) 3:\(vol\gets 0\) 4:for\(\mu^{\mathbf{A}^{*}}\in\mathcal{M}^{\mathbf{A}^{*}}\)do 5:\(\mathsf{Simplify}(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]})\) 6:if\(\mathsf{LiteralConjunction}(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}]})\)then 7:\(vol\gets vol+\mathsf{WMI}_{\mathsf{nb}}(\varphi^{*}_{[\mu^{\mathbf{A}^{*}}] },w^{*}_{[\mu^{\mathbf{A}^{*}}]}|\mathbf{x})\) 8:else 9:\(\mathcal{M}^{\mathcal{ERA}}\leftarrow\mathcal{TA}(\varphi^{*}_{[\mu^{\mathbf{ A}^{*}}]})\) 10:for\(\mu^{\mathcal{ERA}}\in\mathcal{M}^{\mathcal{ERA}}\)do 11:\(vol\gets vol+\mathsf{WMI}_{\mathsf{nb}}(\mu^{\mathcal{ERA}},w^{*}_{[\mu^{ \mathbf{A}^{*}}]}|\mathbf{x})\) 12:return\(vol\) ``` **Algorithm 1**\(\mathsf{WMI-PA}(\varphi,\,w\,\,,\mathbf{x},\mathbf{A})\) ### Hybrid Probabilistic Inference via WMI The main application of WMI is marginal inference on weighted SMT theories. Similarly to WMC, inference can be reduced to the computation of two weighted model integrals: \[\Pr(\Delta\,|\,\chi,w)=\frac{\mathsf{WMI}(\Delta\wedge\chi,w)}{\mathsf{WMI}( \chi,w)} \tag{6}\] The denominator above is akin to computing the partition function on probabilistic models with unnormalized factors. Crucially, the formulas \(\Delta\) and \(\chi\) are arbitrary, possibly encoding complex non-convex regions of a hybrid space. This goes beyond the kind of queries that are supported by more traditional algorithms like Belief Propagation, being particularly beneficial in contexts where it is necessary to compute probabilities of complex properties, like those arising in (probabilistic) formal verification domains. 
Furthermore, the use of specialized software to deal with constraints yields state-of-the-art results on standard queries when the support of the distribution is highly structured, as shown by the competitive results obtained by reducing inference on discrete models to WMC [38]. ## 4 Analysis of State-Of-The-Art WMI Techniques We start by presenting an analysis of the KC and WMI-PA approaches, which represent the current state-of-the-art in WMI. ### Knowledge Compilation We start our analysis by noticing a major problem with existing KC approaches for WMI [23; 24], in that they can easily blow up in space even with simple weight functions. Consider, e.g., the case in which \[w(\mathbf{x},\mathbf{A})\stackrel{{\mbox{\tiny def}}}{{=}}\prod_{i =1}^{N}(\mbox{\sf{If }}\psi_{i}\mbox{ \sf{Then}}w_{i1}(\mathbf{x})\mbox{ \sf{Else}}w_{i2}(\mathbf{x})) \tag{7}\] where the \(\psi_{i}\)s are \(\mathcal{LRA}\) conditions on \(\{\mathbf{x},\mathbf{A}\}\) and the \(w_{i1},w_{i2}\) are generic functions on \(\mathbf{x}\). First, the decision diagrams do not interleave arithmetic and conditional operators; rather, they push all the arithmetic operators below the conditional ones. Thus, with (7) the resulting decision diagrams consist of \(2^{N}\) branches on the \(\psi_{i}\)s, each corresponding to a distinct unconditioned weight function of the form \(\prod_{i=1}^{N}w_{ij_{i}}(\mathbf{x})\) s.t. \(j_{i}\in\{1,2\}\). See Figure 1 for an example for \(N=3\) and \(\psi_{i}\stackrel{{\mbox{\tiny def}}}{{=}}A_{i}\), \(i\in\{1,2,3\}\). Second, the decision diagrams are built on the Boolean abstraction of \(w(\mathbf{x},\mathbf{A})\), s.t. they do not eliminate a priori the useless branches consisting of \(\mathcal{LRA}\)-unsatisfiable combinations of \(\psi_{i}\)s, which can be up to exponentially many. With WMI-PA, instead, the representation of (7) does not grow in size, because \(\mathsf{FIVC}^{\mathcal{LRA}}\) functions allow for interleaving arithmetic and conditional operators. Also, the SMT-based enumeration algorithm does not generate \(\mathcal{LRA}\)-unsatisfiable assignments on the \(\psi_{i}\)s. This fact has been empirically confirmed and is shown in Figure 2, where we plot in logarithmic scale the number of nodes of a KC-based encoding of (7) using XADD for problems of increasing size, compared with the size of the encoding used by WMI-PA. This graph clearly shows the exponential blow-up of XADD size, whereas the size of the encoding used by WMI-PA grows linearly. We stress the fact that (7) is not an artificial scenario: rather, e.g., this is the case of the real-world logistics problems in [19]. ### Wmi-Pa We continue our analysis by noticing a major problem also for the WMI-PA algorithm: unlike with the KC approaches, _it fails to leverage the conditional structure of the weight function to prune the set of models to integrate over_. We illustrate the issue by means of a simple example (see Figure 3). **Example 1**: _Let \(\varphi\stackrel{{\mbox{\tiny def}}}{{=}}\top\), \(\chi\stackrel{{\mbox{\tiny def}}}{{=}}\llbracket x_{1}\!\in[0,2] \rrbracket\wedge\llbracket x_{2}\!\in[0,3]\rrbracket\) (Figure 3(a)) and let \(w(\mathbf{x},\mathbf{A})\) be a tree-structured weight function defined as in Figure 3(b). To Figure 1: Example highlighting the efficiency issues of the knowledge compilation algorithms for WMI. (**a**) definition of a weight function consisting of a product of conditional statements. (**b**) decision diagram generated by knowledge compilation approaches over the weight function. 
Round nodes indicate if-then-else conditions, with true and false cases on the right and left outgoing edges respectively. Squared nodes indicate \(\mathsf{FI}^{\mathcal{LRA}}\) weight functions. Note how the diagram has a number of branches which is exponential in the number of conditions, and no compression is achieved. (**c**) definition of the weight functions at the leaves of the diagram. In naming weight functions, we use \(\bar{A}\) for \(\neg A\) for the sake of compactness. Figure 2: Size of XADD diagram (number of nodes) and WMI-PA formula as the problem size increases, in logarithmic scale. Whereas the size of the diagram grows exponentially, WMI-PA encodes the problem into a formula of linear size. compute \(\mathsf{WMI}(\varphi\wedge\chi,w|\mathbf{x},\mathbf{A})\), only six integrals need to be computed:_ \(x_{1}^{2}x_{2}\) _on \(\llbracket x_{1}\!\in[1,2]\rrbracket\wedge\llbracket x_{2}\!\in[1,3]\rrbracket\) (if \(A_{1}=\top\))_ \(x_{1}^{3}x_{2}\) _on \(\llbracket x_{1}\!\in[1,2]\rrbracket\wedge\llbracket x_{2}\!\in[0,1]\rrbracket\) (if \(A_{1}=\top\))_ \(x_{1}x_{2}^{2}\) _on \(\llbracket x_{1}\!\in[0,1]\rrbracket\wedge\llbracket x_{2}\!\in[2,3]\rrbracket\) (if \(A_{1}=\top\))_ \(x_{1}x_{2}^{3}\) _on \(\llbracket x_{1}\!\in[0,1]\rrbracket\wedge\llbracket x_{2}\!\in[0,2]\rrbracket\) (if \(A_{1}=\top\))_ \(x_{1}x_{2}^{3}\) _on \(\llbracket x_{1}\!\in[0,1]\rrbracket\wedge\llbracket x_{2}\!\in[0,2]\rrbracket\) (if \(A_{1}=\top\))_ \(2x_{1}x_{2}\) _on \(\llbracket x_{1}\!\in[0,2]\rrbracket\wedge\llbracket x_{2}\!\in[0,3]\rrbracket\) (if \(A_{1}=\bot,A_{2}=\top\))_ \(3x_{1}x_{2}\) _on \(\llbracket x_{1}\!\in[0,2]\rrbracket\wedge\llbracket x_{2}\!\in[0,3]\rrbracket\) (if \(A_{1}=\bot,A_{2}=\bot\))_ _When WMI-PA is used (Algorithm 1), applying \(\mathsf{LabelConditions}(\ldots)\) we obtain Figure 3: Example highlighting the efficiency issues of the WMI-PA algorithm. (**a**) definition of formula \(\varphi\) (trivially true) and support \(\chi\). (**b**) definition of the weight function \(w(\mathbf{x},\mathbf{A})\). Round nodes indicate if-then-else conditions, with true and false cases on the right and left outgoing edges respectively. Squared nodes indicate \(\mathsf{FI}^{\mathcal{CRA}}\) weight functions. (**c**) novel version of the formula \(\varphi^{*}(\mathbf{x},\mathbf{A}\cup\mathbf{B})\) after the application of the \(\mathsf{LabelConditions}(\ldots)\) step of WMI-PA. (**d**) novel version of the weight function \(w^{*}(\mathbf{x},\mathbf{A}\cup\mathbf{B})\), where all \(\mathcal{CRA}\) conditions have been replaced with the fresh Boolean atoms introduced in \(\varphi^{*}(\mathbf{x},\mathbf{A}\cup\mathbf{B})\). (**e**) List of assignments obtained by WMI-PA on \(\mathbf{A}\cup\mathbf{B}\) (split on two columns for the sake of compactness). Notice the amount of assignments sharing the same \(\mathsf{FI}^{\mathcal{CRA}}\) weight function. \(\varphi^{*}({\bf x},{\bf A}\cup{\bf B})\stackrel{{\mbox{\tiny def}}}{{=}} \varphi\wedge\chi\wedge B_{1}\leftrightarrow(x_{1}\geq 1)\wedge B_{2} \leftrightarrow(x_{2}\geq 1)\wedge B_{3}\leftrightarrow(x_{2}\geq 2)\) (Figure 3(c)), and the weight function \(w^{*}({\bf x},{\bf A}\cup{\bf B})\) shown in Figure 3(d). Then, by applying \({\cal T}{\cal T}{\cal A}(\exists{\bf x}.\varphi^{*})\) (line 2) we obtain 24 total assignments \({\cal M}^{{\bf A}^{*}}\) on \({\bf A}\cup{\bf B}\), as shown in Figure 3(e). 
(Notice that all assignments containing \(\{\neg B_{2},B_{3}\}\), and hence \(\{\neg(x_{2}\geq 1),(x_{2}\geq 2)\}\), are missing because they are \({\cal L}{\cal R}{\cal A}\)-unsatisfiable.) As a result, WMI-PA computes 24 integrals instead of 6. In particular, WMI-PA computes twice each of the first 6 integrals, for \(\{A_{1},A_{2},\ldots\}\) and \(\{A_{1},\neg A_{2},\ldots\}\); also, it splits into 2 parts each the integrals of \(x_{1}^{2}x_{2}\) and \(x_{1}x_{2}^{3}\) (on the irrelevant truth values of \(B_{3}\) and \(B_{2}\) respectively) and into 6 parts each the integrals of \(2x_{1}x_{2}\) and \(3x_{1}x_{2}\) (on the 8 irrelevant truth values of \(B_{1},B_{2},B_{3}\) minus the 2 \({\cal L}{\cal R}{\cal A}\)-unsatisfiable ones containing \(\{\neg B_{2},B_{3}\}\)). \(\diamond\)_ The key issue about WMI-PA is that the enumeration of \({\cal T}{\cal T}{\cal A}(\exists{\bf x}.\varphi^{*})\) in (4) (line 2 in Algorithm 1) _is not aware of the conditional structure of the weight function \(w^{*}\)_, in particular, it is not aware of the fact that often _partial_ assignments to the set of conditions in \(w^{*}\) (both in \({\bf A}\) and \({\bf B}\), i.e., both original Boolean and \({\cal L}{\cal R}{\cal A}\) conditions in \(w\)) are sufficient to identify the value of \(w^{*}\), so that it is forced to enumerate all \({\cal L}{\cal R}{\cal A}\)-satisfiable total assignments extending them. (E.g., in Example 1, \(\{A_{1},B_{1},B_{2}\}\) suffices to identify \(x_{1}^{2}x_{2}\), regardless the values of \(A_{2}\) and \(B_{3}\), but WMI-PA enumerates \(\{A_{1},A_{2},B_{1},B_{2},B_{3}\}\), \(\{A_{1},A_{2},B_{1},B_{2},\neg B_{3}\}\), \(\{A_{1},\neg A_{2},B_{1},B_{2},B_{3}\}\), \(\{A_{1},\neg A_{2},B_{1},B_{2},\neg B_{3}\}\).) This has two consequences. First, to make the enumerator split also on the conditions in \({\bf\Psi}\), WMI-PA renames them with fresh Boolean atoms \({\bf B}\), conjoining \(\bigwedge_{k=1}^{K}(B_{k}\leftrightarrow\psi_{k})\) to \(\varphi\wedge\chi\) (line 1). Unfortunately, the above equivalences force the enumerator to assign a truth value to _all_ the \(B_{i}\)s in \({\bf B}\) (hence, to all \({\cal L}{\cal R}{\cal A}\)-conditions \(\psi_{i}\)s in \({\bf\Psi}\setminus{\bf A}\)) in every assignment, even when the conditional structure of \(w\) does not need it. This is what forces the unnecessary partitioning of integrals into subregions. (E.g., in Example 1, the integral of \(x_{1}^{2}x_{2}\) is unnecessarily split into two integrals for \((x_{2}\geq 2)\) and \(\neg(x_{2}\geq 2)\) due to the unnecessary branch on \(\{B_{3},\neg B_{3}\}\).) Second, not knowing \(w^{*}\), the enumerator is forced to always assign also _all_ original Boolean atoms in \({\bf A}\), even when the combination \(\mu^{{\bf A}}\cup\mu^{{\bf B}}\) of a _partial_ assignment \(\mu^{{\bf A}}\) on \({\bf A}\) and a total assignment \(\mu^{{\bf B}}\) on \({\bf B}\) suffices to satisfy \(\varphi^{*}\) and to make \(w^{*}\)\({\sf FI}^{{\cal L}{\cal R}{\cal A}}\), that is, \({\cal T}{\cal T}{\cal A}(\exists{\bf x}.\varphi^{*})\) is used instead of \({\cal T}{\cal A}(\exists{\bf x}.\varphi^{*})\) in (4) and in line 2 of Algorithm 1. This is what causes the unnecessary duplication of integrals. (E.g., in Example 1, each of the first 6 integrals is computed twice, for \(\{A_{1},A_{2},\ldots\}\) and \(\{A_{1},\neg A_{2},\ldots\}\), due to the unnecessary branching on \(\{A_{2},\neg A_{2}\}\)). 
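The blow-up discussed above can be reproduced with a small brute-force script (ours, purely for illustration): it enumerates the \(\mathcal{LRA}\)-satisfiable total assignments over \(\mathbf{A}\cup\mathbf{B}\) of Example 1 (24 of them) and checks that they are covered by only 6 pairwise-incompatible partial assignments, each of which already identifies an \(\mathsf{FI}^{\mathcal{LRA}}\) weight.

```python
from itertools import product

# Brute-force check (ours) of the counts discussed for Example 1.
# Atoms: A1, A2 (original Booleans), B1 <-> (x1>=1), B2 <-> (x2>=1), B3 <-> (x2>=2).
# The only LRA-inconsistent pattern among B1..B3 is {not B2, B3}, i.e. (x2 < 1) and (x2 >= 2).
def lra_consistent(b1, b2, b3):
    return not (b3 and not b2)

totals = [m for m in product([True, False], repeat=5) if lra_consistent(*m[2:])]
print(len(totals))   # 24 total assignments enumerated by plain WMI-PA

# The 6 partial assignments that suffice to make w FI^LRA (None = atom left unassigned).
# The first four leave A2 unassigned, hence each covers 2 total assignments (the "x2" factor).
partials = [                               # (A1,   A2,    B1,    B2,    B3)
    (True,  None,  True,  True,  None),    # x1^2*x2
    (True,  None,  True,  False, None),    # x1^3*x2
    (True,  None,  False, None,  True),    # x1*x2^2
    (True,  None,  False, None,  False),   # x1*x2^3
    (False, True,  None,  None,  None),    # 2*x1*x2
    (False, False, None,  None,  None),    # 3*x1*x2
]

def extends(total, partial):
    return all(p is None or p == t for t, p in zip(total, partial))

# Every LRA-satisfiable total assignment extends exactly one of the 6 partial ones.
assert all(sum(extends(t, p) for p in partials) == 1 for t in totals)
print(len(partials))   # 6 integrals suffice when the conditional structure is exploited
```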
Although the latter problem could in principle be fixed by caching all the values of the integrals, this could be expensive in terms of both time and memory. To cope with these issues, we need to modify WMI-PA to make it aware of the conditional structure of \(w\). (One further issue, dealing with the fact that the conditions in \(\boldsymbol{\Psi}\) may not be literals, will be addressed in §5.3.2.)

## 5 Making WMI-PA Weight-Structure Aware

In order to address the issues described in §4, we aim to combine the best of KC approaches --i.e., weight-structure awareness-- with the best of PA-based approaches --i.e., SMT-based enumeration-- by introducing weight-structure awareness into PA-based WMI. In §5.1 we present the general idea for making WMI-PA weight-structure aware, by introducing and exploiting the notion of _conditional skeleton_ for \(w\). In §5.2, for the sake of compactness, we only summarize the first preliminary approach and algorithm (namely "SA-WMI-PA"), based on an implicit enumeration of a skeleton, which we presented in the conference version of this paper [1] and which is described in full detail in Appendix A. In §5.3 we propose our novel and current best approach and algorithm (namely "SA-WMI-PA-SK"), in which the skeleton is generated explicitly, and which is both much simpler and more effective than that from [1].

### General Idea: Exploiting a Conditional Skeleton of \(w\)

We start by introducing the following concept.

**Definition 1** (Conditional skeleton of \(w\), \(\mathsf{sk}(w)\)): _Let \(\varphi\) and \(\chi\) be as above; let \(w(\mathbf{x},\mathbf{A})\) be \(\mathsf{FIUC}^{\mathcal{LRA}}\) on the set of conditions \(\boldsymbol{\Psi}\). We call a_ **Conditional Skeleton of \(w\)**_, written \(\mathsf{sk}(w)\), any \(\mathcal{LRA}\) formula s.t.:_

_(a) its atoms are all and only those occurring in the conditions in \(\boldsymbol{\Psi}\);_

_(b) \(\mathsf{sk}(w)\) is \(\mathcal{LRA}\)-valid, so that \(\varphi\wedge\chi\) is equivalent to \(\varphi\wedge\chi\wedge\mathsf{sk}(w)\);_

_(c) every_ partial _truth value assignment \(\mu\) to (the atoms occurring in) the conditions \(\boldsymbol{\Psi}\) which makes \(\mathsf{sk}(w)\) true is such that \(w_{[\mu]}\) is \(\mathsf{FI}^{\mathcal{LRA}}\)._

**Example 2**: _Consider the problem in Example 1.
Then the following formula is a conditional skeleton \(\mathsf{sk}(w)\) for \(w(\mathbf{x},\mathbf{A})\):_

\[\begin{array}{l}
(\ \ A_{1}\vee\neg A_{1})\ \wedge\\
(\neg A_{1}\vee\ \ (x_{1}\geq 1)\vee\neg(x_{1}\geq 1))\ \wedge\\
(\neg A_{1}\vee\neg(x_{1}\geq 1)\vee\ \ (x_{2}\geq 1)\vee\neg(x_{2}\geq 1))\ \wedge\\
(\neg A_{1}\vee\ \ (x_{1}\geq 1)\vee\ \ (x_{2}\geq 2)\vee\neg(x_{2}\geq 2))\ \wedge\\
(\ \ A_{1}\vee\ \ A_{2}\vee\neg A_{2})
\end{array}\]

_\(\diamond\)_

Once a conditional skeleton is available, the WMI can be computed by enumerating partial assignments directly on \(\varphi\wedge\chi\wedge\mathsf{sk}(w)\):

\[\mathsf{WMI}(\varphi\wedge\chi,w|\mathbf{x},\mathbf{A})=\sum_{\mu\in\mathcal{TA}(\varphi^{***})}2^{|\mathbf{A}\setminus\mu^{\mathbf{A}}|}\cdot\mathsf{WMI}_{\mathsf{nb}}(\varphi^{***}_{[\mu]},w_{[\mu]}|\mathbf{x}) \tag{8}\]
\[\varphi^{***}\stackrel{{\mathrm{def}}}{{=}}\varphi\wedge\chi\wedge\mathsf{sk}(w) \tag{9}\]

where each \(\mu\) is a partial assignment over both the Boolean and the \(\mathcal{LRA}\) atoms, and the \(2^{|\mathbf{A}\setminus\mu^{\mathbf{A}}|}\) factor accounts for the total assignments on \(\mathbf{A}\) covered by the partial one.
**Example 3**: _Consider the problem in Example 1 and the corresponding \(\mathsf{sk}(w)\) formula in Example 2. Let \(\varphi^{***}\stackrel{{\mbox{\tiny def}}}{{=}}\varphi\wedge\chi\wedge\mathsf{sk}(w)\), as in (9). We show how \(\mathcal{TA}(\varphi^{***})\) in (8) can be produced by the SMT solver. Assume nondeterministic choices are picked following the order \(A_{1},A_{2},(x_{1}\geq 1),(x_{2}\geq 1),(x_{2}\geq 2)\), assigning positive values first. Then in the first branch the following satisfying total truth assignment is generated:_

\[\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{A_{1},A_{2},(x_{1}\geq 1),(x_{2}\geq 1),(x_{2}\geq 2)\}\]

_(where \(\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\) is assigned deterministically to satisfy the \(\varphi\wedge\chi\) part), from which the solver extracts the minimal partial truth assignment which evaluates \(\mathsf{sk}(w)\) to true:_

\[\mu_{1}\stackrel{{\mbox{\tiny def}}}{{=}}\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{A_{1},(x_{1}\geq 1),(x_{2}\geq 1)\}.\]

_In the next branch, the following minimal partial assignment is produced:_

\[\mu_{2}\stackrel{{\mbox{\tiny def}}}{{=}}\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{A_{1},(x_{1}\geq 1),\neg(x_{2}\geq 1)\}.\]

_s.t. \(\mu_{1},\mu_{2}\) assign opposite truth value to (at least) one atom. Overall, the algorithm enumerates the following collection of minimal partial assignments:_

\[\begin{array}{l}
\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{\ \ A_{1},\ \ (x_{1}\geq 1),\ \ (x_{2}\geq 1)\},\\
\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{\ \ A_{1},\ \ (x_{1}\geq 1),\neg(x_{2}\geq 1)\},\\
\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{\ \ A_{1},\neg(x_{1}\geq 1),\ \ (x_{2}\geq 2)\},\\
\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{\ \ A_{1},\neg(x_{1}\geq 1),\neg(x_{2}\geq 2)\},\\
\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{\neg A_{1},\ \ A_{2}\},\\
\{(x_{1}\geq 0),(x_{1}\leq 2),(x_{2}\geq 0),(x_{2}\leq 3)\}\cup\{\neg A_{1},\neg A_{2}\}
\end{array}\]

_which correspond to the six integrals of Example 1. Notice that, according to (8), the first four integrals have to be multiplied by 2, because the partial assignment \(\{A_{1}\}\) covers two total assignments \(\{A_{1},A_{2}\}\) and \(\{A_{1},\neg A_{2}\}\). \(\diamond\)_

Notice that logic-wise \(\mathsf{sk}(w)\) is non-informative, because it is a \(\mathcal{LRA}\)-valid formula (condition \((b)\) in Definition 1).
Nevertheless, the role of \(\mathsf{sk}(w)\) is not only to "make the enumerator aware of the presence of the conditions \(\boldsymbol{\Psi}^{*}\) --like the \(\bigwedge_{k=1}^{K}(B_{k}\leftrightarrow\psi_{k})\) in WMI-PA-- but also to mimic the conditional structure of \(w\), forcing every assignment \(\mu\) to assign truth values to all and only those conditions in \(\boldsymbol{\Psi}\) which are necessary to make \(w_{[\mu]}\mathsf{FI}^{\mathcal{LRA}}\), and hence make \(\mathsf{WMI_{nb}}(\varphi^{***}_{[\mu]},w_{[\mu]}|\mathbf{x})\) directly computable, without further partitioning. In principle, we could use as \(\mathsf{sk}(w)\) (a \(\mathcal{LRA}\) formula encoding the conditional structure of) an XADD or (F)XSDD, which do not suffer from the lack of structure awareness of WMI-PA. However, this would cause a blow up in size, as discussed in SS4.1. To this extent, avoiding \(\mathsf{sk}(w)\) blow up in size is a key issue in our approach, which we will discuss in the following steps. ### Implicit Generation of a Conditional Skeleton Here we briefly summarize the first preliminary approach we proposed in the conference paper [1]. A fully-detailed description can be found in Appendix A. EncodingThe conditional skeleton \(\mathsf{sk}(w)\) we used in [1] is not generated explicitly. Rather, we have defined it as an existentially-quantified formula \(\mathsf{sk}(w)\stackrel{{\mathrm{def}}}{{=}}\exists\mathbf{y}. \llbracket y=w\rrbracket_{\mathcal{EUXF}}\), where \(\llbracket y=w\rrbracket_{\mathcal{EUXF}}\) is a \(\mathcal{LRA}\cup\mathcal{EUXF}\)-formula on \(\mathbf{A}\), \(\mathbf{x}\), \(\mathbf{y}\) s.t. \(\mathbf{y}\stackrel{{\mathrm{def}}}{{=}}\{y,y_{1},\ldots,y_{k}\}\) is a set of fresh \(\mathcal{LRA}\) variables. Thus, we can compute \(\mathcal{TA}(\varphi\wedge\chi\wedge\exists\mathbf{y}.\llbracket y=w \rrbracket_{\mathcal{EUXF}})\) as \(\mathcal{TA}(\exists\mathbf{y}.(\varphi\wedge\chi\wedge\llbracket y=w \rrbracket_{\mathcal{EUXF}}))\) because the \(\mathbf{y}\) do not occur in \(\varphi\wedge\chi\), with no need to generate \(\mathsf{sk}(w)\) explicitly. In a nutshell, \(\llbracket y=w\rrbracket_{\mathcal{EUXF}}\) is obtained by taking \((y=w)\), s.t. \(y\) is fresh, and recursively substituting bottom-up every conditional term (If \(\psi_{i}\) Then \(t_{i1}\) Else \(t_{i2}\)) in it with a fresh variable \(y_{i}\in\mathbf{y}\), adding the definition of \((y_{i}=(\)If \(\psi_{i}\) Then \(t_{i1}\) Else \(t_{i2}))\) as \((\neg\psi_{i}\lor y_{i}=t_{i1})\wedge(\psi_{i}\lor y_{i}=t_{i2})\). Terms representing non-linear arithmetic operators (e.g., multiplication) or functions (e.g., \(\sin,\exp\)) are then substituted with uninterpreted functions, in order to avoid non-linear terms in the formula. Intuitively, each partial assignment satisfying \(\exists\mathbf{y}.\llbracket y=w\rrbracket_{\mathcal{EUXF}}\) represents a branch of the weight function, since it allows to uniquely identify a value for \(\mathbf{y}\). (See Appendix A for an example and more details.) Notice that the idea of renaming if-then-else terms with fresh quantified variables is what avoids the space blow-up of KC approaches, s.t. \(\llbracket y=w\rrbracket_{\mathcal{EUXF}}\) grows linearly in size w.r.t. the size of \(w\). 
E.g., with (7), \(\llbracket y=w\rrbracket_{\mathcal{EUXF}}\) is \((y=\prod_{i=1}^{N}y_{i})\wedge\bigwedge_{i=1}^{N}((\neg\psi_{i}\lor y_{i}=w_{ i1}(\mathbf{x}))\wedge(\psi_{i}\lor y_{i}=w_{i2}(\mathbf{x})))\) --where the products and other non-linear functions are abstracted into uninterpreted functions-- whose size grows linearly rather than exponentially w.r.t. \(N\). The \(\mathrm{SA}\)-WMI-PAProcedureWe briefly summarize how the enumeration procedure used by \(\mathrm{SA}\)-WMI-PA works (see Algorithm 5 in Appendix A for details). We first extend \(\varphi\wedge\chi\) into \(\varphi^{**}\stackrel{{\mathrm{def}}}{{=}}\varphi\wedge\chi\wedge \llbracket y=w\rrbracket_{\mathcal{EUXF}}\). Then, as with WMI-PA, we enumerate the assignments in two main steps: 1. We first generate a set \(\mathcal{M}^{\mathbf{A}^{*}}\) of _partial_ assignments \(\mu^{\mathbf{A}^{*}}\) over the Boolean atoms \(\mathbf{A}\), s.t. \(\varphi^{**}_{[\mu^{\mathbf{A}^{*}}]}\) is \(\mathcal{LRA}\)-satisfiable and does not contain Boolean atoms anymore. (About assignment enumeration, recall Remark 1.) Unlike with WMI-PA, these assignments are generated in two steps. We first enumerate partial Boolean assignments by invoking \(\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{y}.\varphi^{**})\) and, for each assignment \(\mu^{\mathbf{A}}\), we build the (simplified) residual \(\varphi^{**}_{[\mu^{\mathbf{A}}]}\). Since \(\mu^{\mathbf{A}}\) is partial, however, \(\varphi^{**}_{[\mu^{\mathbf{A}}]}\) is not guaranteed to be free of Boolean atoms \(\mathbf{A}\). 6 If this is the case, we simply add \(\mu^{\mathbf{A}}\) to \(\mathcal{M}^{\mathbf{A}^{*}}\), otherwise we further invoke \(\mathcal{T}\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{y}.\varphi^{**}_{[\mu ^{\mathbf{A}}]})\) to assign the remaining Boolean atoms, ensuring that the residual now contains only \(\mathcal{L}\mathcal{R}\mathcal{A}\) atoms. Each assignment \(\mu^{\mathbf{A}}_{residual}\) to the remaining atoms is then conjoined to \(\mu^{\mathbf{A}}\), and \(\mu^{\mathbf{A}}\wedge\mu^{\mathbf{A}}_{residual}\) is added to \(\mathcal{M}^{\mathbf{A}^{*}}\). Footnote 6: see e.g. Example 8 in Appendix A if interested. 2. The second step is nearly identical to the "for" loop in WMI-PA. For each \(\mu^{\mathbf{A}^{*}}\) in \(\mathcal{M}^{\mathbf{A}^{*}}\) we enumerate a set \(\mathcal{M}^{\mathcal{L}\mathcal{R}}\stackrel{{\mathrm{def}}}{{=}} \mathcal{TA}(\exists\mathbf{y}.\varphi^{**}_{[\mu^{\mathbf{A}^{*}}]})\) of \(\mathcal{L}\mathcal{R}\mathcal{A}\)-satisfiable partial assignments satisfying \(\varphi^{**}_{[\mu^{\mathbf{A}^{*}}]}\). (Like in WMI-PA, if \(\varphi^{**}_{[\mu^{\mathbf{A}^{*}}]}\) is a conjunction of literals, then we have only one \(\mu^{\mathcal{L}\mathcal{R}\mathcal{A}}\stackrel{{\mathrm{def}}}{{= }}\varphi^{**}_{[\mu^{\mathbf{A}^{*}}]}\).) For each \(\mu^{\mathcal{L}\mathcal{R}\mathcal{A}}\in\mathcal{M}^{\mathcal{L}\mathcal{R}}\) we compute the integral \(\mathsf{WMI}_{\mathsf{nb}}(\mu^{\mathcal{L}\mathcal{R}},w_{[\mu^{\mathbf{A}^{ *}}]}|\mathbf{x})\). Unlike with WMI-PA, since \(\mu^{\mathbf{A}^{*}}\) can be partial and thus represent \(2^{|\mathbf{A}\setminus\mu^{\mathbf{A}^{*}}|}\) total assignments, we multiply the integral by a \(2^{|\mathbf{A}\setminus\mu^{\mathbf{A}^{*}}|}\) factor (as in (8)) and add it to the result. ### Encoding a Conditional Skeleton Explicitly We propose a novel approach, which is both much simpler and more effective than that from [1], in which the skeleton is generated explicitly. 
#### 5.3.1 Encoding

Algorithm 2 shows the procedure for building explicitly a formula \(\mathsf{sk}(w)\). Intuitively, the output consists of a conjunction of formulas, each stating "_if all the conditions \(\psi_{1},..,\psi_{i}\) of the current sub-branch in \(w\) hold, then split on the next condition \(\psi_{i+1}\) in the branch_". For example, for the problem in Example 1, Algorithm 2 produces as \(\mathsf{sk}(w)\) the formula reported in Example 2, whose satisfying partial assignments can be enumerated as in Example 3.

The recursive procedure \(\mathsf{Convert}_{\mathcal{SK}}(w_{i},\mathsf{conds})\) takes as input the current subtree \(w_{i}\) of the weight function \(w\) and the set of literals \(\mathsf{conds}\) representing the conditions under whose scope \(w_{i}\) occurs, and returns the CNF representation of \(\bigwedge_{\psi_{i}\in\mathsf{conds}}\psi_{i}\to\mathsf{sk}(w_{i})\) (i.e., it returns \([\![\bigvee_{\psi_{i}\in\mathsf{conds}}\neg\psi_{i}\vee\mathsf{sk}(w_{i})]\!]\)). Hence, \(\mathsf{Convert}_{\mathcal{SK}}(w,\,\emptyset)\) returns \(\mathsf{sk}(w)\). Notice that the set \(\mathsf{conds}\) contains the conditions that need to be true in order to activate the current branch. We first focus on lines 1-12, in which we assume that the weight conditions \(\psi_{i}\) are literals (the behaviour in case of non-literal conditions, lines 14-18, will be explained in §5.3.2). The recursive encoding of \(\mathsf{sk}(w)\) works as follows:

(Lines 1-2): A constant or a polynomial does not contain weight conditions. Thus, we can simply encode it with \(\top\). Notice that here \(\mathsf{conds}\) has no role, because \((\bigwedge_{\psi_{i}\in\mathsf{conds}}\psi_{i})\to\top\) reduces to \(\top\).

(Lines 3-4): A term representing an arithmetic operator \(w_{1}\bowtie w_{2}\), with \(\bowtie\in\{+,-,\cdot\}\), must ensure that the conditions in both branches \(w_{1},w_{2}\) are enumerated. Thus, we encode it by conjoining the results of the conversion procedure on \(w_{1}\) and \(w_{2}\).

(Lines 5-6): Similarly, a term representing an unconditioned function \(g(w_{1},\ldots,w_{k})\) must ensure that the conditions in all the branches \(w_{1},\ldots,w_{k}\) are enumerated. Thus, we encode it by conjoining the results of the conversion procedure on \(w_{1},\ldots,w_{k}\).

(Lines 7-12): When encoding a conditional term in the form (If \(\psi\) Then \(w_{1}\) Else \(w_{2}\)), if \(\psi\) is a literal, we have the following:

(Line 9): When all \(\mathsf{conds}\) are true, then \(\psi\) must be split upon. This fact is encoded by the valid clause:

\[\bigvee_{\psi_{i}\in\mathsf{conds}}\neg\psi_{i}\vee\psi\vee\neg\psi. \tag{10}\]

(Line 10): When all \(\mathsf{conds}\) are true and \(\psi\) is true, all the branches of \(w_{1}\) must be enumerated. This is encoded by recursively calling the conversion procedure on \(w_{1}\), adding \(\psi\) to the conditions \(\mathsf{conds}\) that need to be true to activate the branch of \(w_{1}\), which returns \([\![\bigvee_{\psi_{i}\in\mathsf{conds}}\neg\psi_{i}\vee\neg\psi\vee\mathsf{sk}(w_{1})]\!]\).

(Line 11): Similarly, when all \(\mathsf{conds}\) are true and \(\psi\) is false, all the branches of \(w_{2}\) must be enumerated. This is encoded by recursively calling the conversion procedure on \(w_{2}\), adding \(\neg\psi\) to \(\mathsf{conds}\), which returns \([\![\bigvee_{\psi_{i}\in\mathsf{conds}}\neg\psi_{i}\vee\psi\vee\mathsf{sk}(w_{2})]\!]\).
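The following Python sketch (our own rendering of the case analysis above, restricted to literal conditions; the tree classes and function names are not those of the actual implementation) builds the clauses of \(\mathsf{sk}(w)\) recursively and, run on the weight function of Example 1, produces exactly the five clauses of Example 2.

```python
from dataclasses import dataclass
from typing import Union

# Minimal sketch (ours) of the skeleton construction for literal conditions.
# Conditions/literals are strings; "~p" denotes the negation of "p".
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

@dataclass
class Leaf:            # an FI^LRA weight (e.g. a polynomial), no conditions inside
    expr: str

@dataclass
class Op:              # arithmetic combination (product, sum, ...) of sub-weights
    args: tuple

@dataclass
class Ite:             # If cond Then w1 Else w2, with cond a literal
    cond: str
    w1: "Weight"
    w2: "Weight"

Weight = Union[Leaf, Op, Ite]

def convert_sk(w, conds=()):
    """Return sk(w) as a list of clauses (each clause is a list of literals)."""
    if isinstance(w, Leaf):
        return []                                                   # lines 1-2: encode as True
    if isinstance(w, Op):
        return [c for a in w.args for c in convert_sk(a, conds)]    # lines 3-6
    guard = [neg(l) for l in conds]                                 # lines 7-12
    return ([guard + [w.cond, neg(w.cond)]]
            + convert_sk(w.w1, conds + (w.cond,))
            + convert_sk(w.w2, conds + (neg(w.cond),)))

# Weight function of Example 1:
w = Ite("A1",
        Ite("x1>=1",
            Ite("x2>=1", Leaf("x1^2*x2"), Leaf("x1^3*x2")),
            Ite("x2>=2", Leaf("x1*x2^2"), Leaf("x1*x2^3"))),
        Ite("A2", Leaf("2*x1*x2"), Leaf("3*x1*x2")))

for clause in convert_sk(w):
    print(" v ".join(clause))
# A1 v ~A1
# ~A1 v x1>=1 v ~x1>=1
# ~A1 v ~x1>=1 v x2>=1 v ~x2>=1
# ~A1 v x1>=1 v x2>=2 v ~x2>=2
# A1 v A2 v ~A2
```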
As with \(\exists\mathbf{y}.\llbracket y=w\rrbracket_{\mathcal{EUTF}}\) and unlike with KC approaches, it is easy to see that \(\mathsf{sk}(w)\) grows linearly in size w.r.t. \(w\). E.g., with (7), \(\mathsf{sk}(w)\) is \(\bigwedge_{i=1}^{N}(\psi_{i}\vee\neg\psi_{i})\), whose size grows linearly rather than exponentially w.r.t. \(N\). #### 5.3.2 Dealing with non-literal conditions The above description of the skeleton algorithm assumes that the conditions in \(\mathbf{\Psi}\) are single literals, either Boolean or \(\mathcal{LRA}\) ones. In general, this is not necessarily the case, for instance it is common to have _range conditions_ in the form \((x_{i}\geq l)\wedge(x_{i}\leq u)\) or even more complex conditions. In these cases, the skeleton formula is not in CNF form. In fact, \(\mathsf{CNF}(\bigwedge_{i}\psi_{i}\to\varphi)\) can be straightforwardly computed out of \(\mathsf{CNF}(\varphi)\) only if the \(\psi_{i}\)s are literals, because this reduces to augment each clause in \(\mathsf{CNF}(\varphi)\) with \(\bigvee_{i}\neg\psi_{i}\). If this is not the case, then the CNF-ization either may cause a blow-up in size if equivalence-preserving CNF-ization is applied, or it may require some variant of Tseitin CNF-ization [35]. SMT-solvers typically rely on the second option for satisfiability checks. For what enumeration is concerned, however, introducing new atomic labels that define complex sub-formulas can force the enumeration procedure to produce more partial assignments than necessary. This is what happens with WMI-PA, and also with SA-WMI-PA, when conditions in \(\mathbf{\Psi}\) are non-literals. We illustrate this problem in Example 4. Our proposed solution will be eventually illustrated in Example 5. **Example 4**.: _Consider the following WMI problem:_ \[\varphi=\top\] \[\chi=(x_{1}\geq 0)\wedge(x_{1}\leq 6)\wedge(x_{2}\geq 0)\wedge(x_{2} \leq 3)\] \[w=\mathsf{If}\ (x_{1}\leq 4)\ \mathsf{Then}\] \[(\mathsf{If}\ (x_{1}\leq 2)\vee((x_{1}\leq 3)\wedge(x_{2}>1))\ \mathsf{ Then}\ f_{1}(\mathbf{x})\ \mathsf{Else}\ f_{2}(\mathbf{x}))\ \mathsf{Else}\] \[(\mathsf{If}\ (x_{2}>2)\vee((x_{1}>5)\wedge(x_{2}>\tfrac{3}{2}))\ \mathsf{ Then}\ f_{3}(\mathbf{x})\ \mathsf{Else}\ f_{4}(\mathbf{x}))\] Figure 4: Graphical representation of the effectiveness of SA-WMI-PA-SK as compared to the alternatives: (a) WMI problem from Example 4, with 4 areas of \(\mathsf{FI}^{\mathcal{LRA}}\) weights. Notice that the weight function includes non-literal conditions; (b) WMI-PA computes _20 distinct integrals_; (c) SA-WMI-PA, using both the implicit and explicit non-CNF skeleton versions, computes _11 distinct integrals_; (d) SA-WMI-PA-SK, that uses the “local” CNF explicit skeleton, computes _8 distinct integrals_, the minimum number of integrals that is needed to retain convexity of their areas (i.e., \(\mathsf{FI}^{\mathcal{LRA}}\) weights). s.t. the \(f_{i}(\mathbf{x})\)s are condition-less functions, which is graphically represented in Figure 4a. Notice that some conditions of the weight function are not literals. 
Thus, if we wrote \(\varphi^{***}\stackrel{{\text{\tiny def}}}{{=}}\varphi\wedge\chi \wedge\mathsf{sk}(w)\) without concerning about this fact, then we would obtain the following non-CNF formula to feed to the SMT solver:_ \[\begin{array}{l}(x_{1}\geq 0)\wedge(x_{1}\leq 6)\wedge(x_{2}\geq 0)\wedge(x_{ 2}\leq 3)\wedge\\ ((x_{1}\leq 4)\vee\neg(x_{1}\leq 4))\wedge\\ (\neg(x_{1}\leq 4)\vee\psi_{1}\vee\neg\psi_{1})\wedge\\ (\ \ (x_{1}\leq 4)\vee\psi_{2}\vee\neg\psi_{2})\end{array}\] _where_ \[\psi_{1}\stackrel{{\text{\tiny def}}}{{=}}\overbrace{(x_{1} \leq 2)\vee\underbrace{((x_{1}\leq 3)\wedge(x_{2}>1))}_{B_{3}}}^{B_{1}},\,\psi_{2} \stackrel{{\text{\tiny def}}}{{=}}\overbrace{(x_{2}>2)\vee \underbrace{((x_{1}>5)\wedge(x_{2}>\frac{3}{2}))}_{B_{4}}}^{B_{2}}\] _The solver would apply Tseitin CNF-ization, labelling the subformulas by fresh Boolean atoms \(\{B_{1},B_{2},B_{3},B_{4}\}\) as above, producing the CNF formula \(\phi\)7:_ Footnote 7: Here we colour the labelling clauses introduced by \(\mathsf{CNF_{ts}}(\dots)\) as the corresponding regions in Figure 4. We recall that \(\mathsf{CNF_{ts}}(B_{i}\leftrightarrow\psi_{i})\) is \(\mathsf{CNF_{ts}}(B_{i}\to\psi_{i})\wedge\mathsf{CNF_{ts}}(B_{i}\leftarrow\psi _{i})\). \[\begin{array}{l}(x_{1}\geq 0)\wedge(x_{1}\leq 6)\wedge(x_{2}\geq 0) \wedge(x_{2}\leq 3)\wedge\\ \left(\begin{array}{c}(x_{1}\leq 4)\vee\neg(x_{1}\leq 4)\end{array}\right) \wedge\\ \left(\neg(x_{1}\leq 4)\vee\ \ B_{1}\vee\neg B_{1}\right)\wedge\\ \left(\neg B_{1}\vee\ \ (x_{1}\leq 2)\vee\ \ B_{3}\right)\wedge\\ \left(\neg B_{3}\vee\ \ (x_{1}\leq 3)\right)\wedge\left(\neg B_{3}\vee\ \ (x_{2}>1) \right)\wedge\\ \left(\begin{array}{c}B_{3}\vee\neg(x_{1}\leq 3)\vee\neg(x_{2}>1) \end{array}\right)\wedge\\ \left(\begin{array}{c}B_{1}\vee\neg(x_{1}\leq 2)\end{array}\right)\wedge\left(\begin{array}{c}B_{1} \vee\neg B_{3}\end{array}\right)\wedge\\ \left(\begin{array}{c}(x_{1}\leq 4)\vee\ \ B_{2}\vee\neg B_{2}\end{array}\right) \wedge\\ \left(\neg B_{2}\vee\ \ (x_{2}>2)\vee\ \ B_{4}\right)\wedge\\ \left(\neg B_{4}\vee\ \ (x_{1}>5)\right)\wedge\left(\neg B_{4}\vee\ \ (x_{2}>\frac{3}{2}) \right)\wedge\\ \left(\begin{array}{c}B_{4}\vee\neg(x_{1}>5)\vee\neg(x_{2}>\frac{3}{2}) \end{array}\right)\wedge\\ \left(\begin{array}{c}B_{2}\vee\neg(x_{2}>2)\end{array}\right)\wedge\left( \begin{array}{c}B_{2}\vee\neg B_{4}\end{array}\right)\end{array}\] _and then compute \(\mathcal{TA}(\varphi^{***})\) in (8) as \(\mathcal{TA}(\exists\mathbf{B}.\phi)\), s.t. the \(B_{i}\)s are not in the relevant set of atoms for the projected enumeration (see SS3.1)._ _Consider, e.g., the partial truth assignment:_ \[\mu\stackrel{{\mbox{\tiny def}}}{{=}} \{(x_{1}\geq 0),\ \ (x_{1}\leq 6),\ \ (x_{2}\geq 0),\ \ (x_{2}\leq 3)\}\ \ \cup\] \[\{(x_{1}\leq 4),\neg(x_{1}\leq 2),\neg(x_{1}\leq 3),\neg B_{1}, \neg B_{3}\}\] _which would be sufficient to make \(w_{[\mu]}\)\(\mathsf{FI}^{\mathcal{CRA}}\) because it suffices to identify \(f_{2}(\mathbf{x})\). Unfortunately, \(\mu\) does not suffice to evaluate (11) to true because of the last six clauses, which represent \(B_{2}\leftrightarrow((x_{2}>2)\vee((x_{1}>5)\wedge(x_{2}>\frac{3}{2})))\). Thus, when computing \(\mathcal{TA}(\exists\mathbf{B}.\phi)\), the solver is forced to assign truth values also to other atoms, in particular to \(B_{2}\) and \(B_{4}\). 
Whereas \(B_{4}\) can be consistently assigned only to false due to the clause \((\neg B_{4}\vee(x_{1}>5))\) and the fact that \((x_{1}\leq 4)\) is true, \(B_{2}\) can consistently assume both truth values, so that the solver is forced to branch on \(B_{2}\) and \(\neg B_{2}\), which respectively force \((x_{2}>2)\) and \(\neg(x_{2}>2)\), causing the unnecessary generation of two distinct integrals on \(f_{2}(\mathbf{x})\), for \(x_{1}\in[3,4],x_{2}\in[0,2]\) and \(x_{1}\in[3,4],x_{2}\in]2,3]\) respectively, instead of only one on \(x_{1}\in[3,4],x_{2}\in[0,3]\). (Notice that the atom \((x_{2}>2)\) belongs to a condition in the "\((x_{1}\leq 4)=\bot\)" branch of \(w\), whereas with \(\mu\) we are actually considering the "\((x_{1}\leq 4)=\top\)" branch.)_

_This issue is graphically represented in Figure 4c, where we show the regions enumerated by the \(\mathrm{SA}\)-WMI-PA procedure on the above problem, by using either the implicit version of \(\mathsf{sk}(w)\) as in the original algorithm or the non-CNF explicit version of \(\mathsf{sk}(w)\): 11 convex regions are enumerated. Notice that the brown, red and grey areas are each split into three regions, whereas it is possible to split each of them into only two convex regions. \(\diamond\)_

The source of the problem is that, with Tseitin CNF-ization, each "labelling definition" \(\mathsf{CNF_{ts}}(B_{i}\leftrightarrow\psi_{i})\) is conjoined to the rest of the formula, so that the SMT solver is forced to enumerate the different ways to satisfy it _even when the rest of the assignment selects a branch which does not involve the condition \(\psi_{i}\)_. Notice that the fact that \(B_{i}\) is implicitly treated as existentially quantified, so that it is not in the relevant-atom set for the projected enumerator (see §3.1), does not help, because \(\psi_{i}\) and the atoms in it are in the relevant-atom set, s.t. the enumerator is forced to split on their truth values anyway. (E.g., in Example 4, although \(B_{2}\) is implicitly existentially quantified, the enumerator is forced to split on \((x_{2}>2)\) anyway.)

To cope with this problem, we propose a variant of the Tseitin CNF-ization for the skeleton where each "labelling definition" \(\mathsf{CNF_{ts}}(B\leftrightarrow\psi)\) and all occurrences of \(B\) are "local", that is, they occur in the formula only implied by the conditions in \(\mathsf{conds}\), so that they are simplified to true unless the assignment selects a branch which satisfies all conditions in \(\mathsf{conds}\). Thus, in Algorithm 2, lines 14-18, we substitute each non-literal condition \(\psi\) in \(\mathsf{branch}\) and \(\mathsf{defs}_{i}\) from lines 9-12 with a fresh Boolean atom \(B\), and we add its labelling definition in guarded form, i.e., each of its clauses is disjoined with \(\bigvee_{\psi^{\prime}\in\mathsf{conds}}\neg\psi^{\prime}\), so that the definition is trivially satisfied, and hence never forces any branching, unless all the conditions in \(\mathsf{conds}\) are assigned to true.

**Example 5**.: _Consider again the WMI problem of Example 4, encoding \(\mathsf{sk}(w)\) with the "local" CNF-ization described above. The labelling definitions of \(\psi_{1}\) and \(\psi_{2}\) then occur only in clauses containing \(\neg(x_{1}\leq 4)\) and \((x_{1}\leq 4)\) respectively, so that each of them is trivially satisfied whenever the corresponding branch of \(w\) is not selected. (Here, as in Example 4, \(B_{1}\) and \(B_{2}\) label \(\psi_{1}\) and \(\psi_{2}\), whereas \(B_{3},B_{4}\) are labels introduced by \(\mathsf{CNF}_{\mathsf{pg}}(\ldots)\), which do not produce valid clauses in the form \((\ldots\lor B_{i}\lor\neg B_{i})\).) For instance, the assignment \(\mu\) in (12) evaluates all the clauses of \(\varphi^{\mathtt{***}}\) to true, so that the solver considers it as a satisfying assignment for the formula. In Figure 4d we show the regions enumerated by the \(\textsc{SA}\)-WMI-PA-SK procedure (§5.3.3), using as skeleton the "local" CNF version of \(\mathsf{sk}(w)\). We see that 8 regions are enumerated, instead of the 11 of Figure 4c. In this case 8 is also the minimum number of convex areas into which we can split the problem, because the 4 distinct coloured areas in Figure 4a are non-convex, and as such their integral cannot be computed with single integrations. \(\diamond\)_

We stress the importance of the locality of the labelling clauses in the enumeration of truth assignments: the SMT enumerator needs to branch on \(B\) (hence, on \(\psi\)) _only in those branches in which all \(\mathsf{conds}\) are true_, that is, only in those branches where a truth value for the condition \(\psi\) is needed to define a branch of \(w\). Notice that, if \(\mathsf{CNF}_{\mathsf{pg}}\) introduces some labelling definitions, then also the latter are active only when all \(\mathsf{conds}\) are true.

#### 5.3.3 The SA-WMI-PA-SK Procedure

We devised a new algorithm, namely SA-WMI-PA-SK, that _explicitly_ encodes the weight function as \(\mathsf{sk}(w)\) (§5.3.1), handles non-literal conditions effectively (§5.3.2), and introduces some further improvements in the enumeration, which we describe below. The outline of the procedure is shown in Algorithm 3. (To simplify the narration, here we assume that the encoding of \(\mathsf{sk}(w)\) did not cause the introduction of fresh Boolean variables \(\mathbf{B}\), see §5.3.2; if not, then \(\mathcal{TA}(\exists\mathbf{x}.\varphi^{***})\) and \(\mathcal{TA}(\varphi^{***}_{[\mu^{\mathbf{A}}]})\) in Algorithm 3 should be replaced with \(\mathcal{TA}(\exists\mathbf{x}.\exists\mathbf{B}.\varphi^{***})\) and \(\mathcal{TA}(\exists\mathbf{B}.\varphi^{***}_{[\mu^{\mathbf{A}}]})\) respectively.)

* (Lines 1-3) After producing \(\mathsf{sk}(w)\) by Algorithm 2, we first enumerate by projected AllSMT a set \(\mathcal{M}^{\mathbf{A}}\) of _partial_ truth assignments \(\mu^{\mathbf{A}}\) over \(\mathbf{A}\), s.t. \(\mathcal{M}^{\mathbf{A}}\stackrel{{\mathrm{def}}}{{=}}\mathcal{TA}(\exists\mathbf{x}.(\varphi\wedge\chi\wedge\mathsf{sk}(w)))\).
* (Lines 5-14) For each \(\mu^{\mathbf{A}}\in\mathcal{M}^{\mathbf{A}}\) we compute and simplify the residual formula \(\varphi^{***}_{[\mu^{\mathbf{A}}]}\). As with WMI-PA and SA-WMI-PA, if \(\varphi^{***}_{[\mu^{\mathbf{A}}]}\) is already a conjunction of literals, then we can simply compute and return its integral, which is multiplied by a \(2^{k}\) factor, \(k\) being the number of unassigned Boolean atoms in \(\mu^{\mathbf{A}}\).11 If this is not the case, we proceed by enumerating a set \(\mathcal{M}\) of _partial_ truth assignments \(\mu=\mu^{\mathbf{A}\prime}\wedge\mu^{\mathcal{LRA}}\) over the remaining atoms of the formula, s.t. \(\mathcal{M}\stackrel{{\mathrm{def}}}{{=}}\mathcal{TA}(\exists\mathbf{x}.\varphi^{***}_{[\mu^{\mathbf{A}}]})\). Notice that, since \(\mu^{\mathbf{A}}\) is partial, \(\varphi^{***}_{[\mu^{\mathbf{A}}]}\) could still contain Boolean atoms (see Examples 6 and 8), so that \(\mu\) could contain some non-empty Boolean component \(\mu^{\mathbf{A}\prime}\). We then compute the integral of the \(\mathsf{FI}^{\mathcal{LRA}}\) function \(w_{[\mu^{\mathbf{A}}\wedge\mu]}\) over the convex area \(\mu^{\mathcal{LRA}}\). As above, we need to multiply this integral by a \(2^{k^{\prime}}\) factor, \(k^{\prime}\stackrel{{\mathrm{def}}}{{=}}k-|\mu^{\mathbf{A}\prime}|\) being the number of unassigned Boolean atoms in \(\mu^{\mathbf{A}}\wedge\mu\).

Footnote 11: Notice that in this case the conjunction of literals \(\varphi^{***}_{[\mu^{\mathbf{A}}]}\) does not contain Boolean literals, because they would have already been assigned by \(\mu^{\mathbf{A}}\).
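To make the control flow of this two-level enumeration more concrete, the following Python sketch renders the two items above in schematic form. It is only an illustration of the loop structure under simplifying assumptions, not the actual implementation: all helper callables (`enumerate_partial`, `simplify`, `as_cube`, `integrate`) are hypothetical placeholders standing in for the projected AllSMT enumeration (performed by the SMT solver) and for the backend integrator, and partial assignments are assumed to be represented as lists of (atom, truth-value) pairs.

```python
# Schematic sketch of the two-level enumeration of SA-WMI-PA-SK (Algorithm 3).
# All helper callables are hypothetical placeholders, not an actual API:
#   enumerate_partial(f, atoms) -> iterable of partial assignments over `atoms`
#                                  (atoms=None means: all remaining atoms of f)
#   simplify(f, mu)             -> residual of f under the partial assignment mu
#   as_cube(f)                  -> list of literals if f is a conjunction of literals, else None
#   integrate(mu_A, nu)         -> integral of w restricted by mu_A and nu over the
#                                  polytope defined by the LRA literals in nu
def sa_wmi_pa_sk(phi3, bool_atoms, enumerate_partial, simplify, as_cube, integrate):
    """phi3 stands for phi ^ chi ^ sk(w); returns its weighted model integral."""
    total = 0.0
    # Outer level: partial Boolean assignments mu_A, i.e. TA(exists x. phi3).
    for mu_A in enumerate_partial(phi3, bool_atoms):
        residual = simplify(phi3, mu_A)                 # phi3 restricted by mu_A
        k = len(bool_atoms) - len(mu_A)                 # unassigned Boolean atoms
        cube = as_cube(residual)
        if cube is not None:
            total += (2 ** k) * integrate(mu_A, cube)   # one integral, weighted by 2^k
            continue
        # Inner level: partial assignments mu over the remaining (Boolean and LRA) atoms.
        for mu in enumerate_partial(residual, None):
            n_bool = sum(1 for atom, _ in mu if atom in bool_atoms)
            total += (2 ** (k - n_bool)) * integrate(mu_A, mu)
    return total
```

The \(2^{k}\) and \(2^{k-|\mu^{\mathbf{A}\prime}|}\) factors account for the Boolean atoms left unassigned by the partial assignments, exactly as in the two items above; in an actual implementation, the two enumeration calls and the integration call are where MathSAT and LattE (or VolEsti, see §7) would plug in.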
As with WMI-PA and SA-WMI-PA, in the implementation (Lines 3-5 and 10-12) the sets \(\mathcal{M}^{\mathbf{A}}\) and \(\mathcal{M}\) are not generated explicitly; rather, their elements are generated, integrated and then dropped one-by-one, so as to avoid space blow-up (see Remark 1).

We remark that producing smaller partial assignments over the Boolean atoms, as done with SA-WMI-PA-SK, has a positive effect regardless of the structure of the weight function. In Example 6 we give an intuition of the benefits introduced by the improvements of the enumeration phase in SA-WMI-PA-SK.

**Example 6**.: _Consider the following WMI problem:_

\[\varphi =(A_{1}\lor A_{2}\lor A_{3})\wedge(\neg A_{1}\lor(x\geq 1))\wedge(\neg A_{2}\lor(x\geq 2))\wedge(\neg A_{3}\lor(x\leq 3))\]
\[\chi =(x\geq 0)\wedge(x\leq 4)\]
\[w =1.0\]

_(\(w\) is constant and \(\boldsymbol{\Psi}=\emptyset\), thus \(\bigwedge_{i}B_{i}\leftrightarrow\psi_{i},\ \exists\mathbf{y}.[\![y=w]\!]_{\mathcal{EUF}},\ \mathsf{sk}(w)\) reduce to \(\top\) and \(\varphi^{*}=\varphi^{**}=\varphi^{***}\stackrel{{\text{def}}}{{=}}\varphi\wedge\chi\), so that the kind of conditional skeleton used is irrelevant.) Both WMI-PA and SA-WMI-PA enumerate the 7 assignments listed in Table 1(a), whereas with SA-WMI-PA-SK the number of assignments enumerated shrinks to 5, as shown in Table 1(b)._

\begin{table} \end{table} Table 1: Assignments enumerated by WMI-PA and SA-WMI-PA (a) and by SA-WMI-PA-SK (b) for the problem in Example 6. (We omit the \(\mathcal{LRA}\)-literals which are implied by the others and do not contribute to the integral, e.g., \((x\geq 0)\) when \((x\geq 2)\) is true.)

_With WMI-PA, \(\mathcal{TA}(\exists\mathbf{x}.\varphi^{*})\) directly generates in one step the 7 total truth assignments \(\mu^{\mathbf{A}}\wedge\mu^{\mathbf{A}}_{residual}\) in Table 1(a)._

_With both SA-WMI-PA and SA-WMI-PA-SK, \(\mathcal{TA}(\exists\mathbf{x}.\varphi^{**})\) and \(\mathcal{TA}(\exists\mathbf{x}.\varphi^{***})\) produce the partial assignment \(\mu^{\mathbf{A}}=\{A_{2}\}\), so that \(\varphi^{**}_{[\mu^{\mathbf{A}}]}\) and \(\varphi^{***}_{[\mu^{\mathbf{A}}]}\) reduce to \((x\geq 0)\wedge(x\leq 4)\wedge(\neg A_{1}\vee(x\geq 1))\wedge(x\geq 2)\wedge(\neg A_{3}\vee(x\leq 3))\)._
_Then:_

_with SA-WMI-PA, \(\mathcal{TTA}(\exists\mathbf{x}.\varphi^{**}_{[\mu^{\mathbf{A}}]})\) enumerates all 4 total residual assignments on \(A_{1},A_{3}\),12 duplicating the two integrals on \(x\in[2,3]\) and \(x\in[2,4]\) by uselessly case-splitting on \(A_{1}\) and \(\neg A_{1}\), as in the last four rows in Table 1(a);_

Footnote 12: Here \(\exists\mathbf{x}.\varphi^{**}_{[\mu^{\mathbf{A}}]}\) corresponds to a valid Boolean formula on \(\{A_{1},A_{3}\}\).

_with SA-WMI-PA-SK, \(\mathcal{TA}(\varphi^{***}_{[\mu^{\mathbf{A}}]})\) enumerates only the two partial assignments:13_
_\(\{\;\;A_{3},(x\geq 0),(x\leq 4),(x\geq 1),(x\geq 2),(x\leq 3)\}\),_
_\(\{\neg A_{3},(x\geq 0),(x\leq 4),(x\geq 1),(x\geq 2)\}\),_
_as reported in the last two rows in Table 1(b). The two corresponding integrals are multiplied by \(2^{(3-|\{A_{2}\}|-|\{A_{3}\}|)}=2\) and \(2^{(3-|\{A_{2}\}|-|\{\neg A_{3}\}|)}=2\) respectively. \(\diamond\)_

Footnote 13: Here the truth value of \(A_{1}\) is irrelevant because \((x\geq 1)\) is forced to be true by \((x\geq 2)\).

The benefit becomes more evident as the number of Boolean atoms increases, since avoiding the computation of \(\mathcal{TTA}(\exists\mathbf{x}.\varphi^{**}_{[\mu^{\mathbf{A}}]})\) over the residual Boolean atoms potentially cuts the number of integrals by a large factor.

## 6 Experimental Evaluation

In the following experiments, we evaluate the performance gain of SA-WMI-PA-SK over the current state of the art in WMI. Specifically, we explore the effect of the novel structure encoding using the same setting as our recent conference paper [1]. To this end, we compare the proposed algorithms, SA-WMI-PA and SA-WMI-PA-SK, with the previous SMT-based approach WMI-PA [19], and with the state-of-the-art KC-based approaches: XADD [22], XSDD and FXSDD [24]. For WMI-PA, SA-WMI-PA and SA-WMI-PA-SK, we use a slightly-modified version of MathSAT [37]14 for SMT enumeration.15

Footnote 14: We have added one option to MathSAT allowing to disable the simplification of valid clauses, which would otherwise eliminate all the clauses “\((...\vee\psi_{i}\vee\neg\psi_{i})\)” from \(\mathsf{sk}(w)\). The binary file integrating these modifications is attached to the code of SA-WMI-PA-SK.

Convex integration is handled by LattE Integrale [39] in the SMT-based approaches, whereas KC-based solvers use PSI Algebra for both integration and inconsistency checking [40]. This difference does not penalize the KC-based approaches, which were shown to achieve worse results when using LattE [24]. All experiments are performed on an Intel Xeon Gold 6238R @ 2.20GHz 28-core machine with 128 GB of RAM running Ubuntu Linux 20.04. The code of both SA-WMI-PA and SA-WMI-PA-SK is freely available at [https://github.com/unitn-sml/wmi-pa](https://github.com/unitn-sml/wmi-pa), and the benchmarks are available at [https://github.com/weighted-model-integration/wmi-benchmarks](https://github.com/weighted-model-integration/wmi-benchmarks) in pywmi [41] format.

We report our findings using cactus plots. Each tick on the x-axis corresponds to a single instance, and results on the y-axis are sorted independently for each method. Steeper slopes mean lower efficiency.

### Synthetic experiments

Dataset description. As in our previous paper, random formulas are obtained using the generator introduced by Morettin et al. [19].
In contrast with other recent works [22; 24], these synthetic benchmarks do not contain strong structural regularities, offering a more neutral perspective on how the different techniques are expected to perform on average. We fix the number of Boolean and real variables to 3, while varying the depth of the weights in the range [4; 7]. The timeout is set to 3600 seconds.

Results. Consistent with previous papers, in this setting SMT-based solvers obtain superior performance with respect to KC-based ones (Figure 5 left), which suffer from the complex algebraic dependencies between continuous variables. Our structure-aware algorithms, SA-WMI-PA and SA-WMI-PA-SK, further drastically improve over WMI-PA in terms of time and number of computed integrals (Figure 5). These results are aligned with our expectations. Indeed, the advantage of structure-awareness given by the introduction of either form of skeleton is eye-catching. Additionally, SA-WMI-PA-SK is able to gain some advantage over its previous version SA-WMI-PA, although the runtimes remain within the same order of magnitude. Notice that in this setting the number of Boolean atoms is limited to 3, so that the revised enumeration technique of §5.3.3 for the Boolean component of the formula does not show its full potential. The benefits of the novel algorithm SA-WMI-PA-SK with respect to SA-WMI-PA (and WMI-PA) will be evident in the next experiment, where more complex logical structures are involved.

Figure 5: Cactus plots of the synthetic experiments reporting execution time (left) for all the methods; number of integrals (right) for WMI-PA, SA-WMI-PA and SA-WMI-PA-SK.

### Inference on Density Estimation Trees

Dataset description. We then consider the problem of computing arbitrary algebraic queries over _Density Estimation Trees_ (DETs) [42]. DETs are the density estimation equivalent of decision or regression trees, encoding piecewise constant distributions over mixed continuous/discrete domains. Having only univariate conditions in the internal nodes, DETs support efficient density estimation and marginal inference over single variables. Answering algebraic queries over multiple variables, like \(\Pr(X\leq Y)\), requires instead marginalizing over an oblique constraint, which reduces to:

\[\Pr(X\leq Y)=\frac{\mathsf{WMI}((X\leq Y)\wedge\chi_{DET},w_{DET})}{\mathsf{WMI}(\chi_{DET},w_{DET})}. \tag{13}\]

This type of query commonly arises when applying WMI-based inference to probabilistic formal verification tasks, when properties involve multiple continuous variables. The support \(\chi_{DET}\) is obtained by conjoining the lower and upper bounds of each continuous variable, while \(w_{DET}\) encodes the density estimate by means of nested if-then-elses and non-negative constant leaves. Consistent with our previous conference paper [1], we train DETs on a selection of hybrid datasets from the UCI repository [43]. The datasets and resulting models are summarized in Appendix B. Following the approach of [44], discrete numerical features are relaxed into continuous variables, while \(n\)-ary categorical features are one-hot encoded with \(n\) binary variables. DETs are trained on each dataset using the standard greedy procedure [42] with bounds on the number of instances for each leaf set to \([100,200]\).
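Before turning to the results, the following toy snippet illustrates the kind of query in Eq. (13): it estimates \(\Pr(X\leq Y)\) as a ratio of two weighted integrals for a hand-made piecewise-constant density over a box support, approximating both terms by uniform Monte Carlo sampling. The density, the support and all numbers are invented for illustration; this is not one of the trained DET models, nor the exact WMI computation used in the experiments below.

```python
import random

# Toy illustration of Eq. (13): Pr(X <= Y) as a ratio of two weighted model integrals,
# here approximated by uniform Monte-Carlo sampling over the support chi = [0,2] x [0,1].
# The piecewise-constant weight below is invented for illustration (it is NOT a trained
# DET); both WMI terms share the same box, so the volume factor cancels in the ratio.
def w(x, y):
    if x <= 1.0:
        return 0.8 if y <= 0.5 else 0.2
    return 0.3

def estimate_pr_x_le_y(n_samples=200_000, seed=0):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        x, y = rng.uniform(0.0, 2.0), rng.uniform(0.0, 1.0)
        weight = w(x, y)
        den += weight                      # ~ WMI(chi, w)
        if x <= y:
            num += weight                  # ~ WMI((X <= Y) ^ chi, w)
    return num / den

if __name__ == "__main__":
    print(f"Pr(X <= Y) ~= {estimate_pr_x_le_y():.3f}")
```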
We investigate the effect of algebraic dependencies by generating increasingly complex queries involving a number of variables \(k=\max(1,\lfloor H\cdot|\mathbf{x}|\rfloor)\), where \(H\in[0,1]\) is the ratio of continuous variables appearing in the inequality. For each value of \(H\), 5 random inequalities involving \(k\) variables are generated, for a total of 25 problem instances for each dataset. Given the larger number of problems in this setting, the timeout has been decreased to 1200 seconds.

Results. Figure 6 depicts the runtime of the algorithms for \(H\in\{0.25,0.5,0.75,1\}\).

Figure 6: Cactus plots representing average query execution times and standard deviation in seconds on a set of DET problems with \(H\in\{0.25,0.5,0.75,1\}\).

Thanks to the absence of arithmetic operations in the internal nodes, DETs are more suited for KC compared to the weight functions used in §6.1. While in the simplest inference cases (\(H\leq 0.5\)) substantial factorization of the integrals is possible, when the coupling between variables increases this advantage is outweighed by the combinatorial reasoning capabilities of SMT-based approaches. The efficacy of SA-WMI-PA-SK with respect to both SA-WMI-PA and WMI-PA is particularly evident on this batch of problems. Analyzing the number of integrals computed by the PA-based algorithms (Figure 7), we notice a drastic reduction in the number of integrals generated by SA-WMI-PA-SK. This behaviour is expected: the number of Boolean atoms is typically very high, so that, thanks to the revised enumeration strategy, we are able to drastically reduce the number of partial assignments over the Boolean component with respect to SA-WMI-PA. Crucially, in contrast with the recent SA-WMI-PA, the novel algorithm is able to compete with or even outperform KC-based solvers even when the oblique queries involve a low number of variables.

## 7 Extending WMI-PA with different integration approaches

A key feature of SMT-based solvers like WMI-PA, which has been inherited by our structure-aware procedures, is that of _being agnostic of the integration procedure used as backend_. Besides developing a novel structure encoding, we implemented a modular approach to PA-based solvers, allowing the use of different integration strategies, including approximate ones. This provides multiple advantages. First, the general integration procedure can be substituted with a specialized one, targeting specific weight families such as piecewise-constants or leveraging structural properties like full factorizability. Second, approximate integration procedures can be applied when the number of continuous variables makes exact integration of \(\mathsf{FIUC}^{\mathcal{LRA}}\) functions infeasible. Noticeably, the enumeration of convex polytopes is still exact, with the advantage that the approximation error does not grow multiplicatively. Third, it broadens the use of these solvers beyond the \(\mathsf{FIUC}^{\mathcal{LRA}}\) family, to
We explored the practical benefits of this approach with a variant of SA-WMI-PA-SK using Monte Carlo (MC) approximate integration [45]: \[\int_{\mu\mathcal{L}\mathcal{R}\mathcal{A}}w(\mathbf{x})\ \mathrm{d}\mathbf{x} \approx\overbrace{\int_{\mu\mathcal{L}\mathcal{R}\mathcal{A}}^{\mathsf{ Vol}(\mu\mathcal{L}\mathcal{R}\mathcal{A})}}^{\mathsf{Vol}(\mu\mathcal{L}\mathcal{R} \mathcal{A})}\cdot\mathbb{E}_{x\sim\mathcal{U}(\mu\mathcal{L}\mathcal{R} \mathcal{A})}[w(x)] \tag{14}\] Figure 7: Cactus plots representing the number of computed integrals on a set of DET problems with \(H\in\{0.25,0.5,0.75,1\}\). In practice, our MC integration procedure is implemented via VolEsti[46], a library for volume approximation and sampling in convex polytopes. VolEsti is used for both approximating \(\mathsf{Vol}(\mu^{\mathcal{ERA}})\) and for sampling \(w\). We showcase the resulting solver, dubbed SA-WMI-PA-SK (VolEsti), in two scenarios: (i) scaling up weighted model integration with piecewise polynomial weights and (ii) solving problems with Gaussian weights. For the first scenario, we consider the problems described in SS6.1. Figure 8 displays the computational gain obtained with the above _approximate_ MC integrator with respect to LattE's exact _numerical_ approach (the one we used in SS6). In all problems where an exact solution could be computed, the approximated version SA-WMI-PA-SK(VolEsti) consistently returned an approximation with less than 10% relative error at a fraction of the query execution time. We additionally equipped SA-WMI-PA-SK with an exact _symbolic_ integrator. Noticeably, whereas symbolic approaches were shown to be preferable in conjunction with KC-based algorithms (see Figure 3 in [24]) and do not assume convex integration bounds, numerical integration performs substantially better with our approach. For the second scenario, we consider the task of verifying fairness properties of probabilistic programs, borrowing the setting and examples from FairSquare [47]. In their setting, a (deterministic) _decision program_\(\mathsf{dec}\) has to decide whether to \(\mathsf{hire}\) a candidate or not, according to some features. The input of \(\mathsf{dec}\) is distributed according to a Gaussian probabilistic program, Figure 8: Cactus plots comparing the execution time of SA-WMI-PA-SK using different integration strategies in the synthetic datasets. called the _population model_ pop, which additionally models sensitive conditions that are not accessed by dec, such as whether an individual belongs to a minority group. The goal is verifying if dec satisfies a _group fairness_ criterion under the distribution pop, that is, if the probability of a decision being based on the sensitive condition is low enough: \[\frac{\Pr(\mathsf{hire}\mid\mathsf{minority})}{\Pr(\mathsf{hire}\mid\neg \mathsf{minority})}>1-\varepsilon \tag{15}\] Similarly to our approach, FairSquare encodes the verification problem in SMT(\(\mathcal{LRA}\)) and reduces probabilistic inference to (unweighted) volume computations. In contrast with our approach, which directly approximates the ratio in (15) by sampling from Gaussians, FairSquare iteratively computes tighter lower and upper bounds by computing increasingly accurate axis-aligned approximations of the true distribution until convergence. We report our results on the same introductory examples used in the Fairsquare paper, motivating future research efforts in the WMI-based verification of probabilistic programs. 
The detailed description of the programs considered here is reported in Appendix C. With SA-WMI-PA-SK (VolEsti), we could indeed show that the initial \(\mathsf{dec}\) does not respect the fairness criterion in (15) with \(\varepsilon=0.1\). This problem was addressed by making \(\mathsf{dec}\) less sensitive to a feature that was directly influenced by the sensitive condition. This second version of the program is fair with high probability, as reported in Figure 9. Computing the ratios with SA-WMI-PA-SK (VolEsti) took negligible time (\(<3\) s).

Figure 9: The ratio in Eq. 15 computed for the initial (unfair) \(\mathsf{dec}\) and the modified one. Average and standard deviation over 5 runs are reported, confirming the results reported in the FairSquare paper with high confidence.

An advantage of the WMI-based approach with respect to FairSquare is that it is not limited to Gaussian priors on \(\mathsf{pop}\). Moreover, SMT-based approaches are a natural fit for highly combinatorial problems, such as those arising when \(\mathsf{dec}\) is a complex classifier learned from data. A current limitation of this approach is that VolEsti does not directly offer statistical guarantees on the relative error. In problems like volume approximation [48] and sampling [49], practical statistical bounds on the error were obtained using diagnostics like the effective sample size (ESS) or the potential scale reduction factor (PSRF) [50]. The adoption of similar mechanisms in our MC integrator, a fundamental step towards the application of WMI in formal verification, and the evaluation of WMI-based verifiers on more realistic programs, are left for future work.

## 8 Conclusion

The difficulty of dealing with densely coupled problems has been a major obstacle to a wider adoption of hybrid probabilistic inference technology beyond academia. In this paper, we made a significant step towards removing this obstacle. Starting from the identification of the limitations of existing solutions for WMI, we proposed a novel algorithm combining predicate abstraction with weight-structure awareness, thus unleashing the full potential of SMT-based technology. An extensive experimental evaluation showed the advantage of the proposed algorithm over current state-of-the-art solutions on both synthetic and real-world problems. These include inference tasks over constrained density functions learned from data, which highlight the potential of the approach as a flexible solution for probabilistic inference in hybrid domains.

Whereas the improvements described in this work drastically reduce the number of integrals required to perform WMI, the integration itself can be a bottleneck for scalability. To deal with this issue, we showed how SMT-based approaches can seamlessly incorporate multiple integration strategies, including symbolic, numeric and approximate ones, allowing one to choose the most appropriate strategy for the problem at hand. In particular, using approximate MC integration techniques can drastically improve performance at the cost of limited losses in precision. We further showcased the potential utility of our approach on a prototypical application aimed at verifying the fairness of probabilistic programs, as a first step towards the application of WMI technology to the probabilistic verification of hybrid systems and machine learning models.
2304.02060
Spin-Dependent Interactions and Heavy-Quark Transport in the QGP
We extend a previously constructed T-matrix approach to the quark-gluon plasma (QGP) to include the effects of spin-dependent interactions between partons. Following earlier work within the relativistic quark model, the spin-dependent interactions figure as relativistic corrections to the Cornell potential. When applied to the vacuum spectroscopy of quarkonia, in particular their mass splittings in S- and P-wave states, the issue of the Lorentz structure of the confining potential arises. We confirm that a significant admixture of a vector interaction (to the previously assumed scalar interaction) improves the description of the experimental mass splittings. The temperature corrections to the in-medium potential are constrained by results from thermal lattice-QCD for the equation of state (EoS) and heavy-quark (HQ) free energy in a selfconsistent set-up for heavy- and light-parton spectral functions in the QGP. We then deploy the refined in-medium heavy-light T-matrix to compute the charm-quark transport coefficients in the QGP. The vector component of the confining potential, through its relativistic corrections, enhances the friction coefficient for charm quarks in the QGP over previous calculations by tens of percent at low momenta and temperatures, and more at higher momenta. Our results are promising for improving the current phenomenology of open heavy-flavor observables at Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC).
Zhanduo Tang, Ralf Rapp
2023-04-04T18:19:46Z
http://arxiv.org/abs/2304.02060v3
# Spin-Dependent Interactions and Heavy-Quark Transport in the QGP ###### Abstract We extend a previously constructed \(T\)-matrix approach to the quark-gluon plasma (QGP) to include the effects of spin-dependent interactions between partons. Following earlier work within the relativistic quark model, the spin-dependent interactions figure as relativistic corrections to the Cornell potential. When applied to the vacuum spectroscopy of quarkonia, in particular their mass splittings in \(S\)- and \(P\)-wave states, the issue of the Lorentz structure of the confining potential arises. We confirm that a significant admixture of a vector interaction (to the previously assumed scalar interaction) improves the description of the experimental mass splittings. The temperature corrections to the in-medium potential are constrained by results from thermal lattice-QCD for the equation of state (EoS) and heavy-quark (HQ) free energy in a selfconsistent set-up for heavy- and light-parton spectral functions in the QGP. We then deploy the refined in-medium heavy-light \(T\)-matrix to compute the charm-quark transport coefficients in the QGP. The vector component of the confining potential, through its relativistic corrections, enhances the friction coefficient for charm quarks in the QGP over previous calculations by tens of percent at low momenta and temperatures, and more at higher momenta. Our results are promising for improving the current phenomenology of open heavy-flavor observables at Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). \(T\)-matrix, heavy-quarkonium spectroscopy, spin-dependent interaction, mixed confining potential, heavy-quark transport pacs: ## I Introduction The exploration of hadron properties in vacuum and the properties of the quark-gluon plasma (QGP) are usually regarded as rather independent areas in the study of Quantum Chromodynamics (QCD). However, in both areas the basic building block are soft parton interactions rooted in the non-perturbative sector of the theory, albeit in different environments. Of particular interest are heavy quarks: heavy-quarkonium spectroscopy in vacuum has provided deep insights into potential between a heavy (charm or bottom) quark (\(Q=c,b\)) and its antiquark (\(\bar{Q}\)). The Cornell potential and its refinements remain a phenomenologically successful tool in the description of the pertinent bound states, taking advantage of expansion in the inverse heavy-quark (HQ) mass, \(1/M_{Q}\)[1]. The long-range (linear) part of the potential, which by now is also well established in lattice QCD (lQCD), is arguably one of the most direct manifestations of the confining force in QCD. In the context of high-temperature QCD and its study in ultrarelativistic heavy-ion collisions (URHICs), this led to the idea of utilizing quarkonium production as a probe of deconfinement, although the originally proposed suppression signature has evolved considerably over the last three decades [2; 3; 4]. Specifically, transport approaches have been developed toward the more general objective of deducing the in-medium QCD force from quarkonium observables, by implementing it into the transport coefficients that govern both suppression and regeneration reactions in the evolving fireball of a heavy-ion collision, see, _e.g._, Ref. [5]. 
This effort is critically aided by ample information from lQCD on the in-medium properties of quarkonia through HQ free energies and Euclidean correlation functions [6; 7; 8; 9], which constrain calculations of spectral functions that can serve as an interface to phenomenological applications [10; 11; 12]. Open heavy-flavor (HF) particles have emerged as an excellent probe of the transport properties of the QCD medium in URHICs [13; 14; 15]. Produced in initial hard processes, low-momentum heavy quarks exert a Brownian motion through the QGP characterized by a spatial diffusion coefficient, hadronize in different HF hadrons and subsequently are further transported through the hadronic medium. The large HQ mass implies the dominance of elastic interactions with small energy transfer amenable to potential approximations, and the final HF baryon spectra carry a memory of their interaction history due to a thermalization time being comparable or larger than the fireball lifetime. The present work builds on previous efforts to develop a quantum many-body theory to describe the spectral and transport properties of open and hidden HF particles in a strongly coupled QGP [16; 17], including the 1- and 2-body Green's functions of thermal partons for obtaining the equation of state (EoS) in a selfconsistent Brueckner scheme [18; 19]. The basic ingredient to this framework is the 2-body interaction kernel for the in-medium \(T\)-matrix for which we employ an ansatz using a Cornell potential, whose temperature corrections are constrained by lQCD data for the HQ free energy. A salient feature of this approach is that it recovers basic features of vacuum spectroscopy (such as masses of quarkonia, \(D\) and \(B\) mesons, and non-Goldstone light hadrons), pro viding a baseline for the calculation of medium effects. In the spirit of a \(1/M_{Q}\) expansion, spin-orbit and spin-spin interactions were not included thus far. In the present paper, we take the next step by including the latter by benchmarking them against the hyper-/fine mass splittings of quarkonia in vacuum. Our study raises the question of the Lorentz structure of the confining potential. Historically, a default assumption of a purely scalar interaction has been employed [20; 21], implying a vanishing long-range magnetic contribution that nevertheless could reproduce the empirical fine structure for heavy quarkonium [22]. However, studies of the Wilson loop suggest that the confining potential cannot be a purely scalar kernel [23; 24], and the latter also causes problems in constructing a stable vacuum of QCD [25]. In the relativistic quark model [26; 27] a mixing of scalar and vector structures in the confining potential has been found to yield a quarkonium spectroscopy in good overall agreement with experimental data. The approach employed in our work is close in spirit to these works, and therefore we will incorporate the possibility of a mixed confining Lorentz structure in the \(T\)-matrix kernel; the pertinent relativistic corrections will turn out to have significant ramifications for the HQ diffusion coefficient. This article is organized as follows. In Sec. II, we briefly recollect the main elements of the thermodynamic \(T\)-matrix approach. In Sec. III we implement spin-dependent interactions as well as a vector component of the confining force into the potential. In Sec. IV we compute heavy-quarkonium spectral functions from the \(T\)-matrix and discuss the charmonium and bottomonium spectroscopy in vacuum. In Sec. 
V we lay out our constraints on the in-medium corrections to the potential using lQCD data for static HQ free energies (Sec. V.1) and the QGP equation of state (Sec. V.2), and discuss the pertinent numerical results (Sec. V.3). In Sec. VI we outline the calculation of the HQ transport coefficients and highlight the implications of the vector component in the confining interaction on the numerical results for charm quarks. We summarize and conclude in Sec. VII. ## II T-matrix approach The thermodynamic \(T\)-matrix is a 2-particle irreducible (PI) quantum many-body scheme that selfconsistently solves the 1- and 2-body Green's functions and is thus suitable for strongly interacting systems. In Refs. [17; 18] it has been initially developed to study the properties of HF flavor particles in the QGP allowing for a reduction of the 4-dimensional (4D) Bethe-Salpeter 2-body scattering equation to a 3D one which allows for tractable numerical solutions. Subsequently, it has also been extended to the light-parton sector [28], based on the notion that the effective masses of the QGP's constituents are typically large compared to temperatures not too far above the pseudocircal one of \(T_{\rm pc}\simeq 160\) MeV. The starting point can be formulated in terms of an effective Hamiltonian with a relativistic potential, \[H= \sum\varepsilon_{i}(\mathbf{p})\psi_{i}^{\dagger}(\mathbf{p}) \psi_{i}(\mathbf{p})+\frac{1}{2}\psi_{i}^{\dagger}\left(\frac{\mathbf{p}}{2} -\mathbf{p}\right) \tag{1}\] \[\times\psi_{j}^{\dagger}\left(\frac{\mathbf{p}}{2}+\mathbf{p} \right)V_{ij}^{a}\psi_{j}\left(\frac{\mathbf{p}}{2}+\mathbf{p}^{\prime} \right)\psi_{i}\left(\frac{\mathbf{p}}{2}-\mathbf{p}^{\prime}\right)\] which emphasizes the implementation of unitarity through resummations of the propagators (also referred to as a Dyson-Schwinger set-up). Here, \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\) denote the relative momentum of the incoming and outgoing states, and \(\mathbf{P}\) the total momentum of the two-body system. Furthermore, \(\varepsilon_{i}(\mathbf{p})=\sqrt{M_{i}^{2}+\mathbf{p}^{2}}\) is the dispersion relation of a parton with mass \(M_{i}\), and the \(V_{ij}^{a}\) are the potentials between particles \(i\) and \(j\) in a color channel \(a\). The summation includes momentum, spin, color and flavor for quarks and gluons. The infinite series of ladder diagrams generated by the Hamiltonian in Eq. (1) straightforwardly results in the \(T\)-matrix equation, depicted in Fig. 1. In the center-of-mass (CM) frame, one has \[T_{ij}^{a}\left(z,\mathbf{p},\mathbf{p}^{\prime}\right) = V_{ij}^{a}\left(\mathbf{p},\mathbf{p}^{\prime}\right)+\int_{- \infty}^{\infty}\frac{d^{3}\mathbf{k}}{\left(2\pi\right)^{3}}V_{ij}^{a}\left( \mathbf{p},\mathbf{k}\right) \tag{2}\] \[\times G_{ij}^{0}\left(z,\mathbf{k}\right)T_{ij}^{a}\left(z, \mathbf{k},\mathbf{p}^{\prime}\right)\,\] where \(G_{ij}^{0}\) is the two-body propagator, \(z=E\pm i\epsilon\) the analytical energy variable, and \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\) are the incoming and outgoing 3-momenta in the CM frame, respectively. The reduction scheme from 4D to 3D is not unique [29] but its specific choice has minor impact on the results; we choose the Thompson scheme following our previous studies [17; 28]. 
In this scheme, the two-body propagator in spectral representation can be written as \[G_{ij}^{0}(z,\mathbf{k}) = \int_{-\infty}^{\infty}d\omega_{1}d\omega_{2}\frac{\left[1\pm n_{ i}\left(\omega_{1}\right)\pm n_{j}\left(\omega_{2}\right)\right]}{z-\omega_{1}- \omega_{2}} \tag{3}\] \[\times\rho_{i}\left(\omega_{1},\mathbf{k}\right)\rho_{j}\left( \omega_{2},\mathbf{k}\right)\,\] with the single-particle propagator \[G_{i}(z)=\frac{1}{\left[G_{i}^{0}(z,k)\right]^{-1}-\Sigma_{i}(z,k)}=\frac{1}{z -\varepsilon_{i}(k)-\Sigma_{i}(z,k)} \tag{4}\] and the single-particle spectral function \[\rho_{i}\left(\omega,\mathbf{k}\right)=-\frac{1}{\pi}\operatorname{Im}G_{i}( \omega+i\epsilon). \tag{5}\] The \(\pm\) signs in Eq. (3) correspond to bosons (upper) or fermions (lower)1, and \(n_{i}\) is the Bose or Fermi distribu Figure 1: \(T\)-matrices resummation for ladder diagrams. tion function for parton \(i\). In quasi-particle approximation Eq. (3) reduces to2 Footnote 2: This differs from Refs. [17; 29] by a factor of \(m_{ij}(\mathbf{k})=\frac{M_{i}M_{j}}{\varepsilon_{i}(\mathbf{k})\varepsilon_{j}( \mathbf{k})}\); here we keep the convention of Ref. [28] where \(m_{ij}(\mathbf{k})\) is absorbed into the relativistic corrections to the potential which will be elaborated in Sec. III.2. \[G_{ij}^{0}(z,\mathbf{k}) = \frac{1}{z-\varepsilon_{i}(\mathbf{k})-\varepsilon_{j}(\mathbf{ k})-\Sigma_{i}(\mathbf{k})-\Sigma_{j}(\mathbf{k})}. \tag{6}\] The single-particle selfenergies in the QGP, \(\Sigma_{i}(\mathbf{k})\), are obtained by closing the \(T\)-matrix with an in-medium single-parton propagator from the heat bath; its spectral representation is \[\Sigma_{i}\left(z,\mathbf{p}_{1}\right) = \frac{1}{d_{i}}\int\frac{d^{3}\mathbf{p}_{2}}{(2\pi)^{3}}\int_{- \infty}^{\infty}d\omega_{2}\frac{dE}{\pi}\frac{-1}{z+\omega_{2}-E} \tag{7}\] \[\times\sum_{a,j}d_{s}^{ij}d_{a}^{ij}\operatorname{Im}T_{ij}^{a} \left(E,\mathbf{p}_{1},\mathbf{p}_{2}\mid\mathbf{p}_{1},\mathbf{p}_{2}\right)\] \[\times\rho_{j}\left(\omega_{2},\mathbf{p}_{2}\right)\left[n_{j} \left(\omega_{2}\right)\mp n_{ij}(E)\right],\] with \(T\left(E,\mathbf{p}_{1},\mathbf{p}_{2}\mid\mathbf{p}_{1},\mathbf{p}_{2}\right)\) the forward-scattering \(T\)-matrix, i.e., \(\mathbf{p}_{1}^{\prime}=\mathbf{p}_{1}\) and \(\mathbf{p}_{2}^{\prime}=\mathbf{p}_{2}\), where \(\mathbf{p}_{1,2}\) and \(\mathbf{p}_{1,2}^{\prime}\) are the incoming and outgoing momenta for particle 1 and 2 respectively, defined in the thermal frame. The \(n_{ij}\) refers to the thermal distribution for the two-body state \(ij\), while \(\mp\) refers to the bosonic/fermionic single-parton state \(i\). The \(d_{a,s}^{ij}\) are color and spin degeneracies of the two-body system, and \(d_{i}\) is the spin-color degeneracy of the single parton \(i\). We also need to add the purely real thermal Fock term [30], \[\Sigma_{i}\left(\mathbf{p}_{1}\right) = \mp\int\frac{d^{3}\mathbf{p}_{2}}{(2\pi)^{3}}\int_{-\infty}^{ \infty}d\omega_{2}V_{ii}^{a=1}\left(\mathbf{p}_{1}-\mathbf{p}_{2}\right)\rho_{ i}\left(\omega_{2},\mathbf{p}_{2}\right) \tag{8}\] \[\times n_{i}\left(\omega_{2}\right)\,\] which is not part of the selfenergy in Eq. (7). The \(V_{ii}^{a=1}\) refers to the color-singlet potential between particle and antiparticle. The selfenergy can be solved self-consistently by iterating Eqs. (2), (7) and (8) numerically. 
In doing so, the \(T\)-matrix in the thermal frame, \(T_{ij}^{a}\left(\omega_{1}+\omega_{2},\mathbf{p}_{1},\mathbf{p}_{2}\mid \mathbf{p}_{1}^{\prime},\mathbf{p}_{2}^{\prime}\right)\) needs to be transformed into the the CM frame, \(T_{ij}^{a}\left(E_{\rm cm},p_{\rm cm}^{\prime},\rm cos(\theta_{\rm cm})\right)\). This is accomplished by \[E_{\rm cm} = \sqrt{\left(\omega_{1}+\omega_{2}\right)^{2}-\left(\mathbf{p}_{1} +\mathbf{p}_{2}\right)^{2}}\] \[s_{\rm on} = \left(\varepsilon_{1}\left(\mathbf{p}_{1}\right)+\varepsilon_{2} \left(\mathbf{p}_{2}\right)\right)^{2}-\left(\mathbf{p}_{1}+\mathbf{p}_{2} \right)^{2}\] \[p_{\rm cm} = \sqrt{\frac{\left(s_{\rm on}-M_{i}^{2}-M_{j}^{2}\right)^{2}-4M_{i} ^{2}M_{j}^{2}}{4s_{\rm on}}}\] \[\cos\left(\theta_{\rm cm}\right) = \frac{\mathbf{p}_{\rm cm}\cdot\mathbf{p}_{\rm cm}^{\prime}}{p_{ \rm cm}p_{\rm cm}^{\prime}}, \tag{9}\] where \(\cos(\theta_{\rm cm})\) is the angle between the incoming and outgoing momenta in the CM frame, and \(p_{\rm cm}^{\prime}\) can be obtained by substituting \(s_{\rm on}(\mathbf{p}_{1},\mathbf{p}_{2})\) with \(s_{\rm on}(\mathbf{p}_{1}^{\prime},\mathbf{p}_{2}^{\prime})\). As discussed in Ref. [28], the reason for using the on-shell value, \(s_{on}\), for \(p_{\rm cm}\) is to preserve the analytical properties of the \(T\)-matrix after the transformation into the CM frame. The 3D \(T\)-matrix integral equation can be further reduced to a 1D one by applying the partial-wave expansion in the CM frame (from hereon the subscript "cm" is suppressed for simplicity), \[X(\mathbf{p},\mathbf{p}^{\prime})=4\pi\sum_{L}(2L+1)X^{L}(p,p^{\prime})P_{L}[ \cos(\theta)]\, \tag{10}\] where \(X\) denotes \(V\) or \(T\), \(L\) the angular-momentum quantum number, and \(p\) and \(p^{\prime}\) are the moduli of \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\). The 1D \(T\)-matrix equation then takes the form \[T_{ij}^{L,a}(z,p,p^{\prime}) = V_{ij}^{L,a}(p,p^{\prime})+\frac{2}{\pi}\int_{-\infty}^{\infty}k^ {2}dkV_{ij}^{L,a}(p,k) \tag{11}\] \[\times G_{ij}^{0}(z,k)T_{ij}^{L,a}(z,k,p^{\prime}).\] Equation (11) can be solved by discretizing the momenta to convert it into a matrix equation and solve it by matrix inversion. ## III Two-body potentials in vacuum In this section we first discuss the static potentials in Sec. III.1, then introduce the relativistic corrections to the potential between particles \(i\) and \(j\), and construct the confining potential with mixed Lorentz structures in Sec. III.2. The potential is generalized to different color channels at the end of this section. For simplicity, we suppress the color factors indices until the end of this section. ### Static Potential The kernel of the \(T\)-matrix equation (2) is based on the Cornell potential, with a color-Coulomb potential, \(V_{C}\), plus a confining potential ("string" term), \(V_{S}\). In coordinate space the common ansatz is \[\widetilde{V}(r)=\widetilde{V}_{\mathcal{C}}(r)+\widetilde{V}_{\mathcal{S}}(r)=- \frac{4}{3}\frac{\alpha_{s}}{r}+\sigma r\, \tag{12}\] where \(\alpha_{s}\) and \(\sigma\) are the perturbative coupling constant and nonperturbative string tension, respectively. To obtain the momentum-space potentials, \(V_{C/\mathcal{S}}(\mathbf{k})\), depending on the momentum transfer \(\mathbf{k}=\mathbf{p}-\mathbf{p}^{\prime}\), we use the subtracted quantities \(V_{C/\mathcal{S}}(r)=\widetilde{V}_{C/\mathcal{S}}(r)-\widetilde{V}_{C/\mathcal{S }}(\infty)\) to ensure the convergence of the Fourier transforms. 
A running coupling is implemented in the Coulomb potential for off-shell scattering in momentum space as [17]\(F_{run}(p,p^{\prime})=\ln\left[\frac{\Delta^{2}}{\Lambda^{2}}\right]/\ln\left[ \frac{(p-p^{\prime})^{2}+\Delta^{2}}{\Lambda^{2}}\right]\). For the confining potential, we enforce a flat potential above a string breaking scale of about \(r_{SB}=1\,\)fm to account for string breaking. We employ the same potential parameters as in previous studies [28], i.e., \(\alpha_{s}=0.27\), \(\sigma=0.225\) GeV\({}^{2}\), \(\Delta=1\) GeV and \(\Lambda=0.2\) GeV, which are fitted to the lQCD data of the vacuum free energy [31; 32; 33; 34; 35] (equivalent to the vacuum potential ) as shown in Fig. 2. ### Relativistic Corrections and Spin-Dependent Interactions Relativistic effects in the one-gluon exchange amplitude are well known, containing spin-independent and spin-dependent ones. For the pertinent vector potential (denoted as \(V^{vec}\)), the spin-independent correction amounts to multiplying the momentum-space potential by a factor \[\mathcal{R}_{ij} = \sqrt{\frac{1}{m_{ij}(p)}}\sqrt{1+\frac{p^{2}}{\varepsilon_{i}(p) \varepsilon_{j}(p)}} \tag{13}\] \[\times\sqrt{\frac{1}{m_{ij}(p^{\prime})}}\sqrt{1+\frac{p^{\prime 2 }}{\varepsilon_{i}(p^{\prime})\varepsilon_{j}(p^{\prime})}}\,\] which is known as the Breit correction, representing magnetic effects. For scalar potentials, denoted as \(V^{sca}\), no relativistic correction arises to leading order in \(1/M_{Q}\), see Ref. [17]. We write the total spin-independent potential in momentum space as \[V_{ij}\left(\mathbf{p},\mathbf{p}^{\prime}\right) = \mathcal{R}_{ij}V^{vec}\left(\mathbf{p}-\mathbf{p}^{\prime}\right) +V^{sca}\left(\mathbf{p}-\mathbf{p}^{\prime}\right). \tag{14}\] To implement spin-dependent interactions, including spin-orbit (\(V^{LS}\)), spin-spin (\(V^{SS}\)) and tensor (\(V^{T}\)) channels, we follow Ref. [36] where the detailed procedure to derive the Fermi-Breit Hamiltonian is laid out. The pertinent corrections for vector and scalar potentials between two partons with equal masses (\(M_{i}=M_{j}\equiv M\)) in coordinate space are given by \[V^{LS}(r) = \frac{1}{2M^{2}r}\left\langle\mathbf{L}\cdot\mathbf{S}\right\rangle \left(3\frac{d}{dr}V^{vec}(r)-\frac{d}{dr}V^{sca}(r)\right),\] \[V^{SS}(r) = \frac{2}{3M^{2}}\left\langle\mathbf{S}_{1}\cdot\mathbf{S}_{2} \right\rangle\Delta V^{vec}(r),\] \[V^{T}(r) = \frac{1}{12M^{2}}\left\langle S_{12}\right\rangle\left(\frac{1}{ r}\frac{d}{dr}V^{vec}(r)-\frac{d^{2}}{dr^{2}}V^{vec}(r)\right)\,\] where \(\Delta\equiv\nabla^{2}\) in the \(SS\) interaction is the Laplace operator. Note that the scalar interactions do not contribute to the \(SS\) and \(T\) corrections. We note that the vector potential, \(V^{vec}\), in the spin-dependent potentials above do not receive the spin-independent Breit correction, \(\mathcal{R}\), introduced in Eq. (13). Following Ref. [37], we smear the Dirac delta function \(\delta\) in the \(SS\) part by a Gaussian, \(\tilde{\delta}(r)=\left(\frac{b}{\sqrt{\pi}}\right)^{3}e^{-b^{2}r^{2}}\), to avoid the singularity. We take \(b=10\) in this work and have checked that for \(b>\)10 the \(SS\) interaction saturates in the quarkonium spectroscopy. 
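To give a sense of the size of the spin-independent correction in Eq. (13), the following short snippet evaluates the Breit factor numerically for equal-mass quarks. The quark mass used here (\(M=1.8\) GeV, a typical charm-quark constituent mass) is only an assumed illustrative value, not a fitted value from this work.

```python
import math

# Numerical illustration of the Breit factor R_ij of Eq. (13), which multiplies the
# vector part of the potential; m_ij(p) = M_i M_j / (eps_i(p) eps_j(p)) as in the text.
def eps(M, p):
    """Relativistic dispersion relation eps(p) = sqrt(M^2 + p^2), in GeV."""
    return math.sqrt(M * M + p * p)

def breit_factor(Mi, Mj, p, p_prime):
    """R_ij(p, p') of Eq. (13), built from one factor per in/out momentum."""
    def one_side(q):
        m_ij = (Mi * Mj) / (eps(Mi, q) * eps(Mj, q))
        return math.sqrt(1.0 / m_ij) * math.sqrt(1.0 + q * q / (eps(Mi, q) * eps(Mj, q)))
    return one_side(p) * one_side(p_prime)

if __name__ == "__main__":
    M = 1.8  # GeV, assumed illustrative charm-quark mass
    for p in (0.0, 0.5, 1.0, 2.0):  # GeV, taking p = p'
        print(f"p = {p:3.1f} GeV  ->  R_QQ = {breit_factor(M, M, p, p):.3f}")
```

The factor reduces to unity at vanishing momenta and grows with momentum, i.e., the magnetic (Breit) correction enhances the vector interaction for fast-moving quarks.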
The expectation values take the standard form (with \(L\), \(S\) and \(J\) denoting the orbital, spin and total angular-momentum quantum numbers, respectively): \(\left\langle\mathbf{L}\cdot\mathbf{S}\right\rangle=\frac{1}{2}[J(J+1)-L(L+1)- S(S+1)]\), \(\left\langle\mathbf{S}_{1}\cdot\mathbf{S}_{2}\right\rangle=\frac{4}{\left[S(S+1)- \frac{3}{2}\right]}\), and \(\left\langle S_{12}\right\rangle=\frac{4}{\left(2L+3\right)\left(2L-1\right) }\left[S(S+1)J(J+1)-\frac{3}{2}\left\langle\mathbf{L}\cdot\mathbf{S}\right\rangle -3(\left\langle\mathbf{L}\cdot\mathbf{S}\right\rangle)^{2}\right]\) for \(L\neq 0\) and \(S=1\), but \(\left\langle S_{12}\right\rangle\) vanishes for either \(L=0\) or \(S=0\). The total potential (with relativistic corrections) between, _e.g._, a heavy quark and anti-quark in momentum-space reads \[V_{Q\bar{Q}}\left(\mathbf{p},\mathbf{p}^{\prime}\right)= \mathcal{R}_{Q\bar{Q}}V^{vec}\left(\mathbf{p}-\mathbf{p}^{\prime}\right)+V^{ sca}(\mathbf{p}-\mathbf{p}^{\prime})\] \[+V^{LS}(\mathbf{p}-\mathbf{p}^{\prime})+V^{SS}(\mathbf{p}- \mathbf{p}^{\prime})+V^{T}(\mathbf{p}-\mathbf{p}^{\prime})\, \tag{16}\] where the spin-dependent terms in momentum-space are obtained through Fourier transform, \(V^{a}(\mathbf{k}=\mathbf{p}-\mathbf{p}^{\prime})=\int d^{3}\mathbf{r}e^{-i \mathbf{k}\cdot}V^{a}(\mathbf{r})\) with \(a=LS,SS,T\). We absorb \(m_{ij}(\mathbf{k})\) in the two-body propagator into the relativistic corrections for the potentials to keep the same convention as in Ref. [28], thus Eqs. (14) become \[V_{ij}\left(\mathbf{p},\mathbf{p}^{\prime}\right)\rightarrow\sqrt{ m_{ij}(p)}\sqrt{m_{ij}(p^{\prime})}V_{ij}\left(\mathbf{p},\mathbf{p}^{\prime}\right)\] \[=\mathcal{R}_{ij}^{vec}V^{vec}\left(\mathbf{p}-\mathbf{p}^{\prime }\right)+\mathcal{R}_{ij}^{sca}V^{sca}\left(\mathbf{p}-\mathbf{p}^{\prime}\right), \tag{17}\] with \[\mathcal{R}_{ij}^{vec} \equiv \sqrt{m_{ij}(p)}\sqrt{m_{ij}(p^{\prime})}\mathcal{R}_{ij}\] \[= \sqrt{1+\frac{p^{2}}{\varepsilon_{i}(p)\varepsilon j(p)}}\sqrt{1+ \frac{p^{\prime 2}}{\varepsilon_{i}(p^{\prime})\varepsilon_{j}(p^{\prime})}},\] \[\mathcal{R}_{ij}^{sca} \equiv \sqrt{m_{ij}(p)}\sqrt{m_{ij}(p^{\prime})} \tag{18}\] \[= \sqrt{\frac{M_{i}M_{j}}{\varepsilon_{i}(p)\varepsilon_{j}(p)}} \sqrt{\frac{M_{i}M_{j}}{\varepsilon_{i}(p^{\prime})\varepsilon_{j}(p^{\prime})}}\.\] Figure 2: The fitted vacuum potential versus the lQCD data. The blue line denotes the fitted vacuum potential, the colored dots the lQCD data from Refs. [31; 32; 33; 34; 35]. and Eq. (16) becomes \[V_{Q\bar{Q}}\left(\mathbf{p},\mathbf{p}^{\prime}\right)\rightarrow \sqrt{m_{ij}(p)}\sqrt{m_{ij}(p^{\prime})}V_{Q\bar{Q}}\left(\mathbf{p},\mathbf{p} ^{\prime}\right)\] \[\quad=\mathcal{R}_{QQ}^{vec}V^{vec}\left(\mathbf{p}-\mathbf{p}^{ \prime}\right)+\mathcal{R}_{QQ}^{sca}V^{sca}\left(\mathbf{p}-\mathbf{p}^{ \prime}\right)\] \[\quad+\mathcal{R}_{Q\bar{Q}}^{spin}[V^{LS}(\mathbf{p}-\mathbf{p}^ {\prime})+V^{SS}(\mathbf{p}-\mathbf{p}^{\prime})+V^{T}(\mathbf{p}-\mathbf{p}^{ \prime})] \tag{19}\] with \[\mathcal{R}_{ij}^{spin} \equiv \sqrt{m_{ij}(p)}\sqrt{m_{ij}(p^{\prime})} \tag{20}\] \[= \sqrt{\frac{M_{i}M_{j}}{\varepsilon_{i}(p)\varepsilon_{j}(p)}} \sqrt{\frac{M_{i}M_{j}}{\varepsilon_{i}(p^{\prime})\varepsilon_{j}(p^{\prime}) }}\.\] The Lorentz structure for Coulomb potential is entirely vector, and a common assumption for the confining one is to be entirely scalar, _i.e._, \(V^{vec}=V_{\bar{C}}\) and \(V^{sca}=V_{\bar{S}}\). 
As was mentioned in the introduction, there are reasons to believe that the confining potential is not a purely scalar one but a mixture of vector and scalar Lorentz structures, _i.e._, \(V^{vec}=V_{\bar{C}}+(1-\chi)V_{\bar{S}}\) and \(V^{sca}=\chi V_{\bar{S}}\). The key parameter is the mixing coefficient, \(\chi\), defined such that for \(\chi=1\) the interaction reduces to the case with a purely scalar confining potential, while values below one characterize a vector admixture. The potentials in the various color channels, \(V^{a}_{ij}\left(\mathbf{p},\mathbf{p}^{\prime}\right)\), are obtained by the substitutions \(V_{\bar{C}}\left(\mathbf{p}-\mathbf{p}^{\prime}\right)\rightarrow\mathcal{F}_ {a}^{\mathcal{C}}V_{\bar{C}}\left(\mathbf{p}-\mathbf{p}^{\prime}\right)\) and \(V_{\bar{S}}\left(\mathbf{p}-\mathbf{p}^{\prime}\right)\rightarrow\mathcal{F}_ {a}^{\mathcal{C}}V_{\bar{S}}\left(\mathbf{p}-\mathbf{p}^{\prime}\right)\). For the Coulomb interaction, \(\mathcal{F}_{a}^{\mathcal{C}}\) are the standard Casimir coefficients listed in Tab. 1 (together with the pertinent degeneracy factors), and we take the absolute values of the Casimir coefficients for the string interaction, \(\mathcal{F}_{a}^{\bar{S}}\), to ensure a positive definite string tension [28]. The parton masses are also related to the potential introduced above. The constituent masses of the heavy quarks, \(M_{Q}\), receives two contributions, the first one is calculated by the selfenergy from the color-singlet (\(a=1\)) potential (including the relativistic factors) and the second one is a "bare mass", \(M_{Q}^{0}\), which is associated with condensate contributions that we do not calculate explicitly in the present framework, \[M_{Q}=-\frac{1}{2}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}V_{Q\bar{Q}}^{a=1}( \mathbf{p})+M_{Q}^{0}, \tag{21}\] ## IV Heavy-quarkonium spectroscopy in vacuum In this section we introduce the correlation and spectral functions including their non-relativistic classifications in angular momentum (Sec. IV.1) and discuss our fits to the vacuum spectra including the spin-related interactions (Sec. IV.2). ### Correlation and Spectral Functions To evaluate the quarkonia spectra in both charm and bottom sectors, we compute the quark-antiquark spectral functions for different mesonic quantum-numbers channels using the pertinent \(T\)-matrices as described in Sec. II. The bound-state masses are then determined from the peak values of corresponding mesonic spectral functions. In the vacuum, we introduce a small width in the single-quark propagators which allows us to numerically resolve the bound-state mass while not distorting their masses. Since we only account for the \(Q\bar{Q}\) channels (off-shell) couplings to intermediate two-meson states (_e.g._, \(DD^{*}\) channels) are not accounted for, which could affect the masses near the \(D\bar{D}\) threshold somewhat. Table 2 lists the \(L\), \(S\) and \(J\) assignments in the scalar (S), pseudoscalar (PS), vector (V), axial-vector (AV) and tensor (T) mesonic channels. In practice, a cutoff \(r_{c}=0.01\) fm is introduced in the Fourier transform for the spin-dependent potentials to avoid ultraviolet divergences; we have checked that the results are not sensitive to variations in \(r_{c}\) by \(\pm 50\%\). 
\begin{table} \begin{tabular}{c c c c} \hline \hline Channels & \(L\) & \(S\) & \(J\) \\ \hline S & 1 & 1 & 0 \\ PS & 0 & 0 & 0 \\ V & 0 & 1 & 1 \\ AV\({}_{1}\) & 1 & 0 & 1 \\ AV\({}_{2}\) & 1 & 1 & 1 \\ T & 1 & 1 & 2 \\ \hline \hline \end{tabular} \end{table} Table 2: Non-relativistic classification of angular-momentum quantum numbers in different mesonic channels. Figure 3: Diagrammatic representation of the \(Q\bar{Q}\) correlation function. The dots denote meson current operators \(\Gamma_{M}\). The spectral functions are obtained form the correlation functions, \(G\), of the meson currents. The latter are obtained by closing the two incoming and outgoing legs of \(T\)-matrix (plus a non-interacting contribution) with the corresponding projection operator for the different quarkonium channels, see Fig. 3. Writing \[G=G_{0}+\Delta G\, \tag{22}\] the non-interacting part of correlation function in the CM frame is given by \[G_{0}(E,T) = N_{f}N_{c}\int\frac{d^{3}p}{(2\pi)^{3}}\mathcal{R}^{scq}_{Q\bar{Q}}\] \[\times\mathrm{Tr}\left\{\Gamma_{\alpha}\Lambda_{+}(\mathbf{p}) \Gamma_{\alpha}\Lambda_{-}(-\mathbf{p})\right\}G^{0}_{Q\bar{Q}}(E,p),\] where \(\Lambda_{\pm}(\mathbf{p})=[\varepsilon_{Q}(p)\gamma^{0}-(\mathbf{p}\cdot \boldsymbol{\gamma})\pm M_{Q}]/2M_{Q}\) are the positive/negative energy projectors for quark and anti-quark, respectively, and \(\Gamma_{\alpha}\in(1,i\gamma_{5},\gamma^{\mu},\gamma^{\mu}\gamma_{5},\frac{i }{2}[\gamma^{\mu},\gamma^{\nu}])\) the vertex operators for the mesonic S, PS, V, AV and T channels, respectively; the pertinent traces are listed in Tab. 3. Furthermore, \(N_{f}=1\) and \(N_{c}=3\) are the numbers of flavor and color for the heavy quark. The interacting part of correlation function in the CM frame is given by \[\Delta G(E,T) = \frac{N_{f}N_{c}}{8\pi^{4}}\int dpp^{2}G^{0}_{Q\bar{Q}}(E,p)\int dp ^{\prime}p^{\prime 2}G^{0}_{Q\bar{Q}}(E,p^{\prime}) \tag{24}\] \[\times\mathcal{R}^{scn}_{Q\bar{Q}}\mathcal{T}(\Gamma_{\alpha};E, p,p^{\prime}),\] with the scattering amplitude \[\mathcal{T}(\Gamma_{\alpha};E,p,p^{\prime}) =\int d(\cos\theta)\mathrm{Tr}(\Gamma_{\alpha};p,p^{\prime}, \theta)T_{Q\bar{Q}}(E,\mathbf{p},\mathbf{p}^{\prime})\] \[=8\pi[a_{0}(p,p^{\prime})T^{0}_{Q\bar{Q}}+a_{1}(p,p^{\prime})T^{ 1}_{Q\bar{Q}}]\.\] The \(T\)-matrix, \(T^{L}_{Q\bar{Q}}\), is calculated from Eq. (11) with the interaction kernel in Eq. (19). The \(a_{L}\) denote the coefficients of the orbital-angular momentum, \(L\), in the partial-wave expansion of the trace, \[\mathrm{Tr}(\Gamma_{\alpha};p,p^{\prime},\theta) = \mathrm{Tr}\left\{\Lambda_{+}(\mathbf{p})\Gamma_{\alpha}\Lambda_{ -}(-\mathbf{p})\Lambda_{-}(-\mathbf{p}^{\prime})\right.\] \[\times\Gamma_{\alpha}\Lambda_{+}(\mathbf{p}^{\prime})\}\] \[= a_{0}(p,p^{\prime})P_{0}(\cos\theta)+a_{1}(p,p^{\prime})P_{1}( \cos\theta)\ ;\] they are listed in Tab. 4. For the evaluation of the traces in the V, AV and T channels we focus on the spatial components. The correlation functions defined in Refs. [16] and [17] do not have the \(\mathcal{R}^{scn}_{Q\bar{Q}}\) factor due to the different definitions of potentials and two-body propagators, but they are equivalent to those in Refs. [28] and in this work. At higher orders in the \(1/M_{Q}\) expansion, the partial-wave expansion leads to a mixing between \(S\)- and \(P\)-wave components in a given meson channel. 
However, to keep with the (non-relativistic) classification of the meson channels with definite quantum numbers of \(L\), \(S\) and \(J\), we terminate the expansion when the "unnatural" partial waves admix [16]. The pertinent leading orders are collected in Tab. 5. From the correlation functions, the mesonic spectral functions follow from the imaginary part, \[\sigma_{\alpha}(E,T)=-\frac{1}{\pi}\,\mathrm{Im}\,G_{\alpha}(E+i\epsilon,T), \tag{27}\] where the subscript \(\alpha\) denotes the different meson channels. ### Heavy-Quarkonium Spectra in Vacuum We are now in position to investigate the quantitative consequences of the spin-dependent interactions and the scalar-vector mixing effect in the confining potential on \begin{table} \begin{tabular}{c c c} \hline \hline \(\Gamma_{\alpha}\) & \(a_{0}(p,p^{\prime})\) & \(a_{1}(p,p^{\prime})\) \\ \hline S & \(-\frac{p^{2}p^{\prime 2}}{M_{Q}^{2}}\) & \((1+\frac{\varepsilon(p)\varepsilon_{Q}(p^{\prime})}{M_{Q}^{2}})\frac{p^{ \prime 2}}{M_{Q}^{2}}\) \\ & \(1+\frac{p^{2}+p^{\prime 2}}{M_{Q}^{2}}\) & \(-\frac{\varepsilon_{Q}(p)\varepsilon_{Q}(p^{\prime})pp^{\prime}}{M_{Q}^{2}}\) \\ PS & \(+\frac{\varepsilon_{Q}(p)\varepsilon_{Q}(p^{\prime})}{M_{Q}^{2}}+\frac{p^{ \prime 2}}{M_{Q}^{2}}\) & \\ & \(3(1+\frac{\varepsilon_{Q}(p)\varepsilon_{Q}(p^{\prime})}{M_{Q}^{2}})\) & \(-(1+2\frac{\varepsilon_{Q}(p)\varepsilon_{Q}(p^{\prime})}{M_{Q}^{2}})\frac{p^{ \prime 2}}{M_{Q}^{2}}\) \\ V & \(+2\frac{p^{2}+p^{\prime 2}}{M_{Q}^{2}}+\frac{4}{3}\frac{p^{2}p^{\prime 2}}{M_{Q}^{2}}\) & \\ AV & \(-\frac{4}{3}\frac{p^{2}p^{\prime 2}}{M_{Q}^{2}}\) & \(2(1+\frac{\varepsilon_{Q}(p)\varepsilon_{Q}(p^{\prime})}{M_{Q}^{2}})\frac{p^{ \prime 2}}{M_{Q}^{2}}\) \\ T & \(-\frac{2}{3}\frac{p^{2}p^{\prime 2}}{M_{Q}^{2}}\) & \(2(1+\frac{\varepsilon_{Q}(p)\varepsilon_{Q}(p^{\prime})}{M_{Q}^{2}})\frac{p^{ \prime 2}}{M_{Q}^{2}}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Coefficients of orbital angular momentum (up to \(L=1\)) in different mesonic channels for the interacting part, \(G\), of the correlation functions. \begin{table} \begin{tabular}{c c} \hline \hline \(\Gamma_{\alpha}\) & \(G_{0}\) trace \\ \hline S & \(2\frac{p^{\prime 2}}{M_{Q}^{2}}\) \\ PS & \(2(1+\frac{p^{2}}{M_{Q}^{2}})\) \\ V & \(6+4\frac{p^{2}}{M_{Q}^{2}}\) \\ AV & \(4\frac{p^{2}}{M_{Q}^{2}}\) \\ T & \(4\frac{p^{2}}{M_{Q}^{2}}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Values for the trace coefficients in different mesonic channels for the non-interacting part, \(G_{0}\), of the correlation functions. the charmonium and bottomonium spectroscopy in vacuum. In practice, we adopt a value for the HQ width of \(\Gamma_{Q}=20\,\)MeV which is small enough to not affect the vacuum masses, but large enough to allow for straightforward numerical computations and plotting. Our fit procedure is as follows: For a given mixing coefficient, \(\chi\), we adjust the bare-quark masses \(M_{c}^{0}\) (\(M_{b}^{0}\)) to find the best fit for all the masses of charmonium (bottomonium) states given by the Particle Data Group (PDG) [38] using a \(\chi^{2}\) statistical test. In principle we could optimize the value for the mixing coefficient, \(\chi\), by strictly minimizing the variance of the fit. 
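The fit procedure described above can be sketched schematically as follows; here the \(T\)-matrix spectral function is mocked up by Lorentzian peaks whose positions shift with the bare-mass parameter, and the reference masses are approximate PDG values quoted purely for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

# Approximate PDG reference masses [GeV], V channel (1S, 2S), for illustration only
PDG_MASSES = np.array([3.097, 3.686])

def toy_spectral_function(E, m_q0, width=0.02):
    """Stand-in for the T-matrix spectral function: two Lorentzian bound-state peaks
       whose positions shift rigidly with the bare-mass parameter m_q0."""
    binding = np.array([0.6, 0.05])        # illustrative binding energies [GeV]
    peaks = 2.0 * (m_q0 + 0.5) - binding   # constituent mass = bare mass + fixed 0.5 GeV placeholder selfenergy
    sigma = np.zeros_like(E)
    for m in peaks:
        sigma += width / ((E - m) ** 2 + width ** 2) / np.pi
    return sigma

def extracted_masses(m_q0, E=np.linspace(2.5, 4.2, 4000)):
    """Bound-state masses = peak positions of the spectral function, cf. Eq. (27)."""
    sigma = toy_spectral_function(E, m_q0)
    idx, _ = find_peaks(sigma, height=1.0)
    return E[idx][: len(PDG_MASSES)]

def chi2(m_q0, err=0.010):
    """Chi^2 of the extracted peak positions against the reference masses."""
    masses = extracted_masses(m_q0)
    if len(masses) < len(PDG_MASSES):
        return np.inf
    return np.sum(((masses - PDG_MASSES) / err) ** 2)

if __name__ == "__main__":
    scan = np.arange(1.20, 1.50, 0.005)    # scan over the bare charm mass [GeV]
    best = min(scan, key=chi2)
    print(f"best-fit bare mass M_Q^0 ~ {best:.3f} GeV, chi2 = {chi2(best):.1f}")
```

In the analysis above, this type of peak extraction and \(\chi^{2}\) minimization is performed for each fixed value of the mixing coefficient \(\chi\).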
However, in practice we found that \(\chi=0.6\) already provides most of the improvement in the mass splittings compared to \(\chi=1\), while for still smaller values the constituent quark masses become rather large, producing uncomfortably large binding energies; in addition, a more precise evaluation of the quarkonium masses near the open heavy-flavor threshold would also require the inclusion of hadronic loop corrections. In particular, we do not pursue here more extreme scenarios, as proposed, _e.g._, in Refs. [26; 27], where optimized fits with \(\chi=-1\) were found (implying a negative string tension for the scalar term, counteracted by a twice larger vector component). We start by displaying the comparison of charmonium spectral functions between the purely scalar (\(\chi=1\)) and mixed (\(\chi=0.6\)) confining potentials for all states below the \(D\bar{D}\) threshold in Fig. 4 (we only plot the interacting parts of the spectral functions since the free part does not affect the bound-state locations). The various peaks in each quantum-number channel are readily assigned as S: \(\chi_{c0}(1P)\); PS: \(\eta_{c}(1S)\) and \(\eta_{c}(2S)\); V: \(J/\Psi(1S)\) and \(\Psi(2S)\); AV\({}_{1}\): \(h_{c}(1P)\); AV\({}_{2}\): \(\chi_{c1}(1P)\); T: \(\chi_{c2}(1P)\). The masses extracted from the pole positions are listed in Tab. 6 and compared to the experimental values. The potential with the mixing effect generates more attraction from the additional relativistic corrections (recall Eq. (13)), thus requiring a larger HQ mass for \(\chi=0.6\) as quoted in Tab. 6. For \(\chi=1\), neither the \(S\)- nor \(P\)-wave mass splittings are well reproduced; the results are much improved by introducing the mixing effect with \(\chi=0.6\). To better understand the impact of the mixing effect, it is useful to summarize the expectation values for \(\langle{\bf L}\cdot{\bf S}\rangle\), \(\langle{\bf S_{1}}\cdot{\bf S_{2}}\rangle\) and \(\langle S_{12}\rangle\) in Tab. 7. We take the mass splitting between the PS and V channels, where only the spin-spin interaction is operative, as an example. For simplicity, we will use the Cornell potential in Eq. (12) to make the argument. According to Eq. (15), the spin-spin interaction is \(V^{SS}(r)\sim\langle{\bf S_{1}}\cdot{\bf S_{2}}\rangle\,\Delta V^{vec}(r)=\) 
We identify the peaks in each channel as follows (not all of which have an experimental counterpart (yet): S: \(\chi_{b0}(1P)\) and \(\chi_{b0}(2P)\); PS: \(\eta_{b}(1S)\), \(\eta_{b}(2S)\) and \(\eta_{b}(3S)\); V: \(\Upsilon(1S)\), \(\Upsilon(2S)\) and \(\Upsilon(3S)\); AV\({}_{1}\): \(h_{b}(1P)\), \(h_{b}(2P)\) and \(h_{b}(3P)\); AV\({}_{2}\): \(\chi_{b1}(1P)\), \(\chi_{b1}(2P)\) and \(\chi_{b1}(3P)\); T: \(\chi_{b2}(1P)\), \(\chi_{b2}(2P)\) and \(\chi_{b2}(3P)\). The comparison between the experimental values and the masses extracted from spectral functions is compiled in Tab. 8. Also here an improvement in the mass splittings is found by introducing the vector confining potential, but it is not as significant as the charmonium sector, primarily due to the larger \(1/M_{b}\) suppression for the spin-induced forces. Finally, we have evaluated the spin-induced interactions in the heavy-light sector, which is the key ingredient to calculating the heavy-quark transport coefficients discussed in Sec. VI. Specifically, in the \(S\)-wave color-singlet \(D\)-meson channel, the mass splitting between the pseudoscalar \(D\)-meson and the vector \(D^{*}\)-meson improves from \(30\,\mathrm{MeV}\) for \(\chi\)=1 to \(120\,\mathrm{MeV}\) for \(\chi\)=0.6. ## V In-medium potential and selfconsistent QGP In this section, we briefly introduce (and carry out) the framework for determining the medium modifications to the potential and its application to the EoS and spectral functions of the QGP within a selfconsistent quantum Figure 5: The vacuum bottomonium spectral functions (interacting parts) in S, PS, V, AV and T channels with mixing coefficient \(\chi=1\) (upper panel) and \(\chi=0.6\) (lower panel). The spectral functions in S, AV and T channels are multiplied by a factor of 4 for better visibility. \begin{table} \begin{tabular}{c c c c} \hline \hline Channel & \(\langle\mathbf{L\cdot S}\rangle\) & \(\langle\mathbf{S_{1}\cdot S_{2}}\rangle\) & \(\langle S_{12}\rangle\) \\ \hline S & \(-2\) & \(1/4\) & \(-4\) \\ PS & \(0\) & \(-3/4\) & \(0\) \\ V & \(0\) & \(1/4\) & \(0\) \\ AV\({}_{1}\) & \(0\) & \(-3/4\) & \(0\) \\ AV\({}_{2}\) & \(-1\) & \(1/4\) & \(2\) \\ T & \(1\) & \(1/4\) & \(-2/5\) \\ \hline \hline \end{tabular} \end{table} Table 7: Couplings of \(LS\), \(SS\) and \(T\) interactions for different quarkonium channels. \begin{table} \begin{tabular}{c c c c c} \hline \hline & & & Th. & Th. \\ Channel & Particle & Exp. & \(\chi=1\) & \(\chi=0.6\) \\ & & & \(M_{b}^{0}=4.681\) & \(M_{b}^{0}=4.681\) \\ & & & \(M_{b}=5.247\) & \(M_{b}=5.266\) \\ \hline S & \(\chi_{b0}(1P)\) & 9.859 & 9.871 & 9.864 \\ & \(\chi_{b0}(2P)\) & 10.233 & 10.227 & 10.220 \\ PS & \(\eta_{b}(1S)\) & 9.399 & 9.496 & 9.470 \\ & \(\Upsilon(1S)\) & 9.460 & 9.520 & 9.500 \\ V & \(\Upsilon(2S)\) & 10.023 & 9.999 & 9.994 \\ & \(\Upsilon(3S)\) & 10.355 & 10.345 & 10.324 \\ AV\({}_{1}\) & \(h_{b}(1P)\) & 9.899 & 9.896 & 9.893 \\ & \(\chi_{b1}(1P)\) & 9.893 & 9.894 & 9.877 \\ AV\({}_{2}\) & \(\chi_{b1}(2P)\) & 10.255 & 10.248 & 10.243 \\ & \(\chi_{b1}(3P)\) & 10.513 & 10.520 & 10.500 \\ T & \(\chi_{b2}(1P)\) & 9.912 & 9.897 & 9.899 \\ & \(\chi_{b2}(2P)\) & 10.269 & 10.249 & 10.249 \\ \hline \hline \end{tabular} \end{table} Table 8: Experimental and theoretical bottomonium spectroscopy with the mixing coefficient \(\chi=1\) and \(\chi=0.6\) (in GeV). \(M_{b}^{0}\) and \(M_{b}\) are the bare and constituent masses for bottom quark, respectively. many-body approach [28]. 
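The selfconsistent determination of the in-medium interaction described in the remainder of this section involves two nested fit loops; a schematic outline, in which all physics routines are left as dummy stubs standing in for the actual \(T\)-matrix and lQCD-fit machinery, might look like:

```python
# Schematic outline of the two selfconsistency loops described below; all physics
# routines are dummy stubs standing in for the actual T-matrix / lQCD-fit machinery.

def fit_screening_masses(hq_spectral_functions, lqcd_free_energy):
    """Constraint 1: adjust (m_d, m_s) until the computed HQ free energy matches
       the lQCD data (stub: returns fixed placeholder values in GeV)."""
    return {"m_d": 0.3, "m_s": 0.2}

def solve_light_sector(potential_params, lqcd_eos):
    """Constraint 2 (inner 2-PI loop): dress the light-parton propagators selfconsistently
       and tune their masses to reproduce the lQCD equation of state (stub)."""
    return {"light_masses": (0.3, 0.5), "parton_spectral_functions": "dummy"}

def compute_hq_spectral_functions(potential_params, light_sector):
    """Heavy-light T-matrices closed off with thermal-parton spectral functions (stub)."""
    return "dummy"

def selfconsistent_qgp(lqcd_free_energy, lqcd_eos, n_outer=3):
    """Outer loop: alternate between the free-energy fit (potential) and the EoS fit
       (light sector), re-deriving the HQ spectral functions in between."""
    hq_sf = None
    for it in range(n_outer):
        potential = fit_screening_masses(hq_sf, lqcd_free_energy)
        light = solve_light_sector(potential, lqcd_eos)
        hq_sf = compute_hq_spectral_functions(potential, light)
        print(f"outer iteration {it}: m_d={potential['m_d']}, m_s={potential['m_s']}")
    return potential, light, hq_sf

if __name__ == "__main__":
    selfconsistent_qgp(lqcd_free_energy="lQCD F_QQbar data", lqcd_eos="lQCD pressure data")
```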
In a nutshell the procedure consists of 2 selfconsistency loops as follows. First, the in-medium potential will be constrained through calculating the HQ free energies from the \(T\)-matrix and fitting it to pertinent lQCD data. The key fit parameters in this step are the screening masses, \(m_{d}\) and \(m_{s}\) of the color-Coulomb and string interactions. The in-medium potentials are then applied in a selfconsistent 2-PI scheme to compute the EoS of the QGP and fit those results to pertinent lQCD data as well. The main parameters in this step are the in-medium light-quark and gluon masses, but the EoS is computed including the full off-shell properties of the one-body spectral functions and two-body scattering amplitudes. Since the parton selfenergies are computed from their \(T\)-matrices, this forms a selfconsistency problem which is solved by numerical iteration. However, the calculation of the HQ free energy also requires the spectral functions (selfenergies) of the heavy quarks, calculated from heavy-light \(T\)-matrices closed off with thermal parton spectral functions. Thus, after constraining the light sector with the EoS, the in-medium HQ spectral functions are re-calculated and inserted into the computation of the HQ free energies. Refitting the screening masses to the lQCD data, a refinement of the in-medium two-body potential is obtained which is then reprocessed in a new fit to the EoS. This constitutes the second ("outer") iteration loop which is also iterated numerically. In the remainder of this section, we first introduce the the main equations to compute the HQ free energy (Sec. V.1) and the EoS (Sec. V.2), and then discuss the numerical results with the updated in-medium potential (Sec. V.3). ### Static HQ Free Energy Our starting point is an ansatz for the medium modifications of the potential; following previous studies [28] we employ \[\widetilde{V}_{\mathcal{C}}(r) = -\frac{4}{3}\alpha_{s}\frac{e^{-m_{d}r}}{r}-\frac{4}{3}\alpha_{s} m_{d}\] \[\widetilde{V}_{\mathcal{S}}(r) = -\frac{\sigma e^{-m_{s}r-(c_{b}m_{s}r)^{2}}}{m_{s}}+\frac{\sigma }{m_{s}}\, \tag{28}\] where \(m_{d}\) and \(m_{s}\) are the respective Debye screening masses for Coulomb and confining potentials, related by \(m_{s}=\left(c_{s}m_{d}^{2}\sigma/\alpha_{s}\right)^{1/4}\)[28]. The quadratic term in the exponential, \(-\left(c_{b}m_{s}r\right)^{2}\), accelerates the suppression of the long-range part of the confining potential to simulate string breaking. In the limit of vanishing screening masses the vacuum potential of Eq. (12) is recovered. The HQ free energy, \(F_{Q\bar{Q}}(r,T)\), is defined as the difference between the free energies of the QGP without and with a static quark and antiquark (not counting their infinite masses) separated by a distance \(r\) (see, _e.g._, Ref. [39]), \[F_{Q\bar{Q}}(r,T)=-\frac{1}{\beta}\ln\left[G_{Q\bar{Q}}^{>}(-i\beta,r)\right], \tag{29}\] where \(G_{Q\bar{Q}}^{>}(-i\tau,r)\) is the Euclidean time Green function and \(\beta=1/T\) the inverse temperature. In the vacuum, this simply corresponds to the potential between \(Q\) and \(\bar{Q}\), cf. Sec. III. In medium, the free energy and the potential are no longer identical to each other due to the presence of entropy contributions resulting from medium effects encoded in the HQ selfenergies (which we calculate from the in-medium heavy-light \(T\)-matrix) and the potential. In Ref. 
[28] a compact form of the free energy has been derived as \[F_{Q\bar{Q}}(r,T)= -\frac{1}{\beta}\ln\left[-\int_{-\infty}^{\infty}\frac{dE}{\pi}e ^{-\beta E}\right.\] \[\left.\times\operatorname{Im}\left[\frac{1}{E+i\epsilon-\widetilde {V}(r)-\Sigma_{Q\bar{Q}}(E+i\epsilon)}\right]\right], \tag{30}\] with the color-singlet potential \(\widetilde{V}(r)\) (color-flavor indices are suppressed for simplicity) from Eq. (28). The relationship between the two-body selfenergy, \(\Sigma_{Q\bar{Q}}(z)\), and the two-body propagator, \(G_{Q\bar{Q}}^{0}(z)\), is [28] \[\left[G_{Q\bar{Q}}^{0}(z)\right]^{-1}=z-2\Delta M_{Q}-\Sigma_{Q\bar{Q}}(z)\, \tag{31}\] with a Fock mass term for each quark, \(\Delta M_{Q}=\widetilde{V}(\infty)/2\). In the static limit, \(G_{Q\bar{Q}}^{0}(z)\) reduces to \[G_{Q\bar{Q}}^{0}(z)=\int_{-\infty}^{\infty}d\omega_{1}d\omega_{2}\frac{\rho_{ Q}\left(\omega_{1}\right)\rho_{\bar{Q}}\left(\omega_{2}\right)}{z-\omega_{1}- \omega_{2}}\, \tag{32}\] where \(\rho_{Q/\bar{Q}}(\omega)=-\frac{1}{x}\operatorname{Im}G_{Q/\bar{Q}}(\omega+i\epsilon)\) are the single-particle spectral functions with propagators \(G_{Q/\bar{Q}}(z)=\frac{1}{z-M_{Q/\bar{Q}}-\Sigma_{Q/\bar{Q}}(z)}\) in the static limit. Then the single-particle selfenergy, \(\Sigma_{Q}(z)\), can be solved selfconsistently by combining the \(T\)-matrix and the selfenergy equations. By taking the heavy-light \(T\)-matrix from Eq. (2) in the "half-static" limit, where the \(\mathbf{p_{1}}\) dependence is suppressed due to the infinite static-quark mass, Eq. (7) takes the form \[\begin{split}\Sigma_{Q}(z)=&\int\frac{d^{3}\mathbf{p}_{2}}{(2 \pi)^{3}}\int_{-\infty}^{\infty}d\omega_{2}\frac{dE}{\pi}\frac{-1}{z+\omega_{2 }-E}\frac{1}{d_{Q}}\sum_{a,J}d_{J}^{Qj}d_{a}^{Qj}\\ &\times T_{Qj}^{a}\left(E,\mathbf{p}_{2}\mid\mathbf{p}_{2}\right) \rho_{j}\left(\omega_{2},\mathbf{p}_{2}\right)n_{j}(\omega_{2})\.\end{split} \tag{33}\] The CM transformation in Eq. (9) reduces to \[E_{\rm cm}=\omega_{1}+\omega_{2},\quad p_{\rm cm}=p_{2},\quad\cos\left(\theta_ {\rm cm}\right)=\cos(\theta), \tag{34}\] with \(\omega_{1}+\omega_{2}\gg|\mathbf{p}_{1}+\mathbf{p}_{2}|\). The resulting single-particle selfenergy, \(\Sigma_{Q}(z)\), is inserted into Eq. (32) to obtain the \(Q\bar{Q}\) propagator, and Eq. (31) yields the two-body selfenergy, \(\Sigma_{Q\bar{Q}}(z)\). Interference effects lead to a suppression of the imaginary part of the two-body selfenergy (relative to the sum of the single-particle absorptive parts), sometimes referred to as "imaginary part of the potential" (it is, in fact, a suppression of the latter) [40]. In the \(T\)-matrix formalism this amounts to a 3-body contribution which is rather challenging to compute explicitly [28]. Instead, an \(r\)-dependent suppression factor of a functional form guided by perturbative results has been implemented [28] with a factorizable ansatz, \(\Sigma_{Q\bar{Q}}(z,r)=\Sigma_{Q\bar{Q}}(z)\phi(r)\), where the function \(\phi(r)\) will be part of the constraints from the lQCD data for static HQ free energies at each temperature. The interference effect is mostly relevant for deeply bound heavy quarkonia, where, in the color singlet channel, the imaginary part should vanish in the limit of \(r\to 0\) (corresponding to a color-neutral object). ### Equation of State The equation of state of a many-body system is encoded in the pressure, \(P(T,\mu)\), as a function of temperature and chemical potential. 
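Both the free-energy fit above and the EoS fit discussed below rely on the screened-potential ansatz of Eq. (28); a minimal implementation is sketched here, where \(c_{b}\) and \(c_{s}\) are set to the \(\chi=0.6\) values quoted in Sec. V.3.1 while \(\alpha_{s}\) and \(\sigma\) are illustrative placeholders rather than the fitted parameters of this work.

```python
import numpy as np

ALPHA_S = 0.27          # illustrative Coulomb coupling (placeholder)
SIGMA   = 0.225         # illustrative string tension [GeV^2] (placeholder)
C_B, C_S = 1.55, 0.06   # string-breaking / screening-relation constants quoted for chi=0.6

def m_string(m_d):
    """Screening mass of the confining term, m_s = (c_s * m_d^2 * sigma / alpha_s)^(1/4)."""
    return (C_S * m_d**2 * SIGMA / ALPHA_S) ** 0.25

def v_coulomb_screened(r, m_d):
    """Screened color-Coulomb potential of Eq. (28); r in GeV^-1, result in GeV."""
    return -4.0 / 3.0 * ALPHA_S * np.exp(-m_d * r) / r - 4.0 / 3.0 * ALPHA_S * m_d

def v_string_screened(r, m_d):
    """Screened confining potential of Eq. (28), with the (c_b*m_s*r)^2 term that
       accelerates the suppression of the long-range part (string breaking)."""
    m_s = m_string(m_d)
    return -SIGMA * np.exp(-m_s * r - (C_B * m_s * r) ** 2) / m_s + SIGMA / m_s

if __name__ == "__main__":
    hbarc = 0.1973                              # GeV*fm
    r = np.array([0.25, 0.5, 1.0]) / hbarc      # 0.25, 0.5, 1.0 fm expressed in GeV^-1
    for m_d in (0.1, 0.3):                      # two illustrative Debye masses [GeV]
        v = v_coulomb_screened(r, m_d) + v_string_screened(r, m_d)
        print(f"m_d={m_d:.2f} GeV -> V(r) [GeV] =", np.round(v, 3))
```

In the limit of vanishing screening masses these expressions reduce to the vacuum Cornell potential, as stated above.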
The EoS is driven by the dominant degrees of freedom in the medium, and is therefore sensitive to their spectral properties, including their masses. For a homogeneous grand canonical ensemble, the relationship between the EoS and the grand potential per unit volume is given by \(\Omega=-P\). We adopt the Luttinger-Ward-Baym (LWB) formalism which provides a diagrammatic and thermodynamically consistent quantum approach that allows to incorporate the off-shell dynamics of the one- and two-body correlation functions. Quantum effects are expected to become particularly important for a strongly coupled system with large scattering rates (widths) [41; 42; 43]. One has \[\Omega=\mp\frac{-1}{\beta}\sum_{n}\mathrm{Tr}\left\{\ln\left(-G^{-1}\right)+ \left[\left(G^{0}\right)^{-1}-G^{-1}\right]G\right\}\pm\Phi, \tag{35}\] where "\(\mathrm{Tr}\)" denotes the trace over spin, color, flavor and 3-momentum, \(\sum_{n}\) the Matsubara frequency sum, and the \(G^{0}\) and \(G\) are the free and fully dressed single-particle Green's function. The two-body interaction contribution is encoded in the Luttinger-Ward functional (LWF), \(\Phi=\sum_{v=1}^{\infty}\Phi_{v}\), where \[\Phi_{v}=\frac{-1}{\beta}\sum_{n}\mathrm{Tr}\left\{\frac{1}{2v}\left(\frac{-1 }{\beta}\right)^{v}\left[(-\beta)^{v}\Sigma_{v}(G)\right]G\right\} \tag{36}\] with \[\Sigma_{v}(G)=\int d\tilde{p}\left[VG_{(2)}^{0}VG_{(2)}^{0}\cdots V\right]G\;, \tag{37}\] using the notation \(\int d\tilde{p}\equiv-\beta^{-1}\sum_{n}\int d^{3}\mathbf{p}/(2\pi)^{3}\) with \(\tilde{p}\equiv(i\omega_{n},\mathbf{p})\). The \(\phi_{\nu}\) correspond to the "skeleton diagrams" of the \(\nu^{\mathrm{th}}\) order in the potential expansion. To account for possible bound-states formation and their contribution to the pressure, one has to resum the skeleton series. For non-separable interactions this has been achieved through a matrix-logarithm resummation technique in Refs. [44; 28; 45], resulting in a structure similar to the \(T\)-matrix resummation in Eq. (7): \[\begin{split}\Omega=&\sum_{j}\mp d_{j}\int d\tilde{p} \left\{\ln\left(-G_{j}(\tilde{p})^{-1}\right)\right.\\ &+\left[\Sigma_{j}(\tilde{p})-\frac{1}{2}\log\Sigma_{j}(\tilde{p} )\right]G_{j}(\tilde{p})\right\}\end{split} \tag{38}\] with \[\begin{split}\log\Sigma_{i}\left(z,\mathbf{p}_{1}\right)& =\int\frac{d^{3}\mathbf{p}_{2}}{(2\pi)^{3}}\int_{-\infty}^{ \infty}d\omega_{2}\frac{dE}{\pi}\frac{-1}{z+\omega_{2}-E}\\ &\times\frac{1}{d_{i}}\sum_{a,j}d_{j}^{ij}d_{a}^{ij}\operatorname{ Im}\left[\log T_{ij}^{\mathrm{a}}\left(E,\mathbf{p}_{1},\mathbf{p}_{2}\mid \mathbf{p}_{1},\mathbf{p}_{2}\right)\right]\\ &\times\rho_{j}\left(\omega_{2},\mathbf{p}_{2}\right)\left[n_{j} \left(\omega_{2}\right)\mp n_{ij}(E)\right].\end{split} \tag{39}\] The transformation of the \(T\)-matrices in Eq. (39) between the thermal and CM frame is given by Eq. (9). The grand potential can then be obtained after carrying out the sum over Matsubara frequencies in Eq. (38). ### Selfconsistent in-Medium Results We now turn to the selfconsistent in-medium results at four temperatures, \(T=0.194\) GeV, \(0.258\) GeV, \(0.320\) GeV and \(0.400\) GeV, constrained by the lQCD data for static HQ free energies (Sec. V.3.1) and QGP EoS (Sec. V.3.2). All in-medium calculations are carried out with the mixing coefficient for \(\chi=0.6\) and \(1\) in this study, but we do not yet incorporate the spin-dependent corrections. 
In particular in the light sector, _i.e._, for the QGP EoS, their effect can be rather significant and deserves a separate study (some compensatory effects are expected due to both attractive and repulsive contributions). Nevertheless, we want to ensure that the medium within which the heavy quarks are embedded satisfies basic constraints from lQCD. #### v.3.1 Static HQ Free Energies Recalling Eq. (30), the HQ free energy, \(F_{Q\bar{Q}}(r,T)\), is a functional of the potential, \(\widetilde{V}(r)\), and the two-body selfenergy, \(\Sigma_{Q\bar{Q}}(E+i\epsilon)\). Note that \(F_{Q\bar{Q}}(r,T)\) increases with increasing \(\widetilde{V}(r)\) but with decreasing \(|\Sigma_{Q}(E+i\epsilon)|\). A larger Debye screening mass, \(m_{d}\) and/or \(m_{s}\), suppresses \(\widetilde{V}(r)\) so that the partons become more weakly coupled, which in turn lowers \(F_{Q\bar{Q}}(r,T)\); at the same time, a larger \(m_{d}\) reduces \(|\Sigma_{Q}(E+i\epsilon)|\) in medium, which in turn enhances \(F_{Q\bar{Q}}(r,T)\). It is this competition between \(\widetilde{V}(r)\) and \(|\Sigma_{Q}(E+i\epsilon)|\) that leads to a non-monotonic behavior of \(F_{Q\bar{Q}}(r,T,m_{d})\) with \(m_{d}\). Since \(m_{d}\) is directly related to the free energy at infinite distance (cf. Sec. V.1), we define a function \(F_{trial}(r\to\infty,T,m_{d})\) calculated from our many-body approach which we require to be equal to the lQCD value, \(F_{lQCD}(r\to\infty,T)\). We typically find two solutions for \(m_{d}\) for any fixed parameter set provided the maximum of the trial free energy lies above the lQCD value. We denote the solutions with the smaller and the larger \(m_{d}\) as strongly coupled solution (SCS) and weakly coupled solution (WCS), respectively (in analogy to the two solutions which were found in Ref. [28]). Here, we only focus on the SCS which results in transport parameters in much better agreement with heavy-ion phenomenology (_i.e._, a liquid-like behavior with interaction energies comparable to the parton masses, as well as HQ transport parameters) than the WCS [28]. The resulting potentials and fits to lQCD results [46] for the HQ free energies with \(c_{b}=1.55\) (\(1.3\)) and \(c_{s}=0.06\) (\(0.01\)) for \(\chi=0.6\) (\(1\)) at different temperatures are shown in Fig. 6. As found in earlier studies, large HQ widths lead to a substantial enhancement of the potential over the free energies; in particular, at the lowest temperature of \(0.194\,\mathrm{GeV}\), the potential is close to the vacuum one, but becomes notably suppressed at higher \(T\). Consequently, the screening masses for Coulomb (\(m_{d}\)) and confining (\(m_{s}\)) potentials, shown in the right panel of Fig. 6, have rather small values at low \(T\), with the string interaction exhibiting a weaker screening with increasing \(T\). This implies that remnants of the confining force survive in the QGP well above the critical region. The potential with mixed confining interaction (\(\chi=0.6\)) is enhanced by the extra relativistic corrections (cf. Eq. (13)), requiring a stronger screening to fit the lQCD free-energy data. Therefore, the screening masses for confining potential for \(\chi=0.6\) are larger than those for \(\chi=1\), see the right panel of Fig. 6. However, note that the \(\chi=0.6\) solution generates a stronger force at relatively small distances, a feature that will figure importantly in the QGP structure and HQ transport properties. #### v.3.2 Equation of State In Fig. 
7 we display the pressure together with the fitted light-parton masses for \(\chi=0.6\) and \(1\), which allow for a good reproduction of the lQCD data. However, while the parton masses are effective in achieving a quantitative agreement with the lQCD results, the underlying quark and gluon spectral functions for \(\chi=0.6\) and \(1\) both feature large selfenergies, especially imaginary parts which, at low momentum and temperatures, are comparable to, or even larger, than the parton masses, cf. the spectral function widths in Fig. 8 (first and second rows). The large scattering rates are mostly driven by dynamical resonance formation in the underlying \(T\)-matrices (which in turn are generated by resumming the strong potential). These resonances contribute through the resummed LWF functional \(\Phi\sim 1/2\mathrm{log}\Sigma G\) introduced in Sec. V.2, whose contribution for \(\chi=0.6\) and \(1\) is displayed in Fig. 7. The increasing proportion of LWF contribution with decreasing temperature indicates the onset of a change in the degrees of freedom. Specifically, the LWF parts make up more than \(70(50)\%\) of the pressure at \(T=0.194\,\mathrm{GeV}\) for \(\chi=0.6(1)\). While the spectral functions for \(\chi=0.6\) generally share the main features with those for \(\chi=1\) at low momenta, a notable quantitative difference is that the widths for \(\chi=0.6\) do not fall off with momentum as much as those for \(\chi=1\). In the former case, this is a consequence of the 3-momentum dependence of the confining interaction whose vector component, through relativistic effects, generates additional interaction strength and thus larger scattering rates at larger momenta relative to the \(\chi=1\) case. ## VI Charm-Quark Transport Coefficients With the parameters of the interaction potential and parton masses fixed with the aid of lQCD data, we can now investigate the effect of the mixed potential on charm-quark transport properties in the QGP. As elaborated in Ref. [19] it is important to account for the off-shell properties of both charm quarks and thermal partons in the evaluation of the transport coefficient, especially due to the formation of near-threshold bound states which only provide limited phase for quasiparticle (on-shell) scattering. This point is further corroborated upon inspecting the equilibrium spectral functions, \(\rho_{q,g,c}\), of the partons displayed in Fig. 8, exhibiting large widths of \(\sim\)0.6 GeV or so at low momentum. As already mentioned in Sec. V.3.2, the main difference between \(\chi=0.6\) and \(1\) is that the widths for \(\chi=0.6\) do not fall off with momentum as much as those for \(\chi=1\). This feature persists in the heavy-light scattering amplitudes, which are the main ingredient to the charm-quark transport coefficients discussed below, see Fig. 9. At \(T=0.194\,\mathrm{GeV}\), the peak value of the imaginary part of the \(S\)-wave color-singlet heavy-light scattering amplitude for \(\chi=0.6\) still shows a rather marked decrease with increasing center-of-mass mass momentum of the colliding partons, but it is significantly weaker than for \(\chi=1\) with a purely scalar confining potential; _e.g._, the peak reduction from the \(p_{cm}=0\) to \(p_{cm}=0.5\,\mathrm{GeV}\) case is almost a factor 3 for the latter but only \(\sim\)1.5 for \(\chi=0.6\). 
This trend continues to higher temperatures; at \(T=0.400\,\mathrm{GeV}\), the peak reduction from \(p_{cm}=0\) to \(p_{cm}=0.6\,\mathrm{GeV}\) is essentially absent for \(\chi=0.6\), while it is still a factor of 1.6 for the purely scalar confining potential. The \(T\)-matrix amplitudes for \(\chi=0.6\) are smaller than that for \(\chi=1\) at low momenta due to its stronger screening in confining potential (recall its larger Debye screening masses in the right panel of Fig. 6); however, they exceed the ones for \(\chi=1\) for \(p_{cm}\raisebox{-3.0pt}{$\,\stackrel{{>}}{{\sim}}\,$}0.5\, \mathrm{GeV}\) due to the harder 3-momentum dependence of confining potential through relativistic effects. The \(T\)-matrices for other partial waves and color channels share similar features, and thus we do not re produce them here. Turning now to the HQ transport coefficients in the QGP, we adopt their standard definition through a Fokker-Planck equation where they amount to the first and second momentum of the momentum transfer of the heavy-light scattering amplitude squared, integrated over the thermal-parton distributions. At this level, off-shell effects can be readily implemented by an additional energy convolution over the light-parton spectral functions. However, since also charm quarks acquire widths which are not small, the inclusion of their spectral width is also warranted. This has been worked out in Ref. [19] employing the Kadanoff-Baym equations, resulting in the following expression for the HQ friction coefficient (or relaxation rate): \[A(p)= \sum_{i}\frac{1}{2\varepsilon_{c}(p)}\int\frac{d\omega^{\prime}d^ {3}\mathbf{p}^{\prime}}{(2\pi)^{3}2\varepsilon_{c}\left(p^{\prime}\right)} \frac{d\nu d^{3}\mathbf{q}}{(2\pi)^{3}2\varepsilon_{i}(q)}\frac{d\nu^{\prime}d ^{3}\mathbf{q}^{\prime}}{(2\pi)^{3}2\varepsilon_{i}\left(q^{\prime}\right)} \tag{40}\] \[\times\delta^{(4)}\frac{(2\pi)^{4}}{d_{c}}\sum_{a,l,s}|M|^{2} \rho_{c}\left(\omega^{\prime},p^{\prime}\right)\rho_{i}(\nu,q)\rho_{i}\left( \nu^{\prime},q^{\prime}\right)\] \[\times\left[1-n_{c}\left(\omega^{\prime}\right)\right]n_{i}(\nu) \left[1\pm n_{i}\left(\nu^{\prime}\right)\right]\left(1-\frac{\mathbf{p}\cdot \mathbf{p}^{\prime}}{\mathbf{p}^{2}}\right)\,.\] As before (recall Sec. II), \(\varepsilon_{i/c}\), \(\varepsilon_{p/c}\) and \(n_{i/c}\) are the dispersion relations, spectral and thermal-distribution functions for partons \(i/c\), respectively, \(\delta^{(4)}\) is a short-hand notation for the energy-momentum conserving \(\delta\)-function in the 2\(\rightarrow\)2 scattering process, and \(d_{c}=6\) the spin-color degeneracy of charm quarks. The summation \(\sum_{i}\) is over all light-flavor quarks and gluons, \(i=u,\bar{u},d,\bar{d},s,\bar{s},g\), where the masses for light and strange quarks are assumed to be the same. In the above expression, the quasiparticle approximation is only applied to the incoming charm quark by assigning it a sharp energy \(\varepsilon_{c}(\mathbf{p})\) at momentum \(\mathbf{p}\), while all other partons are treated via off-shell integrations. The heavy-light scattering matrix elements, \(|M|^{2}\), in Eq. 
(40) are related to the \(T\)-matrix in the CM frame by \[\sum_{a,L,s}\left|M^{2}\right|=16\varepsilon_{c}\left(p_{\rm cm} \right)\varepsilon_{i}\left(p_{\rm cm}\right)\varepsilon_{c}\left(p^{\prime}_ {\rm cm}\right)\varepsilon_{i}\left(p^{\prime}_{\rm cm}\right)d_{s}^{ci} \tag{41}\] \[\times\sum_{a}d_{a}^{ci}\left|4\pi\sum_{L}(2L+1)T_{ci}^{a,L} \left(E_{\rm cm},p_{\rm cm},p^{\prime}_{\rm cm}\right)P_{L}(x)\right|^{2}\] with the color and spin degeneracies of the two-body system, \(d_{a,s}^{ci}\). The heavy-light \(T\)-matrix, \(T_{ci}^{a,L}(E_{\rm cm},p_{\rm cm},p^{\prime}_{\rm cm})\), is calculated in the CM frame in all possible two-body color channels, \(a\), and partial-wave channels, \(L\) (expanded up to \(L=8\) to ensure convergence at high momenta). The CM energy \(E_{\rm cm}\), incoming CM momentum \(p_{\rm cm}\), outgoing CM momentum \(p^{\prime}_{\rm cm}\), and scattering angle, \(x=\cos\theta_{\rm cm}\), are expressed as functions of \(E,\mathbf{p},\mathbf{q},\mathbf{p}^{\prime},\mathbf{q}^{\prime}\), through the transformation in Eq. (9). Instead of only the moduli of \(p_{\rm cm}\) and \(p^{\prime}_{\rm cm}\), their explicit Figure 6: Left and middle panels: The in-medium HQ free energies (blue) and potentials (orange) for \(\chi=0.6\) (solid) and \(1\) (dashed) at different temperatures in comparison to lQCD data for the HQ free energies from Ref. [46] for \(N_{f}=2+1\) light flavors (black dots). The right panel shows the temperature dependence of the screening masses, \(m_{d}\) (blue) and \(m_{s}\) (orange), for \(\chi=0.6\) (solid) and \(1\) (dashed) as resulting from our fit. The \(\chi=1\) results are taken from Ref. [28]. vector form is required here [19]: \[p_{\rm cm\parallel}=\frac{\varepsilon_{\rm p_{2}}p_{1\parallel}-\varepsilon_{\rm p _{1}}p_{2\parallel}}{\sqrt{s_{\rm on}}},\ \ \ {\bf p}_{\rm cm\perp}=\frac{{\bf p}_{1}p_{2\parallel}-{\bf p}_{2}p_{1 \parallel}}{|{\bf p}_{1}+{\bf p}_{2}|}, \tag{42}\] with \(\parallel\) and \(\perp\) indicating parallel and perpendicular to the relative velocity, respectively, and likewise for the outgoing (primed) momenta. In Fig. 10 we plot our results for the friction coefficient \(A(p)\) with the mixed confining potential (\(\chi=0.6\)) in comparison to the results with a purely scalar confining potential (\(\chi=1\)). We stipulate that both calculations are carried out for a thermal QGP medium which satisfies the constraints from the EoS and HQ free energy. With the vector component in the confining potential, the low-momentum values of the relaxation rate are enhanced by several tens of percent, but the more significant effect is the increase at higher momenta, for the same reasons as discussed above in the context of the single-parton spectral functions and their scattering amplitudes. For example, for a charm-quark momentum of \(4\,\)GeV, the enhancement is about a factor 2.6, while at momenta of \(10\,\)GeV it reaches an even larger factor of \(\sim\)3.5 at the lowest temperature. However, at the latter momentum, radiative contributions are expected to be large. At first sight it might be surprising that the enhancement due to the vector component in the confining potential also transpires at low momenta although the pertinent \(T\)-matrix amplitudes are smaller than those with purely scalar confining potential at low CM momenta (cf. Fig. 9). 
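The vector decomposition of Eq. (42) can be illustrated with a short sketch that boosts a charm quark and a thermal light parton to their two-body CM frame; both partons are treated as on-shell quasiparticles here (the full calculation keeps the off-shell energies), and the masses and momenta are placeholder values.

```python
import numpy as np

def eps(m, p_vec):
    """On-shell quasiparticle energy (the full calculation uses off-shell energies)."""
    return np.sqrt(m**2 + np.dot(p_vec, p_vec))

def cm_momentum(p1, p2, m1, m2):
    """CM-frame momentum vector of particle 1, following Eq. (42): components
       parallel and perpendicular to the total momentum of the pair."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1, e2 = eps(m1, p1), eps(m2, p2)
    P = p1 + p2
    P_hat = P / np.linalg.norm(P)
    p1_par, p2_par = np.dot(p1, P_hat), np.dot(p2, P_hat)
    s_on = (e1 + e2) ** 2 - np.dot(P, P)            # on-shell Mandelstam s
    p_cm_par = (e2 * p1_par - e1 * p2_par) / np.sqrt(s_on)
    p_cm_perp = (p1 * p2_par - p2 * p1_par) / np.linalg.norm(P)
    return p_cm_par * P_hat + p_cm_perp

if __name__ == "__main__":
    m_c, m_q = 1.85, 0.30                           # illustrative charm and light-quark masses [GeV]
    p_charm = np.array([0.0, 0.0, 0.0])             # charm quark at rest in the thermal frame
    p_light = np.array([0.0, 0.4, 0.3])             # typical thermal-parton momentum [GeV]
    p_cm = cm_momentum(p_charm, p_light, m_c, m_q)
    print("p_cm =", np.round(p_cm, 3), " |p_cm| =", round(float(np.linalg.norm(p_cm)), 3), "GeV")
```

Even for a charm quark at rest in the thermal frame, a typical thermal-parton momentum yields a \(p_{\rm cm}\) of a few hundred MeV, which bears on the apparent tension just noted between smaller low-momentum amplitudes and a larger friction coefficient.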
To some extent this can be understood due to the fact that even at vanishing HQ momentum the thermal motion of the surrounding medium partons creates a finite momentum in the CMS, but there is also a non-trivial interference effect in the expression (41) that plays a role. To scrutinize different contributions, we take the charm-light contribution (\(c\bar{q}\)) for \(A(p=0)\) at \(T=194\,\)MeV as an example and collect in Tab. 9 partial-wave components of the collision rate (obtained by replacing \((1-\frac{{\bf p}\cdot{\bf p}^{\prime}}{{\bf p}^{2}})\) by 1 in Eq. (40)) and relaxation rate up to angular momenta of 2 (note that for the collision rate the interference contributions should vanish, whereby the small numerical values quoted in the table, which are of the order of 1-2 permille of the total, are an indication of our numerical accuracy). We denote by "\(LL^{\prime}\)" the terms \(\sim T^{L}(T^{L^{\prime}})^{*}={\rm Re}T^{L}{\rm Re}T^{L^{\prime}}+{\rm Im}T^{L} {\rm Im}T^{L^{\prime}}+i(-{\rm Re}T^{L}{\rm Im}T^{L^{\prime}}+{\rm Im}T^{L}{\rm Re }T^{L^{\prime}})\) in Eq. (41), where the imaginary part vanishes by definition since \(T^{L}(T^{L^{\prime}})^{*}+(T^{L})^{*}T^{L^{\prime}}=2({\rm Re}T^{L}{\rm Re}T^{L^ {\prime}}+{\rm Im}T^{L}{\rm Im}T^{L^{\prime}})\). In accord with the \(T\)-matrix behavior at low momenta in Fig. 9, the collision rate for \(\chi=1\) is larger than that for \(\chi=0.6\) for each partial-wave component, leading to a larger total collision rate at low momentum. The situation is more involved for the relaxation rate: the diagonal partial-wave components (\(L=L^{\prime}\)) for \(\chi=1\) are smaller than those for \(\chi=0.6\), and one also notices the relatively more important role of the higher partial waves compared to the collision rate (which is dominated by the \(S\)-wave contribution). In addition, the presence of the \({\bf p}\cdot{\bf p}^{\prime}\) term, together with the Legendre polynomials, causes negative interference components (\(L\neq L^{\prime}\)), and their absolute values are larger for \(\chi=1\). Upon adding the diagonal and interference components the relaxation rate for \(\chi=0.6\) becomes larger. The widely discussed spatial diffusion coefficient, \(D_{s}=T/(M_{c}A(p=0))\), is related to the relaxation time, \(\tau_{c}=1/A(p=0)\), at vanishing 3-momentum of the heavy quark. It is commonly scaled by the inverse thermal wavelength, \(2\pi T\), to render a dimensionless quantity for which we display our results in Fig. 11 as a function of temperature. The \(\chi=0.6\) result shows a mild reduction relative to the \(\chi=1\) one, which is again caused by the larger average momenta of the thermal partons probed by the charm quark. The increase in the elastic charm-quark friction coefficient, and in particular its harder 3-momentum dependence, found here could have significant ramifications for the phenomenology of open HF probes in URHICs. In a recent work [47] a good description of \(D\), \(D_{s}\) and \(\Lambda_{c}\) observables in heavy-ion collisions has been achieved using the \(T\)-matrix based transport coefficients from Refs. [17; 48], which are based on the internal-energy (\(U\)) as a potential proxy but with an extra \(K\) factor of about 1.6. The pertinent results for \(A(p,T)\) (with \(K=1.6\)) are slightly larger than the ones from the SCS Figure 7: The pressure (normalized by \(T^{4}\)) in comparison to the lQCD data (black dots) from Ref. 
[34] (upper panel), and the in-medium light-quark and gluon masses as a function of temperature (lower panel). The \(\chi=1\) results are taken from Ref. [28]. Figure 8: Single-parton spectral functions for light quarks (first row), gluons (second row) and charm quarks (third row) with \(\chi=0.6\) (solid) and \(1\) (dashed) as a function of energy for various 3-momenta in each panel. From left to right, the four columns correspond to temperatures of \(T=194\), \(258\), \(320\) and \(400\,\)MeV, respectively. The \(\chi=1\) results are taken from Ref. [28]. Figure 9: The imaginary part of the \(S\)-wave charm-light \(T\)-matrices in the color-singlet channel at different temperatures. The \(T\)-matrix is displayed for four different values of the CM momentum (\(p_{cm}\)) in each panel. The \(\chi=1\) results are taken from Ref. [28]. with \(\chi=1\) at low momentum, but much larger at higher momenta. However, with our new \(\chi=0.6\) results, the low-momentum deficit can be overcome, while they still fall below the high-momentum results of the \(U\)-potential with \(K=1.6\). Yet, the inclusion of radiative processes, as computed within the \(T\)-matrix approach in Ref. [49] could result in a total transport coefficients that are quite comparable to the one employed in Ref. [47], without the need of any phenomenological adjustments. ## VII Conclusions We have augmented the thermodynamic \(T\)-matrix approach to include the effects of spin-dependent interactions between heavy quarks, including spin-orbital, spin-spin and tensor contributions, as part of the more general objective to assess \(1/M_{Q}\) corrections. Toward this end we have utilized the Breit-Fermi Hamiltonian to derive these interactions in the context of the Cornell potential as the two-body interaction kernel for the \(T\)-matrix equation. When benchmarking these interactions using the experimentally observed splittings in vacuum quarkonium spectroscopy, we have found that, in accordance with previous studies, a moderate admixture of a Lorentz-vector component in the confining potential allows for a much improved description especially in the charmonium sector. We have then implemented the amended interaction kernel into our quantum many-body approach for the QGP. While the spin-dependent interactions themselves are expected to be of minor importance (and therefore have been neglected), the vector component of the confining potential turns out to be rather significant. After selfconsistently refitting the in-medium HQ free energies and the QGP EoS under the inclusion of the vector component, quantitative modifications of the QGP properties toward shorter distances (larger momenta) were found. A strong broadening of the thermal-parton spectral functions persists to higher 3-momenta as a consequence of an increased interaction strength in the thermodynamic 2-body scattering amplitudes at larger momenta. The harder amplitudes and spectral functions are a consequence of the relativistic corrections induced by the vector part of the confining interaction, as opposed to a purely scalar interaction. This suggests that the nature of the confining force in the QCD vacuum has an impact on the properties of the strongly-coupled QGP, with liquid properties that extend to higher resolution scales compared to a purely scalar confining force. Finally, we have applied the modified set-up to calculate the friction coefficient of charm quarks. 
As compared to the results with a purely scalar string potential, a slightly larger relaxation rate is found at small momentum (and a pertinent decrease in the diffusion coefficient), but a much larger increase of a factor of \(\sim\)2-3 (or more) at momenta of around 5 GeV (and above). These are promising features to make a significant step forward in achieving a quantitative description of HF diffusion in heavy-ion collisions at RHIC and the LHC based on microscopically and non-perturbatively calculated transport coefficients. Work in this direction is in progress. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(LL^{\prime}\) & 00 & 11 & 22 & 01=10 & 12=21 \\ \hline \(\chi=1\) & 0.8058 & 0.3321 & 0.0678 & 0.0038 & 0.0016 \\ \(\chi=0.6\) & 0.7308 & 0.2943 & 0.0533 & 0.0022 & 0.0014 \\ \hline \hline \(\chi=1\) & 0.1501 & 0.1138 & 0.0354 & \(-0.0521\) & \(-0.0306\) \\ \(\chi=0.6\) & 0.1522 & 0.1313 & 0.0359 & \(-0.0322\) & \(-0.0305\) \\ \hline \hline \end{tabular} \end{table} Table 9: The contributions of various partial-wave components (specified in the first row) of the \(c\bar{q}\) scattering amplitude to the \(c\)-quark collision rate (lines 2 and 3) and relaxation rate (lines 4 and 5) for \(p=0\) at \(T=194\,\mathrm{MeV}\) (in units of fm\({}^{-1}\)). Figure 10: The charm-quark friction coefficient at different temperatures for \(\chi=0.6\) (solid) and 1 (dashed). The \(\chi=1\) results are taken from Ref. [19]. Figure 11: The charm-quark spatial diffusion coefficient for \(\chi=0.6\) (solid) and 1 (dashed). The \(\chi=1\) results are taken from Ref. [19]. ###### Acknowledgements. This work has been supported by the U.S. National Science Foundation under grant nos. PHY-1913286 and PHY-2209335, and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through the Topical Collaboration in Nuclear Theory on _Heavy-Flavor Theory (HEFTY) for QCD Matter_ under award no. DE-SC0023547.
2307.14352
General Image-to-Image Translation with One-Shot Image Guidance
Large-scale text-to-image models pre-trained on massive text-image pairs show excellent performance in image synthesis recently. However, image can provide more intuitive visual concepts than plain text. People may ask: how can we integrate the desired visual concept into an existing image, such as our portrait? Current methods are inadequate in meeting this demand as they lack the ability to preserve content or translate visual concepts effectively. Inspired by this, we propose a novel framework named visual concept translator (VCT) with the ability to preserve content in the source image and translate the visual concepts guided by a single reference image. The proposed VCT contains a content-concept inversion (CCI) process to extract contents and concepts, and a content-concept fusion (CCF) process to gather the extracted information to obtain the target image. Given only one reference image, the proposed VCT can complete a wide range of general image-to-image translation tasks with excellent results. Extensive experiments are conducted to prove the superiority and effectiveness of the proposed methods. Codes are available at https://github.com/CrystalNeuro/visual-concept-translator.
Bin Cheng, Zuhao Liu, Yunbo Peng, Yue Lin
2023-07-20T16:37:49Z
http://arxiv.org/abs/2307.14352v3
# General Image-to-Image Translation with One-Shot Image Guidance ###### Abstract Large-scale text-to-image models pre-trained on massive text-image pairs have recently shown excellent performance in image synthesis. However, images can provide more intuitive visual concepts than plain text. People may ask: how can we integrate the desired visual concept into an existing image, such as our portrait? Current methods are inadequate in meeting this demand as they lack the ability to preserve content or translate visual concepts effectively. Inspired by this, we propose a novel framework named visual concept translator (VCT) with the ability to preserve content in the source image and translate the visual concepts guided by a single reference image. The proposed VCT contains a content-concept inversion (CCI) process to extract contents and concepts, and a content-concept fusion (CCF) process to gather the extracted information to obtain the target image. Given only one reference image, the proposed VCT can complete a wide range of general image-to-image translation tasks with excellent results. Extensive experiments are conducted to prove the superiority and effectiveness of the proposed methods. Codes are available at [https://github.com/CrystalNeuro/visual-concept-translator](https://github.com/CrystalNeuro/visual-concept-translator). ## 1 Introduction The image-to-image translation (I2I) task aims to learn a conditional generation function that translates images from the source to the target domain with the source content preserved and the target concept transferred [34, 46]. General I2I can complete a wide range of applications without dedicated model design or training from scratch [45]. Traditionally, generative adversarial networks (GANs) or normalizing flows [11] are mainly applied to I2I tasks [19, 34, 3]. However, these methods suffer from the problem of lacking adaptability [41]. A model trained on one source-target dataset cannot adapt to another one, so these methods fail to work in the scenario of general I2I. Diffusion-based image synthesis has developed rapidly in recent years due to the application of large-scale models [35, 37, 33]. Their strength lies in using a large number of image-text pairs for model training, so diverse images can be generated by sampling in the latent space guided by a specific text prompt. However, in our daily life, we receive massive visual signals containing abundant visual concepts. These visual concepts are difficult to describe in plain text, just as the adage "A picture is worth a thousand words" suggests. In addition, I2I guided by reference images has wide applications including game production, artistic creation, and virtual reality. Therefore, research on image-guided I2I holds great potential in the computer vision community. Several works try to extract visual information from images with the desired concepts. Specifically, [9] proposes a technique named textual inversion (TI) which freezes the model and learns a text embedding to represent the visual concepts. On the basis of TI, DreamBooth [36] and Imagic [20] are proposed to alleviate the overfitting caused by model fine-tuning. The above methods work under the few-shot setting, but sometimes collecting several related images containing the same concept is difficult. To address this problem, [7] proposes to use both positive and negative text embeddings to fit the one-shot setting. However, these methods cannot be directly used in I2I tasks because they cannot preserve the content in the source image. 
In order to preserve the source contents, the recently proposed DDIM inversion [6, 40] finds out the deterministic noise along the reverse direction of the diffusion backward process. Then, some studies[30, 12] further apply and improve the DDIM inversion to text-guided image editing. However, these methods are text-conditional so they fail to understand the visual concepts from reference images. Alternately, some works [49, 41] try to connect the source and target domain with image condition, but their models are task-specific so they cannot be used in general I2I. In this paper, to complete the general I2I tasks guided by reference images, we propose a novel framework named visual concept translator (VCT) with the ability to preserve content in the source image and translate the visual concepts with a single reference image. The proposed VCT solves the image-guided I2I by two processes named content-concept inversion (CCI) and content-concept fusion (CCF). The CCI process extracts contents and concepts from source and reference images through pivot turning inversion and multi-concept inversion, The CCF process employs a dual-stream denoising architecture to gather the extracted information to obtain the target image. Given only one reference image, the proposed VCT can complete a wide range of general image-to-image translation tasks with excellent results. Extensive experiments including massive tasks of general I2I and style transfer are conducted for model evaluation. In summary, our contributions are as follows (1) We propose a novel framework named visual concept translator (VCT). Given only a single reference image, VCT can complete the general I2I tasks with the ability to preserve content in the source image and translate the visual concepts. (2) We propose a content-concept inversion (CCI) to extract contents and concepts with pivot turning inversion and multi-concept inversion. We also propose a content-concept fusion (CCF) process to gather the extracted information with a dual-stream denoising architecture. (3) Extensive experiments including massive tasks of general I2I and style transfer are conducted for model evaluation. The generation results show the high superiority and effectiveness of the proposed methods. ## 2 Related Works ### Image-to-image Translation The I2I aims to translate an image from the source domain to the target domain. The current I2I paradigms are mostly based on GANs [1, 29, 8, 53, 58, 50, 59]. However, these methods suffer from the problem of lacking adaptability [41]. The model trained in one source-target dataset cannot adapt to another one. In addition, large training images are always required for these methods. The TuiGAN proposed by Lin et al. [27] can achieve translation with only one image pair, but their method necessitates retraining the whole network for each input pair, which is very time-consuming. One specific type of I2I named image style transfer tries to transform the image style from source to target. The seminal work of Gatys et al. [10] shows that artistic images can be generated by separating content and style with the deep neural network. Then, to realize real-time style transfer, Johnson et al. [18] train a feed-forward network to handle the optimization problem mentioned by Gatys et al. Many works [47, 42, 43, 24, 17, 23] are categorized into per-style-per-model where the trained model can only fit one specific style. 
In order to increase the model flex ibility, arbitrary style transfer is realized by many studies [15, 31, 16, 4, 28, 39, 48] where only single forward pass is needed for any input style image. However, these methods fail to generalize to general I2I tasks such as face swap because they lack the ability to process fine-grained information. ### Diffusion-based Image Synthesis Large-scale diffusion models conditioned on the plain text have shown good performance in high-resolution image syntheses recently, such as Stable Diffusion [35], Imagen [37] and DALL-E 2 [33]. The large text-image models [5, 32] are used by these methods to achieve text-guided synthesis. However, the text used to generate the target images is sometimes unavailable, so the inversion technique is used by some works [9, 36, 20] to learn a text embedding to guide the pre-trained large-scale diffusion models. To achieve translation of images from the source to the target domain, DDIM inversion [6, 40] finds out the deterministic noise vector with text condition along the reverse direction of the backward process, but this method is guided by text only. Our proposed method tries to handle the above drawbacks and fuses the abundant visual concepts from the image to complete the general I2I tasks. ## 3 Methods ### Preliminaries **Latent Diffusion Models**. Diffusion models are probabilistic generative models in which an image \(x_{0}\) is generated by progressively removing noise from an initial Gaussian noise image \(x_{T}\sim\mathcal{N}\left(0,\mathbf{I}\right)\) in the sequence of \(x_{T},x_{T-1},...,x_{1},x_{0}\). With the remarkable capacity of image generation, the Latent Diffusion Model (LDM) [35] is utilized as our model backbone. Different from the conventional diffusion models that perform denoising operations directly in the image space, LDM conducts the process in the latent space with an autoencoder. Specifically, an input image x is encoded into the latent space by the autoencoder \(z=\mathcal{E}(x),\hat{x}=\mathcal{D}(z)\) (with an encoder \(\mathcal{E}\) and a decoder \(\mathcal{D}\)) pre-trained with a large number of images. Then, the denoising process is achieved by training a neural network \(\epsilon_{\theta}\left(z_{t},t,v\right)\) that predicts the added noise, following the objective: \[\min_{\theta}E_{z_{0},\epsilon\sim\mathcal{N}(0,I),t\sim\mathrm{U}(1,T)}\left\| \epsilon-\varepsilon_{\theta}\left(z_{t},t,v\right)\right\|_{2}^{2}. \tag{1}\] Note that \(v\) is the text embedding generated from the text condition and \(z_{t}\) is the noisy latent in timestamp \(t\). \(z_{t}\) is generated by adding noise to the sampled data \(z_{0}\) as \[z_{t}=\sqrt{\alpha_{t}}z_{0}+\sqrt{1-\alpha_{t}}\epsilon, \tag{2}\] with \(0=\alpha_{t}<\alpha_{t-1}<...<\alpha_{0}=1\), which are hyperparameters of the diffusion schedule, and \(\epsilon\sim\mathcal{N}\left(0,I\right)\). The text embedding \(v\) is obtained by \(v=\tau\left(y\right)\) where \(\tau\) is a BERT [5] tokenizer and \(y\) is a text prompt. The tokenizer \(\tau\) converts each word or sub-word in an input string to a token, which is an index in a specific pre-defined dictionary. Each token is then linked to a unique embedding vector that can be retrieved through an index-based lookup. Figure 2: **The overall visual concept translator(VCT) framework**. 
Given a source image \(x^{src}\) and a reference image \(x^{ref}\): (A) Content-concept inversion (CCI) process, we apply Pivot Turning Inversion with \(x^{src}\) to obtain the source text embedding \(v^{src}\). Meanwhile, we apply Multi-concept Inversion with \(x^{ref}\) to learn the reference text embedding \(v^{ref}\). (B) Content-concept fusion(CCF) process, we employ a dual-stream denoising architecture for image translation work, including a main branch \(\mathcal{B}\) and a content matching branch \(\mathcal{B}^{*}\). They start with the same initial noise inverted by \(x^{src}\) using DDIM inversion. The content matching branch reconstructs the source image and extracts the attention maps for the attention control mechanism. Finally, the main branch gathers all the information to obtain a target image \(x^{tgt}\). **Texture inversion**. Textual Inversion (TI) [9] introduces a new concept in a pre-trained text conditional generative model by learning an embedding \(e^{*}\) as pseudo-words \(S^{*}\). With a small collection of images \(X\), TI do so by solving the following optimization problem: \[\min_{e}E_{x}\,{\sim}\,\mathcal{U}_{\mathcal{K}}\,E_{z_{t}\sim q(z_{t}|x)}\, \norm{\epsilon-\hat{\varepsilon}_{\theta}\left(z_{t},t,\tau\left(y,S^{*}\right) \right)}_{2}^{2}\,. \tag{3}\] As such, it motivates the learned embedding \(e^{*}\) to capture fine visual details unique to the concept at a coarse level. **DDIM inversion**. Inversion entails finding a noise map \(z_{t}\) that reconstructs the input latent code \(z_{0}\) upon sampling. A simple inversion technique was suggested for the DDIM sampling [6, 40], based on the assumption that the ODE process can be reversed in the limit of small steps: \[z_{t+1}=\sqrt{\bar{\alpha}_{t+1}}f_{\theta}\left(z_{t},t,v\right)+\sqrt{1- \bar{\alpha}_{t+1}}\varepsilon_{\theta}\left(z_{t},t,v\right). \tag{4}\] where \(z_{t}\) is noised latent code at timestep \(t\), \(\bar{\alpha}_{t+1}\) is noise scaling factor as defined in DDIM[6], and \(f_{\theta}\left(z_{t},t,v\right)\) predicts the final denoised latent code \(z_{0}\). \[f_{\theta}\left(x_{t},t,c\right)=\frac{x_{t}-\sqrt{1-\bar{\alpha}_{t}} \epsilon_{\theta}\left(x_{t},t,c\right)}{\sqrt{\bar{\alpha}_{t}}} \tag{5}\] In other words, the diffusion process is performed in the reverse direction, that is \(z_{0}\to z_{T}\) instead of \(z_{T}\to z_{0}\), where \(z_{0}\) is set to be the encoding of the given real image. **Classifier-free guidance**. The diffusion model may ignore the conditional input and produce results uncorrelated with this input. One way to address this is the classifier-free guidance[14]. During the denoising step, with a guidance scale \(w\geq 1\), the classifier-free guidance prediction is defined by: \[\tilde{\varepsilon}_{\theta}\left(z_{t},t,v\right)=w\cdot\varepsilon_{\theta }\left(z_{t},t,v\right)+\left(1-w\right)\cdot\varepsilon_{\theta}\left(z_{t}, t,v_{\varnothing}\right). \tag{6}\] where \(v_{\varnothing}\) represents the embedding of a null text. ### Overall Framework Given a source image \(x^{src}\) and a reference image \(x^{ref}\), the goal of VCT is to generate a new image \(x^{tgt}\) that complies with \(x^{ref}\) while preserving the structure and semantic layout of \(x^{src}\). Fig. 2 shows the overall framework of the proposed VCT including a content-concept inversion (CCI) process and a content-concept fusion (CCF) process. As shown in Fig. 
2 (a), the CCI process extracts contents and concepts from source image \(x^{src}\) and reference image \(x^{ref}\) into learnable embeddings. Then in Fig. 2 (b), the CCF process employs a dual-stream denoising architecture including a main branch \(\mathcal{B}\) and a content matching branch \(\mathcal{B}^{*}\), and both branches starts from the same initial noise inverted by \(x^{src}\). The content matching branch reconstructs the source image and extracts the attention maps to guide the main process by the attention control mechanism. Then, the main branch gathers all information to obtain a target image \(x^{tgt}\). For better understanding, we first explain the CCF process in Section 3.3, then we describe the CCI process in Section 3.4. ### Content-concept Fusion \(\epsilon\)**Space Fusion**. Given two different text embedding \(v^{src}\) and \(v^{ref}\), they can be guided separately and yield two different noise prediction \(\epsilon^{src}\) and \(\epsilon^{ref}\): \[\epsilon^{src}=\varepsilon_{\theta}\left(z_{t},t,v^{src}\right),\epsilon^{ ref}=\varepsilon_{\theta}\left(z_{t},t,v^{ref}\right). \tag{7}\] We call this space \(\epsilon\)_space_, as shown in Fig. 3. According to the conclusion stated by classifier guidance[6] and classifier-free guidance[14], the noise prediction in \(\epsilon\) space in each diffusion step can be interpreted as score estimation function \(\varepsilon_{\theta}\left(z_{t},t,v\right)\approx-\sigma_{t}\nabla_{z_{t}} \log p\left(z_{t}\mid v\right)\), where \(\nabla_{z_{t}}\log p\left(z_{t}\mid v\right)\) represents the gradient of log-likelihood of an implicit classifier \(p\left(v\mid z_{t}\right)\propto p\left(z_{t}\mid v\right)/p\left(z_{t}\right).\) Figure 4: **Dual-stream denosing architecture.** Figure 3: \(\epsilon\)**Space Fusion**. In each denoising step \(t\), the text embeddings \(v_{t}^{src}\) and \(v^{ref}\) are extrapolated with guidance scale \(w\) in \(\epsilon\) space. Under the score estimation function view of \(\epsilon\) space, the independent feature \(v^{src}\) and \(v^{ref}\) can be fused in the \(\epsilon\) space to generate images containing certain attributes from both the source image and the reference image: \[\tilde{\varepsilon}_{\theta}\left(z_{t},t,v^{src},v^{ref}\right)=w\cdot\epsilon ^{src}+(1-w)\cdot\epsilon^{ref}. \tag{8}\] \(w\) is the hyperparameter that balances the two terms. It's noted that the classifier-free guidance is a special case of Eq. 8. **Dual stream denoising architecture**. Based on the \(\epsilon\) fusion mechanism, we now turn to the image translation task. As shown in Fig. 4, let \(x^{T}\) be the initial noise, obtained by inverting \(x^{src}\) using DDIM inversion with Eq. 4, where we set \(v=v_{\varnothing}\). Starting from the same initial noise \(x^{T}\), we employ a dual-stream denoising architecture for I2I, denoted as a main branch \(\mathcal{B}\) and a content matching branch \(\mathcal{B}^{*}\). The content matching branch \(\mathcal{B}^{*}\) is a denoising process that perfectly reconstructs the source image \(x^{src}\) (with \(z^{src}\) perfectly reconstructed in latent space for LDMs), and the main branch \(\mathcal{B}\) is the denoising process that finally serves for the I2I tasks. \[\begin{split}\mathcal{B}^{*}:z_{T}&\to z_{T-1}^{*} \rightarrow...\to z_{1}^{*}\to z^{src}\\ \mathcal{B}:z_{T}&\to z_{T-1}\rightarrow... 
\to z_{1}\to z^{tgt}.\end{split} \tag{9}\] At each denoising step \(t\), the content matching branch \(\mathcal{B}^{*}\) extracts the text embedding \(v_{t}^{src}\) and the attention map \(M_{t}^{*}\), which serve the parallel denoising step in the main branch. With \(\mathcal{B}^{*}\), we obtain a meaningful embedding and the generated structure of the source image. To better inject the information of the source image \(x^{src}\), the dual-stream diffusion processes share almost the same computation pipeline, differing only in the reference embeddings used in \(\epsilon\) space fusion. We perform \(\epsilon\) space fusion in the content matching branch, as in the main branch, by: \[\tilde{\varepsilon}_{\theta}\left(z_{t},t,v^{src},v_{\varnothing}\right)=w\cdot\epsilon^{src}+(1-w)\cdot\epsilon^{\varnothing}. \tag{10}\] This sampling procedure reduces to classifier-free guidance. Eq. 8 and Eq. 10 must use the same \(w\) in the dual-stream diffusion architecture. **Attention control**. Recent large-scale diffusion models [35, 37, 33] incorporate conditioning by augmenting the denoising network \(\varepsilon_{\theta}\) with self-attention and cross-attention layers [2, 44]. Of particular interest are the _cross-attention maps_ and _self-attention maps_, denoted collectively as \(M\), which are observed to be tightly related to the structure of the image [12]. To this end, Hertz et al. [12] propose the _prompt-to-prompt_ editing framework for text-guided image translation, which controls the attention maps of the edited image by injecting the attention maps of the original image along the diffusion process. In our case, we employ soft attention control as described in _prompt-to-prompt_ [12]. Let \(M_{t}^{*}\) be the attention map of a single step \(t\) of the content matching branch, and \(M_{t}\) be the attention map of the main branch. The soft attention control is defined as: \[\widehat{M}=\text{{AC}}\left(M_{t},M_{t}^{*},t\right)=\begin{cases}M_{t}^{*}&\text{if }t<\tau\\ M_{t}&\text{otherwise}\end{cases} \tag{11}\] where \(\tau\) is a timestamp parameter that determines until which step the attention map replacement is applied. We define \(\tilde{\varepsilon}_{\theta}\left(z_{t},t,v^{src},v^{ref}\right)\left\{M\leftarrow\widehat{M}\right\}\) to be the function that overrides the attention map \(M\) in \(\tilde{\varepsilon}\) with the given map \(\widehat{M}\). ### Content-concept Inversion **Pivotal tuning inversion** is proposed to generate the content embedding that guides the CCF process. We start from DDIM inversion [6, 40]. In practice, a slight error is incurred at every step. For unconditional diffusion models, the accumulated error is negligible and DDIM inversion succeeds. However, recall that meaningful editing with the Stable Diffusion model [35] requires applying classifier-free guidance with a guidance scale \(w\). Mokady et al. [30] showed that such a guidance scale amplifies the accumulated error. To address this, they introduce the null-text inversion technique to reconstruct the image and further enable text-guided image translation. Null-text inversion modifies the unconditional embedding used for classifier-free guidance at each timestamp \(t\) so that it matches the initial conditional DDIM inversion trajectory. In our image-guided case, we do not know the exact text prompt of the source image \(x^{src}\).
So, inspired by [30], we implement unconditional DDIM inversion, and optimize the source embedding \(v_{t}^{src}\) in each timestamp \(t\) for accurately matching the source image \(x^{src}\), instead of the DDIM inversion trajectory. In each timestamp \(t\), we optimize the \(v_{t}^{src}\) by: \[\min_{v_{t}^{src}}\|z_{0}-\hat{z}_{0}\left(z_{t},v_{t}^{src}\right)\|_{2}^{2} \tag{12}\] where \(\hat{z}_{0}\left(z_{t},v_{t}^{src}\right)\) refers to the estimated clean latent \(\hat{z}_{0}\) given \(z_{t}\) and \(v_{t}^{src}\), using the Tweedie's formula [21]. We rewrite it as: \[\hat{z}_{0}\left(z_{t},v_{t}^{src}\right)=\frac{z_{t}}{\sqrt{\tilde{\alpha}_{ t}}}-\frac{\sqrt{1-\tilde{\alpha}_{t}}}{\sqrt{\tilde{\alpha}_{t}}}\tilde{ \varepsilon}_{\theta}\left(z_{t},t,v^{src},v_{\varnothing}\right) \tag{13}\] where \(\tilde{\varepsilon}_{\theta}\left(z_{t},t,v^{src},v_{\varnothing}\right)\) is defined in Eq. 10. Note that for every \(t<T\), the optimization should start from the endpoint of the previous step \(t+1\) optimization, which computes a constant \(z_{t}\) using the optimized \(v_{t+1}^{src}\) and \(z_{t+1}\). Otherwise, the learned embedding would not hold at inference. **Multi-concept inversion** is proposed to represent complex visual concepts by generating the concept embedding. Lastly, we should learn a reference embedding \(v^{ref}\) from the reference image \(x^{ref}\). The methodological approach is related to Textual Inversion [9] and DreamArtist [7]. To represent the concepts in the input images, Textual Inversion [9] learns an embedding as pseudo-words \(S_{*}\) from few-shot images. DreamArtist [7] improves Textual Inversion, which learns a paired positive and negative multi-concept embeddings(\(S_{*}^{p}\) and \(S_{*}^{n}\)) from one-shot image and proposes reconstruction constraint for detail enhancement. In our case, we apply a similar strategy as DreamArtist, yet our method offers two improvements: Firstly, we find that the multi-concept embeddings are useful for mining semantics information from the images, while the negative embeddings are optional. And in our pipeline, the negative embeddings are in conflict with the source embedding \(x^{src}\). Thus, we use single positive multi-concept embeddings for learning the reference text embedding \(v^{ref}\). We freeze the parameters of the generative diffusion model \(\varepsilon_{\theta}\), and optimize the \(v^{ref}\) using the denoising diffusion objective[13]: \[\mathcal{L}_{ldm}=E_{\epsilon,t}\left[\left\|\epsilon-\varepsilon_{\theta} \left(z_{t}^{ref},t,v^{ref}\right)\right\|_{2}^{2}\right]. \tag{14}\] where \(v^{ref}\) is the multi-concept embeddings, \(z_{t}^{ref}\) is a noisy version of \(z_{0}^{ref}\) (the latent code of the reference image \(x^{ref}\)) obtained using Eq. 2, \(\epsilon\sim\mathcal{N}(0,I)\) and \(t\sim\mathrm{U}(1,T)\). Secondly, we improve the reconstruction constraint for the mechanism of detail enhancement in DreamArtist. DreamArtist applys reconstruction constraint in \(x\) space, which can be denoted as \(\mathcal{D}\left(\hat{z}_{t-1}\left(z_{t},S^{*}\right)\right)\leftrightarrow x _{0}\). For one thing, optimization in \(x\) space suffers from huge resource consumption due to the gradient backpropagation inside decoder \(\mathcal{D}\). For another thing, there is a gap between the estimated \(z_{t-1}\) and \(z_{0}\), especially in the early stage of the denoising process. Formally, we implement reconstruction constraint in \(z\) space. 
The reconstruction loss can be written as: \[\mathcal{L}_{rec}=E_{\epsilon,t}\left[\left\|z_{0}^{ref}-\hat{z}_{0}\left(z_{ t}^{ref},v_{t}^{ref}\right)\right\|_{2}^{2}\right]. \tag{15}\] where \(\hat{z}_{0}\left(z_{t}^{ref},v_{t}^{ref}\right)\) refers to the estimated clean latent \(\hat{z}_{0}^{ref}\) given \(z_{t}^{ref}\) and \(v_{t}^{ref}\), using Eq. 13. ## 4 Experiments ### Implementation details Putting all components together, our full algorithm is presented in our supplementary material. The core training process consists of two parts: pivotal tuning inversion with \(x^{src}\) and multi-concept inversion with \(x^{ref}\), which can be implemented independently. For more details please refer to our supplementary material. Our experiments were conducted using a single A100 GPU. We use Adam[22] optimizer for both training processes. We collect the evaluation images from the large-scale LAION 5B dataset [38] containing 5 billion images. ### Comparison to Prior/Concurrent Work **General I2I Tasks**. Here, we evaluate the performance of the proposed framework in general I2I tasks including leopard\(\rightarrow\)dog, face swap, and mountain\(\rightarrow\)snow mountain, as shown in Fig. 5. We compare the proposed method with TuiGAN [27], PhotoWCT [26], stable diffusion (SD) [35], textual inversion (TI) [9] and prompt-to-prompt (Ptp) [12]. For text-to-image models without learned embedding input including SD and Ptp, we use BLIP image caption model [25] to extract text description as input of diffusion model. From Fig. 5, the GAN-based translation methods TuiGAN and PhotoWCT cannot well translate the concept with only one image input with poor generation quality. For example, from columns 3-4 of Fig. 5, GAN-based methods only translate part of texture features from the reference image in leopard\(\rightarrow\)dog and face swap task, and the image quality is poor in the mountain\(\rightarrow\)snow mountain task. Therefore, the GAN-based methods cannot achieve satisfactory results in the one-shot setting. For diffusion-based methods SD and TI, the concepts of the reference image can be well preserved, but the information in the content image cannot be extracted. As shown in column 7 of Fig. 5, Ptp can well preserve content but the concepts in the reference images cannot be fused. By tackling all weaknesses of the above methods, the proposed VCT can generate the best results with concepts learned and content preserved. Furthermore, to evaluate the strong concept translation ability of the proposed VCT, we keep the content image fixed and change different reference images in Fig. 6. The generation results of different reference images show satisfactory content preservation and concept translation ability. More results can be found in the supplementary material. As shown in Fig. 7, we further make comparisons to concurrent one-shot baselines: Paint-by-example[49] and ControlNet[52]. These methods use additional conditions for controlling the generated image, while our method obtains better performance. **Image Style Transfer**. In addition to general I2I, the proposed method also achieves excellent results in tasks of image style transfer. We compare our method with recent SOTAs in style transfer tasks with different art styles. As shown in Fig. 8, we totally compare with three GAN-based methods including TuiGAN [27], PhotoWCT [26] and ArtFlow [54], and three diffusion-based methods including SD [35], TI [9] and Ptp [12]. 
Following the setting of general I2I, we use the BLIP image caption model to extract text descriptions for the text-to-image models SD and Ptp. From the results in Fig. 8, large defects exist in the results generated by the GAN-based methods, especially TuiGAN and ArtFlow (columns 3 and 5 in Fig. 8). The diffusion-based methods SD and TI show the same content preservation problem as in general I2I. For Ptp, although the contents are preserved, the concept in the reference images cannot be well translated. Again, the proposed method generates the most satisfactory images, as shown in column 9 of Fig. 8. We also evaluate the model performance by keeping the reference image fixed and changing the content image, and vice versa. The results are shown in Fig. 9. The excellent translation results demonstrate the generalization of the proposed method. Following the same setting as StyTR2, we randomly choose 800 generated images from different translation tasks for quantitative comparison. We compare the proposed method with the state of the art, including ArtFlow [54], CAST [57], InST [51], StyleFormer [56] and StyTR2 [55], and the results are shown in Table 1. We use Learned Perceptual Image Patch Similarity (LPIPS) to evaluate the difference between the output and the source image, and the CLIP score to evaluate the difference between the output and the reference image. The results show that our proposed method achieves the best performance, with the lowest LPIPS and the highest CLIP score. ### Ablation Study Finally, we ablate each component of our method and show its effectiveness, including multi-concept inversion (MCI), pivotal tuning inversion (PTI), and attention control (AC). See the visual ablation studies in Fig. 10: (a) By removing MCI, where we use the word 'dog' to generate the reference embedding \(v^{ref}\) in our pipeline, the generated result is not the specific dog in the reference image. (b) Without PTI, the content matching branch cannot reconstruct the content image, due to the inconsistent DDIM sampling trajectory. (c) By removing AC, the result cannot retain the structure of the content image. Overall, using all of the proposed components yields the best generation outputs, which better preserve the structure and semantic layout of the content image while complying with the reference image. Further ablations can be found in the supplementary material. Figure 8: Model performance in the style translation task. The first and second columns show the reference and content images, respectively. Columns 3-7 show the translation results of different methods. Figure 10: Visualization of the ablation study. Figure 9: Content variance and style variance of the style transfer results. ## 5 Conclusion In this work, motivated by the importance of visual concepts in our daily life, we tackle general I2I with image guidance by proposing a novel framework named VCT. It can preserve the content of the source image and translate the visual concepts guided by a single reference image. We evaluate the proposed model on a wide range of general image-to-image translation tasks with excellent results.
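As a concrete illustration of the \(\epsilon\)-space fusion (Eq. 8) and the dual-stream DDIM sampling of Sec. 3.3, the following is a minimal PyTorch-style sketch. The noise predictor `eps_theta` is a stub standing in for the LDM UNet, the per-step source embeddings and guidance scale are placeholders, and the attention-map injection of Eq. 11 is only indicated by a comment; this is a sketch under those assumptions, not the released VCT implementation.

```python
import torch

# Stub noise predictor: in the real system this is the LDM UNet eps_theta(z_t, t, v).
# It is stubbed here so that the sketch stays self-contained and runnable.
def eps_theta(z_t, t, v):
    return torch.zeros_like(z_t)

def fused_eps(z_t, t, v_a, v_b, w):
    """Eq. (8)/(10): extrapolate two conditional noise predictions in epsilon space."""
    return w * eps_theta(z_t, t, v_a) + (1.0 - w) * eps_theta(z_t, t, v_b)

def ddim_step(z_t, eps, a_t, a_prev):
    """Deterministic DDIM update built from the predicted clean latent (Eq. (5))."""
    z0_hat = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * z0_hat + (1 - a_prev).sqrt() * eps

@torch.no_grad()
def dual_stream_sample(z_T, v_src_per_t, v_ref, v_null, alpha_bars, w=7.5):
    """Run the content-matching branch B* (reconstruction) and the main branch B
    (translation) in parallel from the same inverted noise z_T (Eq. (9)).
    alpha_bars: 1-D tensor of cumulative noise-schedule products with alpha_bars[0] = 1."""
    z_star, z = z_T.clone(), z_T.clone()
    T = len(alpha_bars) - 1
    for t in range(T, 0, -1):
        a_t, a_prev = alpha_bars[t], alpha_bars[t - 1]
        v_src = v_src_per_t[t]  # per-step source embedding from pivotal tuning inversion
        # B*: source embedding against the null-text embedding (Eq. (10)).
        z_star = ddim_step(z_star, fused_eps(z_star, t, v_src, v_null, w), a_t, a_prev)
        # B: source embedding against the reference embedding (Eq. (8)).
        # Attention maps M_t^* extracted from B* would be injected into B here (Eq. (11)).
        z = ddim_step(z, fused_eps(z, t, v_src, v_ref, w), a_t, a_prev)
    return z_star, z  # z_star reconstructs z^src; z is the translated latent z^tgt
```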
2310.06399
Lo-Hi: Practical ML Drug Discovery Benchmark
Finding new drugs is getting harder and harder. One of the hopes of drug discovery is to use machine learning models to predict molecular properties. That is why models for molecular property prediction are being developed and tested on benchmarks such as MoleculeNet. However, existing benchmarks are unrealistic and are too different from applying the models in practice. We have created a new practical \emph{Lo-Hi} benchmark consisting of two tasks: Lead Optimization (Lo) and Hit Identification (Hi), corresponding to the real drug discovery process. For the Hi task, we designed a novel molecular splitting algorithm that solves the Balanced Vertex Minimum $k$-Cut problem. We tested state-of-the-art and classic ML models, revealing which works better under practical settings. We analyzed modern benchmarks and showed that they are unrealistic and overoptimistic. Review: https://openreview.net/forum?id=H2Yb28qGLV Lo-Hi benchmark: https://github.com/SteshinSS/lohi_neurips2023 Lo-Hi splitter library: https://github.com/SteshinSS/lohi_splitter
Simon Steshin
2023-10-10T08:06:32Z
http://arxiv.org/abs/2310.06399v1
# _Lo-Hi_: Practical ML Drug Discovery Benchmark ###### Abstract Finding new drugs is getting harder and harder. One of the hopes of drug discovery is to use machine learning models to predict molecular properties. That is why models for molecular property prediction are being developed and tested on benchmarks such as MoleculeNet. However, existing benchmarks are unrealistic and are too different from applying the models in practice. We have created a new practical _Lo-Hi_ benchmark consisting of two tasks: Lead Optimization (Lo) and Hit Identification (Hi), corresponding to the real drug discovery process. For the Hi task, we designed a novel molecular splitting algorithm that solves the Balanced Vertex Minimum \(k\)-Cut problem. We tested state-of-the-art and classic ML models, revealing which works better under practical settings. We analyzed modern benchmarks and showed that they are unrealistic and overoptimistic. Review: [https://openreview.net/forum?id=H2Yb28qGLV](https://openreview.net/forum?id=H2Yb28qGLV) Lo-Hi benchmark: [https://github.com/SteshinSS/lohi_neurips2023](https://github.com/SteshinSS/lohi_neurips2023) Lo-Hi splitter library: [https://github.com/SteshinSS/lohi_splitter](https://github.com/SteshinSS/lohi_splitter) ## 1 Introduction Drug discovery is the process of identifying molecules with therapeutic properties [1]. To serve as a drug, a molecule must possess multiple properties simultaneously [2]. It must be stable [3], yet easily eliminated from the body [4], able to reach its target [5; 6; 7], non-toxic [8], cause minimal side effects [9], and therapeutically active on the target [10; 11]. To identify such molecules, researchers develop Molecular Property Prediction (MPP) models. These models are used in a virtual screening, during which the models are used to make predictions for a large number of molecules, after which the molecules with the best estimated property are selected for experimental validation [12; 13; 14; 15; 16]. To enable comparisons between various architectures, the research community relies on standardized benchmarks to evaluate model performance [17; 18; 19; 20; 21; 22; 23]. The prevailing assumption is that models with superior metrics on these benchmarks are more suitable for real-world applications. We believe this assumption is false. Modern benchmarks test in non-realistic conditions on impractical tasks. In this paper, we introduce two new practical drug discovery ML tasks -- Hit Identification and Lead Optimization -- that are often encountered in most drug discovery campaigns. We demonstrate that none of the benchmarks assess models for these tasks, which is why we propose seven new practical datasets that better imitate real-life drug discovery scenarios. To prepare Hi datasets, we designed a novel molecular splitter algorithm that solves Balanced Vertex Minimum \(k\)-Cut problem using Integer Linear Programming with heuristics. ## 2 Practical drug discovery ### Hit Identification (Hi) Drug discovery involves multiple stages. When the potential mechanism of action is known, an early step is Hit Identification. During this phase, chemists search for a "hit" - a molecule with the potential to become a drug [24]. A viable hit must exhibit some level of activity towards the target (e.g., Ki less than \(10\)\(\mu\)M) and possess novelty, meaning it is eligible for patent protection. Clinical trials can incur costs in the hundreds of millions of dollars [25; 26], making it risky to pursue non-patentable molecules. 
Consequently, companies prioritize patentable molecules from the outset. Novelty is an essential aspect of this process. In practice, medicinal chemists often pre-filter their chemical libraries before the virtual screening, eliminating molecules that exhibit a Tanimoto similarity above a specific threshold to molecules with known activity [10; 11; 16; 27; 28; 29]. ### Lead Optimization (Lo) Following Hit Identification, the next stage is Lead Optimization. Once a novel, patentable molecule with activity towards the desired target is identified, its closest analogs are also likely to exhibit activity [30]. In this phase, medicinal chemists often make minor modifications to the hit molecule to enhance its target activity, selectivity, and other properties [31; 32]. Unlike in Hit Identification, novelty usually is not a priority during Lead Optimization. Instead, the objective is to discover molecules that are similar to the original hit but possess improved characteristics. This is related to the field of goal-directed molecular generation [33], which is occasionally formulated as the problem of searching for molecules with maximal activity within the \(\varepsilon\)-neighborhood of a known hit [34; 35], within some scaffold [36], or as an optimization process in the latent space [37]. These works use predictive ML models to distinguish between minor chemical modifications [38]. The effectiveness of these models in their ability to distinguish between molecules with small modifications remains unclear. ### Model Selection In this context, Lead Optimization (Lo) and Hit Identification (Hi) represent contrasting tasks. Hi ML models are expected to demonstrate their generalization capabilities by predicting properties of molecules significantly different from the training set. Conversely, Lo ML models should predict properties of minor modifications of molecules with known activity. For selecting the appropriate ML model or adjusting its hyperparameters, it is crucial to test the model under conditions similar to its intended application. In Hi, models must identify novel molecules markedly different from known active ones, implying the models should be tested on molecules distinctly different from the training set. Conversely, Lo applies ML models to molecules resembling known active ones. The null Structure-Activity Relationship hypothesis assumes that small modifications will not alter the molecule's properties [30; 39]. Consequently, Lo ML models should be evaluated based on their ability to make predictions that surpass the null hypothesis. We show that modern benchmarks mix these two scenarios, which is why it remains unclear how effectively models can generalize to truly novel molecules and how proficiently they can distinguish minor modifications, thereby guiding molecular optimization. Furthermore, we untangle these scenarios using a novel benchmark, demonstrating that **different architectures are better suited for different tasks**. ## 3 Novelty But how to measure novelty? From a commercialization and regulatory perspective, a novel molecule is one that can be patented. Contemporary patents consist of human-written text and incorporate non-trivial substitutions, rendering the assessment of a molecule's patentability a complex task for legal professionals. To date, this process has not been automated. Additionally, during the Hit Identification, not only must the hit be patentable -- its neighborhood should be patentable as well to make Lead Optimization possible. 
When dealing with an extensive chemical library, these challenges make patentability an impractical criterion for determining novelty. While chemists agree that minor substitutions likely do not alter a molecule's function, a consensus on the distinction between a new molecule and a modified version of an existing one remains elusive. One might hope to find a general similarity threshold that would separate similar molecules with presumably the same activity from distinct molecules. This hope led to the "0.85 myth" [40], which proved to be false due to the significant variability of such thresholds across different targets [41]. About molecular similarity, it was said [42; 41], "Similarity is in the eye of the beholder." Nevertheless, a practical criterion exists. In study [43] 143 experts from regulatory authorities(FDA, FDA of Taiwan, EMA, PMDA) were shown 100 molecular pairs and asked if the molecules should be regarded as structurally similar. This study is especially notable because it simulates a real-life scenario in which a regulatory committee (EMA's Committee for Medicinal Products for Human Use) decides if a new drug is novel enough to grant it a beneficial status: orphan drug designation. The study shows that when two molecules have Tanimoto similarity \(\approx 0.4\) with ECFP4 fingerprints [44], half of the experts regard them as dissimilar, which is sufficient to conclude the novelty of the drug. See Appendix G for our reproduction. While Tanimoto similarity with ECFP4 fingerprints has its own disadvantages such as size bias [45], its simplicity makes it possible to quickly measure the similarity of molecules, so there are practical [16] and theoretical [46; 47] works that use "Tanimoto similarity < 0.4" as a novelty criterion. Because of the practical evaluation in real-life scenarios and efficiency, we assess novelty using Tanimoto similarity with ECFP4 fingerprints in our benchmark. ## 4 Our contribution * We suggest two new practical drug discovery ML tasks -- Hit Identification and Lead Optimization -- that better imitate real-life scenarios; * We designed a novel molecular splitting algorithm for the Hi task; * We propose seven new practical datasets; * We demonstrate that modern ML drug discovery benchmarks simulate impractical scenarios; * We evaluate modern and classic ML algorithms on our benchmark. ## 5 Modern benchmarks test neither Lo nor Hi To select a model or its hyperparameters, we must evaluate them under the conditions in which they will be used. We demonstrate that none of the modern benchmarks correspond to realistic conditions, thus raising questions about their suitability for evaluating practical machine learning models. Although it is impossible to examine every existing benchmark due to their sheer number, we have opted to assess a variety of benchmarks using different data, different preprocessing techniques, and originating from different authors. We will initially focus on standard benchmarks, while in the section 6, we will explore more exotic and specialized ones. We provide additional analysis in Appendix D. ### MoleculeNet: Esol MoleculeNet [17] is a widely-used benchmark for ML drug discovery, consisting of 17 datasets, including the ESOL dataset with water solubility data. This dataset is frequently used in studies for model comparisons [48; 49; 50; 51; 52; 53]. 
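The novelty criterion adopted here, and the train-test leakage percentages reported for the benchmarks below, come down to a nearest-neighbour check over ECFP4 Tanimoto similarities. A minimal RDKit sketch (function and variable names are ours; the 0.4 threshold follows the criterion discussed above):

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def ecfp4(smiles):
    """ECFP4-style bit fingerprints (Morgan, radius 2) for a list of SMILES."""
    mols = [Chem.MolFromSmiles(s) for s in smiles]
    return [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

def leaky_fraction(train_smiles, test_smiles, threshold=0.4):
    """Fraction of test molecules with a train neighbour above the Tanimoto
    threshold, i.e. test molecules that are not 'novel' with respect to train."""
    train_fps, test_fps = ecfp4(train_smiles), ecfp4(test_smiles)
    leaky = 0
    for fp in test_fps:
        sims = DataStructs.BulkTanimotoSimilarity(fp, train_fps)
        if max(sims) > threshold:
            leaky += 1
    return leaky / len(test_fps)

# Example: a Hi-style split should drive this fraction to zero.
print(leaky_fraction(["CCO", "c1ccccc1O"], ["CCN", "c1ccccc1OC"]))
```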
While the authors recommend a random train-test split, we discovered that it leads to 76% of the test molecules having a neighbor in the train with Tanimoto similarity > 0.4, making ESOL unsuitable for Hi scenario (see Fig. 3). It is also unclear how well the dataset represents the Lo scenario, as the distinct 24% of the train contributes to the evaluation, and the RMSE metric can be significantly improved over the constant baseline, even without identifying minor chemical modifications. ### MoleculeNet: Hiv In benchmarks, train and test sets are typically split using random, scaffold, or occasionally time splits when time stamps are available. It has been frequently observed that random splits can result in highly similar molecules in both train and test sets, leading to overly optimistic and impractical estimations. For example: "Random splitting, common in machine learning, is often not correct for chemical data" [17]. Thus, scaffold splitting [54] is sometimes suggested as an alternative. Scaffold splitting [54] is a method where each molecule is represented by a graph consisting of ring systems, linkers and side chains. A group of molecules may correspond to a single graph, in which case they are assigned to the same partition. The MoleculeNet Hiv dataset (ubiquitous in evaluation [55; 56; 57; 58; 59]) recommends using scaffold splitting, as it is believed to better reflect the process of discovering new molecules: "As we are more interested in discovering new categories of HIV inhibitors, scaffold splitting [...] is recommended" [17]. Although scaffold splitting makes the train set more distinct from the test set, it is still insufficient for the scenario of finding novel molecules. We discovered that 56% of Hiv test molecules have a train neighbor with a Tanimoto similarity > 0.4, indicating the dataset is unsuitable for the Hi scenario (see Fig. 3). Additionally, the dataset is unsuitable for the Lo scenario due to its binary label, rather than a continuous value. ### Therapeutic Data Commons: TDC.CYP2D6_Veith The Therapeutic Data Commons [18] is a novel platform designed for evaluating machine learning models in drug discovery. We examined the largest dataset, TDC.CYP2D6_Veith, within the Single-Instance Learning Tasks category. Although a scaffold split is recommended, the dataset is deemed unsuitable for the Hi scenario, as 78% of test molecules possess a training neighbor exhibiting a Tanimoto similarity greater than 0.4 (see Fig. 3). Additionally, the dataset is unsuitable for the Lo scenario due to its binary label, rather than a continuous value. Figure 3: Common benchmarks contain highly similar molecules across different splits. ### MolData MolData[19] is a recent benchmark for MPP based on PubChem data. The authors themselves analyzed the novelty of the molecules within the subset and found their scaffold split to be unrealistic: "[...] more than 44% of the molecules within the MolData dataset have at least one other similar molecule to them with a Tanimoto Coefficient of 0.7 or higher. This high percentage of the similarity can denote lack of diversity within this portion of the dataset." The authors use scaffold split to increase the difference between the train and the test, but we found that at least 88% of the molecules in the test have a similar molecule in the train with a Tanimoto similarity > 0.4, making the dataset unsuitable for the Hi scenario (see Fig. 3). 
Additionally, the dataset is unsuitable for the Lo scenario due to its binary label, rather than a continuous value. ## 6 Related works ### Out-of-distribution MPP In the Hi scenario, the training and test sets are significantly different to simulate the search for new molecules, making the Hi scenario interpretable as an Out-Of-Distribution (OOD) task. Although OOD benchmarks already exist for machine learning-based drug discovery, they do not align with practical drug discovery applications. DrugOOD[20] is a drug discovery benchmark dedicated to Out-Of-Distribution prediction. The authors accurately observe that "In the field of AI-aided drug discovery, the problem of distribution shift [...] is ubiquitous", and therefore suggest assay-based, scaffold-based, or molecular size-based partitions to create a distribution shift. However, their benchmark does not correspond to practical drug discovery, as it involves predicting average activity for all ChEMBL targets simultaneously in ligand-based drug discovery datasets. This approach is impractical because, in reality, researchers are not interested in average activity across ChEMBL targets, but rather in activity on a specific target. GOOD[21] is a benchmark explicitly designed to separate covariate and concept shifts. We examined the GOOD-HIV dataset, as it is a MPP benchmark containing a significant amount of diverse data (see the #Circles in Appendix A). In all Out-Of-Distribution test partitions, more than 40% of molecules have a neighbor in the training set at a Tanimoto similarity more than 0.4, making the dataset unsuitable for the Hi scenario. During the NeurIPS review period, Valence Labs and Laval University jointly published work -- independent from ours -- investigating Molecular-Out-Of-Distribution generalization [60]. Their research is akin to our Hi-scenario, but the authors studied different data splits, employed different models, and investigated uncertainty calibration. ### Activity cliff Activity cliff is a pair of structurally similar compounds that are active against the same bio-target but significantly different in binding potency [61; 22]. Recently, several remarkable studies have emerged, seeking to establish a standard benchmark for Activity Cliff Prediction [22; 23]. These works have inspired us to create our own benchmark. While Activity Cliff Prediction may be advantageous in the Lo scenario, it is inadequate for guiding molecule optimization. In the Lo scenario, it is essential not only to identify extreme activity cliffs with a 10x difference in activity but also to predict minor activity fluctuations. We propose alternative splits and metrics in the Lo scenario. ## 7 Results ### Hi-splitter To simulate the Hi scenario, it is necessary to divide the dataset into training and testing subsets such that any pair of molecules from different partitions has a ECFP4 Tanimoto similarity of less than \(0.4\). Despite the fact that the scaffold splitter produces a more diverse division than random splitter, we observe that it is inadequate for an effective Hi scenario. A greedy approach is to implement the conventional scaffold split and discard from the test set any molecules excessively similar to those in the training set. The issue with this method, however, is that data points are costly, and it is desirable to minimize the number of discarded molecules. Consequently, we have developed a novel algorithm for strict dataset splitting, which discards fewer molecules than the greedy algorithm. 
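For reference, the greedy baseline just described amounts to a conventional scaffold split followed by discarding test molecules that still have a close train neighbour. A minimal RDKit sketch (the split heuristic and names are illustrative, not the exact Lo-Hi splitter code):

```python
from collections import defaultdict
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs
from rdkit.Chem.Scaffolds import MurckoScaffold

def _ecfp4(smiles):
    return [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
            for s in smiles]

def greedy_hi_split(smiles, test_ratio=0.1, threshold=0.4):
    """Scaffold split, then drop test molecules with a train neighbour above the
    Tanimoto threshold. Returns (train, kept_test, discarded)."""
    by_scaffold = defaultdict(list)
    for s in smiles:
        by_scaffold[MurckoScaffold.MurckoScaffoldSmiles(smiles=s)].append(s)
    # Fill the train set with the largest scaffold groups first, as in scaffold splitting.
    train, test = [], []
    target_train = (1.0 - test_ratio) * len(smiles)
    for group in sorted(by_scaffold.values(), key=len, reverse=True):
        (train if len(train) < target_train else test).extend(group)
    # Greedy step: discard any test molecule too similar to the train set.
    train_fps = _ecfp4(train)
    kept, discarded = [], []
    for s, fp in zip(test, _ecfp4(test)):
        if max(DataStructs.BulkTanimotoSimilarity(fp, train_fps)) > threshold:
            discarded.append(s)
        else:
            kept.append(s)
    return train, kept, discarded
```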
Let us consider molecules \(X=\{x_{1},...,x_{n}\}\). We construct a neighborhood graph \(G=(V,E)\), where each molecule \(x_{i}\) corresponds to a vertex \(v_{i}\in V\). Two vertices are connected by an edge if and only if the associated molecules have a similarity to each other greater than threshold \(t\): \(e_{vu}\in E\Leftrightarrow T(x_{v},x_{u})>t\), where \(T\) is Tanimoto Similarity with ECFP4 fingerprints. In such a graph, connected components can be assigned to the training or testing sets independently. However, in practice, 95% of the molecules belong to a single connectivity component. Our goal is to remove the minimum number of vertices such that the giant component breaks up into multiple components with size constraints, thereby enabling us to distribute them between the training and testing sets. This problem is known as the Balanced Vertex Minimum \(k\)-Cut and has been extensively researched in literature [62, 63, 64]. A review of similar problems can be found elsewhere [65, 66]. Similar to [62] we formulate the Integer Linear Programming formulation. Let \(K\) denote the set of integers \(\{1,2,\ldots k\}\). Let \(G=(V,E)\) represent a simple connected graph that we are going to split: \[V=\cup_{i=1}^{k}V_{i}\cup V_{0}\qquad\forall i\neq j\quad V_{i}\cap V_{j}=\emptyset\] For all vertices \(v\in V\) and for all integers \(i\in K\), let us associate a binary indicator \(y_{v}^{i}\) such that: \[y_{v}^{i}=\begin{cases}1,&\text{if }v\in V_{i}\\ 0,&\text{otherwise}\end{cases}\] Note that \(V_{0}\) denotes the set of removed vertices, so if \(\sum_{i\in K}y_{v}^{i}=0\) then the vertex \(v\) is in \(V_{0}\). Let \(b_{i}\in\mathbb{N}\) be a lower bound on the cardinality \(|V_{i}|\) of partition \(V_{i}\). We need it to get partitions with size constraints, e.g. 80% in train and 20% in test. Let \(w_{v}\) be the weights of the nodes. In simple formulation we take \(w_{v}=1\). We formulate the Balanced Vertex Minimum \(k\)-Cut as follows: \[\max\sum_{i\in K}\sum_{v\in V}w_{v}y_{v}^{i} \tag{1}\] \[\sum_{i\in K}y_{v}^{i}\leq 1\qquad\forall v\in V \tag{2}\] \[y_{u}^{i}+y_{v}^{j}\leq 1\qquad\forall i\neq j\in K,\forall e_{uv}\in E \tag{3}\] \[\sum_{v\in V}y_{v}^{i}\geq b_{i}\qquad\forall i\in K \tag{4}\] \[y_{v}^{i}\in\{0,1\}\qquad\forall i\in K,\forall v\in V \tag{5}\] Equation (1) minimizes the weight of removed molecules \(V_{0}\). Equation (2) states that each vertex \(v\) should be in one partition maximum. Equation (3) ensures there is no connectivity between different partitions, so for each edge \(e_{uv}\in E\) if \(u\) is in \(V_{i}\) partition, then the other vertex \(v\) is either in \(V_{i}\) as well, or in \(V_{0}\), meaning it was removed. Equation (4) puts constraints on the size of the partitions \(V_{i}\). While this formulation was effective on small-scale graphs (around 100 vertices), we found it too slow for a real DRD2 activity dataset with 6k molecules. To our knowledge, existing literature [62, 65, 67, 68] typically involves small graphs from the standard benchmarks with a maximum node count of several hundred. To expedite computation, we implemented a graph coarsening approach. The basic idea is outlined here, while the formal algorithm is detailed in Appendix E. We initially performed Butina clustering [69] on molecules and created a coarse graph wherein the vertices correspond not to individual molecules but to clusters of molecules. 
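A compact sketch of this coarsening step and of the ILP (1)-(5) applied to the coarse graph is given below, using RDKit's Butina clustering and the PuLP CBC solver; the clustering threshold, partition bounds, and names are illustrative (the released Lo-Hi splitter library is the reference implementation), and the size constraint (4) is written here on total vertex weight rather than vertex count, since, as described next, each coarse vertex is weighted by the size of its cluster.

```python
import itertools
import pulp
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs
from rdkit.ML.Cluster import Butina

def coarse_graph(smiles, cluster_sim=0.6, edge_sim=0.4):
    """Butina-cluster the molecules, then connect two clusters whenever any
    cross-cluster pair exceeds the Tanimoto similarity edge_sim."""
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in smiles]
    dists = []  # condensed lower-triangular distance list expected by Butina
    for i in range(1, len(fps)):
        sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
        dists.extend(1.0 - s for s in sims)
    clusters = Butina.ClusterData(dists, len(fps), 1.0 - cluster_sim, isDistData=True)
    weights = [len(c) for c in clusters]  # w_v = number of molecules in the cluster
    edges = set()
    for a, b in itertools.combinations(range(len(clusters)), 2):
        if any(DataStructs.TanimotoSimilarity(fps[i], fps[j]) > edge_sim
               for i in clusters[a] for j in clusters[b]):
            edges.add((a, b))
    return clusters, weights, edges

def balanced_vertex_k_cut(weights, edges, k=2, min_frac=(0.7, 0.2)):
    """ILP (1)-(5): maximise the kept weight, forbid edges across partitions,
    and require each partition to hold at least min_frac[i] of the total weight."""
    V, total = range(len(weights)), sum(weights)
    prob = pulp.LpProblem("hi_split", pulp.LpMaximize)
    y = pulp.LpVariable.dicts("y", (V, range(k)), cat="Binary")           # (5)
    prob += pulp.lpSum(weights[v] * y[v][i] for v in V for i in range(k))  # (1)
    for v in V:                                                            # (2)
        prob += pulp.lpSum(y[v][i] for i in range(k)) <= 1
    for u, v in edges:                                                     # (3)
        for i in range(k):
            for j in range(k):
                if i != j:
                    prob += y[u][i] + y[v][j] <= 1
    for i in range(k):                                                     # (4), weighted
        prob += pulp.lpSum(weights[v] * y[v][i] for v in V) >= min_frac[i] * total
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[v for v in V if y[v][i].value() and y[v][i].value() > 0.5] for i in range(k)]
```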
In the coarsened graph, each vertex is assigned a weight \(w_{v}\) equal to the number of molecules in the respective cluster. This approach accelerated computations and enabled us to partition the HIV dataset consisting of 40k molecules, removing fewer vertices than would be the case with the greedy approach (See Table 1). ### _Lo-Hi_ benchmark We conducted an analysis of numerous drug discovery benchmarks, yet none seemed to align with the actual drug discovery process. Consequently, we prepared seven datasets and are now making them available to the community. We selected datasets that represent realistic drug discovery problems, contain substantial amounts of qualitative data, and cover a diverse chemical space. Diversity was assessed using the recently introduced #Circles metric [70]. As the source code was unavailable at the time of writing, we employed our own implementation of the greedy algorithm outlined in [70]: Appendix H. We provide additional statistics for the original datasets in the Appendix A. We propose three distinct train-test splits for each dataset. We advise adjusting hyperparameters solely on the first split, applying the same hyperparameters to train and assess models on splits #2 and #3, and comparing models by averaging metrics across the splits. Datasets are released under the MIT license. #### 7.2.1 Hi In the Hit Identification scenario, we aim to predict binary labels for new molecules that significantly differ from those in the training set. We prepared four datasets which we divided into training and testing sets, ensuring that the Tanimoto similarity between any molecule in the test set and those in the training set is less than 0.4. See Fig. 1. Additionally, we show that such a split predicts experimental outcomes better than the scaffold split (Appendix F). DRD2-Hi involves predicting dopamine receptor inhibition, a GPCR target of therapeutic importance in schizophrenia [71; 72] and Parkinson's disease [73; 74]. To create this dataset, we obtained Ki data for DRD2 from ChEMBL30 [75], cleaned it (see Appendix A), and binarized it so that molecules with Ki < 10uM are considered active. HIV-Hi is an HIV dataset from the Drug Therapeutics Program AIDS Antiviral Screen that measures the inhibition of HIV replication. We obtained the prepared dataset from MoleculeNet [17]. DRD2-Hi and HIV-Hi are large. However, real-life data often comes in limited quantities. To simulate this crucial scenario, we prepared a smaller challenging dataset, KDR-Hi. This dataset is based on the ChEMBL30 IC50 data associated with vascular endothelial growth factor receptor 2, a kinase target for cancer treatment [76]. Its creation process was similar to that of DRD2-Hi, but we restricted the training folds to just 500 molecules. The Sol-Hi dataset draws from a public solubility dataset at Biogen [77]. We binarized this data such that molecules with a solubility of less than 10 ug/mL were assigned a positive label. Each dataset was divided into distinct training and testing sets. We used the Hi-splitter with \(k=3\) to obtain three highly dissimilar subsets: \(\{F_{1},F_{2},F_{3}\}\). These subsets were then combined to create three distinct folds, each with a unique test set. For instance, for the first fold, the training set was \(train_{1}=\{F_{1},F_{2}\}\) and the test set was \(test_{1}=\{F_{3}\}\). 
This methodology enabled us to assess the variability in quality resulting from using different data with the same models.1 Footnote 1: The DRD2–Hi preparation code can be found at notebooks/data/03_split_drd2_hi.ipynb. Hi MetricFor our benchmark, we have selected the PR AUC. As a simple binary classification metric without parameters, it is implemented in most libraries and normalized to a range of [0, 1]. \begin{table} \begin{tabular}{c c c} \hline \hline Method & DRD2 & HIV \\ \hline Greedy & 1066 (17.0\%) & 5851 (14.2\%) \\ Hi-Splitter & 97 (1.5\%) & 1598 (3.8\%) \\ \end{tabular} \end{table} Table 1: Number of removed molecules for 0.9:0.1 split The PR AUC favors early recognition models [78] and does not appeal to wrong intuition among readers in an unbalanced setting. #### 7.2.2 Lo In the Lead Optimization scenario, we aim to predict the activity of molecules that are highly similar to those in the training set. As similar molecules tend to exhibit similar activity, our focus is not on predicting binary labels, but rather on ranking, which indicates whether a modification increases the activity or not. To simulate the Lead Optimization scenario, we isolated clusters of molecules with intracluster similarity \(\geq 0.4\) and consisting of \(\geq 5\) molecules. We included them in the test dataset. In practical Lead Optimization, the activity of a given hit is already known; thus, for each cluster, we retained exactly one molecule with a similarity \(\geq 0.4\) to that cluster in the training set. See Fig. 2. We provide additional analysis in Appendix A. In order to confirm the validity of the Lo benchmark, we ensured that the intracluster variation in activity exceeds the experimental noise (See Appendix B). This step is critical, as if the variance within a cluster of similar molecules is not significantly greater than the experimental noise, it would not make sense to test models on such molecules -- under these conditions, even an ideal model would struggle to make accurate predictions. The formal pseudocode can be found in Appendix C. We assembled three datasets: DRD2-Lo, KCNH2-Lo and KDR-Lo. Both DRD2-Lo and KDR-Lo are based on the same data as their respective Hi counterparts but feature a different split between training and testing sets. KCNH2 is an ion channel that regulates heartbeat, and its inhibition can cause dangerous side effects [79]. Consequently, bioassays for KCNH2 are used as a screening method for cardiotoxicity. We extracted IC50 data from ChEMBL30, cleaned it, and divided it into training and testing sets. Each dataset was split three times using different random seeds, resulting in three folds. This method enables us to assess the variability in quality when using different data with the same models.2 Footnote 2: The DRD2-Lo preparation code can be found at notebooks/data/04_split_drd2_lo.ipynb. Lo MetricOur goal is to determine whether the models can make better predictions than assuming "the modified molecule active in the same manner as the original hit." We chose Spearman's correlation coefficient as our metric, calculated within each cluster and averaged across clusters. This metric does not rely on intracluster variation, depends solely on the ranking of molecules within the cluster, and is normalized, between minus one (ideally wrong), zero (random) and one (ideal), rendering it easily interpretable. ### Evaluation We evaluated both traditional and state-of-the-art ML models on our benchmark. 
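Before turning to the results, note that the Lo metric defined above reduces to a short computation. A minimal sketch, where `clusters` holds the true and predicted activities of each test cluster; the treatment of the undefined correlation for constant predictions is our choice, not necessarily the benchmark's:

```python
import numpy as np
from scipy.stats import spearmanr

def lo_score(clusters):
    """Mean per-cluster Spearman correlation between true and predicted activities:
    0 corresponds to random ranking, 1 to a perfect ranking within every cluster."""
    rhos = []
    for y_true, y_pred in clusters:
        rho, _ = spearmanr(y_true, y_pred)
        rhos.append(0.0 if np.isnan(rho) else rho)  # constant predictions -> undefined rho
    return float(np.mean(rhos))

# Toy example with two clusters of five analogues each.
print(lo_score([
    ([6.1, 6.4, 7.0, 5.9, 6.6], [0.2, 0.5, 0.9, 0.1, 0.6]),   # perfectly ranked cluster
    ([7.2, 7.3, 6.8, 7.9, 7.1], [0.4, 0.4, 0.4, 0.4, 0.4]),   # constant "null hypothesis" prediction
]))
```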
We meticulously executed the hyperparameter search, following the procedure outlined in Appendix H. Results are presented in Table 2. The best scores for fingerprint and graph models are highlighted in bold. It should be noted that the mean and standard deviation were calculated not for random seeds, but within different folds. We found that the best models varied for the Hi and Lo tasks. The most effective model for the Hi task was the Chemprop graph neural network [80], aligning with its real-world success [16, 28, 29]. The second-best were gradient boosting and KNN for the small KDR-Hi dataset. Conversely, for the Lo task, the SVM model was found to be the most proficient, which corroborates previous works on activity cliffs [23]. Only for the small KDR-Lo did Chemprop outperform SVM, but even then, the performance was not satisfactory. We further provide a per-cluster Spearman distribution for SVM in Appendix I. The limited inability of Chemprop to distinguish minor modifications may be due to the limited expressivity of graph neural networks. However, this was surprising to us, considering the limited expressivity of binary fingerprints as well. ## 8 Conclusion In virtual screening, practitioners aim to discover novel molecules and filter their chemical libraries using Tanimoto similarity. Despite this common practice, there are no benchmarks that simulate this particular scenario. In molecular optimization, researchers employ predictive models to guide the optimization process in a step-by-step manner. However, it remains unclear whether these models possess the capacity to distinguish minor modifications. We identified several limitations in current drug discovery benchmarks and proposed a more realistic and practical alternative in the form of the _Lo-Hi_ benchmark. By introducing two tasks, Lead Optimization (Lo) and Hit Identification (Hi), which closely resemble real drug discovery scenarios, we created an environment to evaluate machine learning models under more representative conditions. We emphasize the importance of testing models under conditions similar to their intended application. Furthermore, the paper critically assesses existing benchmarks and related works, highlighting their inadequacies and the need for better evaluation methods. To address these issues, we suggest alternative datasets. To build them, we designed a novel molecular splitter algorithm for the Hi task. Different models proved to be better suited to different tasks. Our evaluation of both classical and modern ML models revealed Chemprop as the state-of-the-art for the Hi task, and SVM with ECFP4 fingerprints as the state-of-the-art for the Lo task. The paper's key contributions include the introduction of the _Lo-Hi_ benchmark, a comprehensive analysis of the limitations of modern ML drug discovery benchmarks, a novel molecular splitter algorithm and the evaluation of modern and classic ML algorithms on the proposed benchmark. This work sets the stage for a more accurate and reliable evaluation of machine learning models in the field of drug discovery, ultimately leading to better decision-making and improved outcomes in the search for new therapeutic compounds. ## 9 Limitations We have conducted hyperparameter tuning, although performing it thoroughly poses a challenge [83; 84; 85]. Convincing evidence supporting a particular architecture could be garnered from an open online contest with prizes, accompanied by an undisclosed test dataset. 
We faced numerous technical difficulties in executing and modifying Graphormer (see Appendix H.7). As such, we cannot definitively determine if Graphormer's failure is a consequence of its architecture or the result of improper dependency pinning by the authors. Our findings indicate that the Integer Linear Programming solution proves to be significantly more effective than the greedy approach. However, we have not explored different formulations in our study. Therefore, it is possible that more efficient methods for splitting molecular datasets could exist. We endeavored to encompass a diverse range of ligand-based drug discovery problems (GPCR inhibition, kinase inhibition, cardiotoxicity, solubility) in our benchmark. However, it is infeasible to capture every potential molecular property prediction task. We advise practitioners to use benchmark results to shortlist models, but also to test them against specific objectives. \begin{table} \begin{tabular}{l|c c c c|c c c} \hline \hline Model & DRD2-Ri & RIV-Hi & XOR-Hi & Sol-Hi & DRD2-Lo & KCHIE-Lo & KBR-Lo \\ \hline Dummy baseline & 0.6774\(\pm\)0.061 & 0.0440\(\pm\)0.014 & 0.6099\(\pm\)0.081 & 0.2154\(\pm\)0.008 & 0.0004\(\pm\)0.0 & 0.0004\(\pm\)0.0 & 0.0004\(\pm\)0.0 \\ KNN (Tanimoto distance, ECFP4) & 0.7060\(\pm\)0.074 & 0.0296\(\pm\)0.046 & **0.0484\(\pm\)0.025** & 0.0219\(\pm\)0.053 & 0.1644\(\pm\)0.014 & 0.1380\(\pm\)0.034 \\ KNN (Tanimoto distance, MCCCS) & 0.7204\(\pm\)0.042 & 0.0720\(\pm\)0.036 & 0.610\(\pm\)0.072 & 0.242\(\pm\)0.009 & 0.211\(\pm\)0.041 & 0.036\(\pm\)0.022 & 0.071\(\pm\)0.02 \\ Gradient Boosting (ECFP4) & 0.736\(\pm\)0.058 & **0.083\(\pm\)0.038** & 0.607\(\pm\)0.067 & 0.4294\(\pm\)0.006 & 0.145\(\pm\)0.052 & 0.37\(\pm\)0.003 & 0.0764\(\pm\)0.036 \\ Gradient Boosting (MACC) & **0.751\(\pm\)0.063** & 0.058\(\pm\)0.030 & 0.603\(\pm\)0.074 & **0.528\(\pm\)0.045** & 0.197\(\pm\)0.043 & 0.216\(\pm\)0.032 & 0.100\(\pm\)0.026 \\ SVM (ECFP4) & 0.677\(\pm\)0.061 & 0.0404\(\pm\)0.016 & 0.611\(\pm\)0.081 & 0.298\(\pm\)0.047 & **0.311\(\pm\)0.015** & **0.274\(\pm\)0.014** & **0.158\(\pm\)0.051** \\ SVM (MACC5) & 0.713\(\pm\)0.05 & 0.042\(\pm\)0.015 & 0.055\(\pm\)0.082 & 0.308\(\pm\)0.021 & 0.219\(\pm\)0.020 & 0.133\(\pm\)0.024 & 0.074\(\pm\)0.034 \\ MLP (ECFP4) & 0.717\(\pm\)0.063 & 0.049\(\pm\)0.019 & 0.626\(\pm\)0.074 & 0.030\(\pm\)0.017 & 0.094\(\pm\)0.059 & 0.106\(\pm\)0.040 & 0.058\(\pm\)0.030 \\ MLP (MACC5) & 0.6964\(\pm\)0.048 & 0.052\(\pm\)0.018 & 0.613\(\pm\)0.077 & 0.462\(\pm\)0.048 & 0.026\(\pm\)0.083 & 0.174\(\pm\)0.031 & 0.054\(\pm\)0.027 \\ \hline Chemprop [80] & **0.782\(\pm\)0.062** & **0.1484\(\pm\)0.114** & **0.676\(\pm\)0.026** & **0.6184\(\pm\)0.03** & **0.2986\(\pm\)0.035** & **0.375\(\pm\)0.067** & **0.161\(\pm\)0.024** \\ Graphormer [81; 82] & 0.7294\(\pm\)0.039 & 0.0964\(\pm\)0.070 & - & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 2: _Lo-Hi_ results. PR AUC for Hi and mean Spearman for Lo. Future work Our focus was on medium and large datasets, yet many small datasets contain fewer than 100 datapoints. It would be beneficial to have smaller datasets similar to [86] but tailored for Hi generalization. While our emphasis was on ligand-based drug discovery, where the goal is to predict a molecule's property, there is also structure-based drug discovery. This approach not only involves predicting molecular properties but also incorporates protein information. 
Hence, it would be advantageous to have structure-based drug discovery datasets that are divided not just by protein (or pockets [87]) similarity but also with Hi generalization across ligands. A major ongoing challenge in molecular generative models is ensuring synthesizability, meaning that generated molecules can be made in the real world. Hi splits can help test the generalizability of synthesizability models. But, it's important to remember that Lo/Hi splits assume similar molecules have similar properties. While this holds for physico-chemical attributes, this premise remains to be validated in the context of feasibility measures. ## 11 Potential harmful Consequences One of the primary concerns arises from the ability to design molecules that are unfamiliar to medical chemists and experts in the field. While the novelty of these compounds can be advantageous for pushing the boundaries of current scientific knowledge, it also raises the potential risk of misuse, especially if malicious actors were to use the system to generate harmful or toxic compounds for hostile purposes. Given their unfamiliar nature, these molecules might not immediately raise flags upon review by experts or during synthesis orders at chemical laboratories. This could open the door for the creation and dissemination of harmful compounds. ## 12 Acknowledgements This work was funded by Gleb Pobegailo.
2306.03507
"A Little is Enough": Few-Shot Quality Estimation based Corpus Filtering improves Machine Translation
Quality Estimation (QE) is the task of evaluating the quality of a translation when reference translation is not available. The goal of QE aligns with the task of corpus filtering, where we assign the quality score to the sentence pairs present in the pseudo-parallel corpus. We propose a Quality Estimation based Filtering approach to extract high-quality parallel data from the pseudo-parallel corpus. To the best of our knowledge, this is a novel adaptation of the QE framework to extract quality parallel corpus from the pseudo-parallel corpus. By training with this filtered corpus, we observe an improvement in the Machine Translation (MT) system's performance by up to 1.8 BLEU points, for English-Marathi, Chinese-English, and Hindi-Bengali language pairs, over the baseline model. The baseline model is the one that is trained on the whole pseudo-parallel corpus. Our Few-shot QE model transfer learned from the English-Marathi QE model and fine-tuned on only 500 Hindi-Bengali training instances, shows an improvement of up to 0.6 BLEU points for Hindi-Bengali language pair, compared to the baseline model. This demonstrates the promise of transfer learning in the setting under discussion. QE systems typically require in the order of (7K-25K) of training data. Our Hindi-Bengali QE is trained on only 500 instances of training that is 1/40th of the normal requirement and achieves comparable performance. All the scripts and datasets utilized in this study will be publicly available.
Akshay Batheja, Pushpak Bhattacharyya
2023-06-06T08:53:01Z
http://arxiv.org/abs/2306.03507v1
A Little is Enough": Few-Shot Quality Estimation based Corpus Filtering improves Machine Translation ###### Abstract Quality Estimation (QE) is the task of evaluating the quality of a translation when reference translation is not available. The goal of QE aligns with the task of corpus filtering, where we assign the quality score to the sentence pairs present in the pseudo-parallel corpus. We propose a Quality Estimation based Filtering approach to extract high-quality parallel data from the pseudo-parallel corpus. To the best of our knowledge, this is a novel adaptation of the QE framework to extract quality parallel corpus from the pseudo-parallel corpus. By training with this filtered corpus, we observe an improvement in the Machine Translation (MT) system's performance by up to **1.8** BLEU points, for English-Marathi, Chinese-English, and Hindi-Bengali language pairs, over the baseline model. The baseline model is the one that is trained on the whole pseudo-parallel corpus. Our Few-shot QE model transfer learned from the English-Marathi QE model and fine-tuned on only 500 Hindi-Bengali training instances, shows an improvement of up to **0.6** BLEU points for Hindi-Bengali language pair, compared to the baseline model. This demonstrates the promise of transfer learning in the setting under discussion. QE systems typically require in the order of (7K-25K) of training data. Our Hindi-Bengali QE is trained on only 500 instances of training that is \(1/40^{th}\) of the normal requirement and achieves comparable performance. All the scripts and datasets utilized in this study will be publicly available. ## 1 Introduction In recent times, Neural MT has shown excellent performance, having been trained on a large amount of parallel corpora (Dabre et al., 2020). However, not all language pairs have a substantial amount of parallel data. Hence, we have to rely on the noisy web-crawled corpora for low-resource languages. The task of **Parallel Corpus Filtering** aims to provide a scoring mechanism that helps extract good-quality parallel corpus from a noisy pseudo-parallel corpus. The task of **Quality Estimation** (QE) aims to provide a quality score for a translation when the reference translation is unavailable. We use Quality Estimation to assign the quality scores to the sentence pairs present in pseudo-parallel corpora and extract good-quality parallel sentences. We aim to improve the quality of Machine Translation for English(En)-Marathi(Mr), Hindi(Hi)-Bengali(Bn) and Chinese(Zh)-English(En) language pairs by using sentence-level QE-based corpus filtering. We observe that QE-based corpus filtering performs better than previously proposed methods. Our contributions are: 1. Adaptation of the QE framework, which is normally used for MT evaluation, to extract high-quality parallel corpus from pseudo-parallel corpus; to the best of our knowledge, this is a novel adaptation of the QE framework to extracting quality parallel corpus from the pseudo-parallel corpus. 2. Demonstrating the promise of Few-Shot QE technique to generate training data for MT; a Hindi-Bengali QE model is trained with only 500 training instances transfer learned from an English-Marathi trained QE model; the filtered parallel data using this Hindi-Bengali QE system gives **0.6** BLEU point improvement over Hi-Bn MT system trained on the pseudo-parallel corpus. 3. 
Demonstrating performance improvement of the Machine Translation systems by up to **1.8** BLEU points for English-Marathi, Hindi-Bengali and Chinese-English language pairs, over the model trained on the whole pseudo-parallel corpus. Related work ### Parallel Corpus Filtering Neural Machine Translation (NMT) is extremely _data hungry_(Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017). Recently, there has been a growing interest in the process of filtering noisy parallel corpora to enhance the data used for training machine translation systems. The Conference on Machine Translation (WMT) has organized annual Shared Tasks on Parallel Corpus Filtering (WMT 2018, WMT 2019, WMT 2020). Lu et al. (2020) proposed an approach that uses the Dual Bilingual GPT-2 model and the Dual Conditional CrossEntropy Model to evaluate the quality of the parallel corpus. Feng et al. (2020) proposed the LaBSE model, which is a multilingual sentence embedding model trained on 109 languages, including some Indic languages. Herold et al. (2022) mentioned different types of noise that can be injected in a parallel corpus and investigated whether state-of-the-art filtering models are capable of removing all the noise types proposed by Khayrallah and Koehn (2018). Most recently, Batheja and Bhattacharyya (2022) used a combination of Phrase Pair Injection and LaBSE (Feng et al., 2020) based Corpus Filtering to extract high-quality parallel data from a noisy parallel corpus. In contrast, we use QE-based filtering to extract high-quality data from noisy pseudo-parallel data. We observe that QE quality scores are superior to the LaBSE quality scores. ### Quality Estimation Quality Estimation (QE) is the task of evaluating the quality of a translation when reference translation is not available. The state-of-the-art MonoTransQuest architecture, proposed by Ranasinghe et al. (2020), builds upon XLM-R, a widely-used pretrained cross-lingual language model known for its ability to generalize to low-resource languages (Conneau et al., 2020). (Kocyigit et al., 2022) proposed a combination of multitask training, data augmentation and contrastive learning to achieve better and more robust QE in a Parallel Corpus Mining setting. The Parallel Corpus Mining task aims to detect the most similar texts in a large multilingual collection and perform sentence alignment. This motivates us to use QE in the Parallel Corpus Filtering task. ## 3 Approaches We first discuss methods to extract good-quality parallel sentences from the pseudo-parallel corpus. Then we discuss a transfer learning-based filtering approach in few-shot settings. ### LaBSE based Filtering Language Agnostic BERT Sentence Embedding model (Feng et al., 2020) is a multilingual embedding model that supports 109 languages, including some Indic languages. We generate the sentence embeddings for the source and target sides of the pseudo-parallel corpora using the LaBSE 1 model. Then, we compute the cosine similarity between the source and target sentence embeddings. After that, we extract good-quality parallel sentences based on a threshold value of the similarity scores. Footnote 1: [https://huggingface.co/sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) ### Phrase Pair Injection (PPI) with LaBSE-based Filtering Batheja and Bhattacharyya (2022) proposed a combination of Phrase Pair Injection (Sen et al., 2021) and LaBSE-based Corpus Filtering to extract high-quality parallel data from a noisy parallel corpus. 
We train a PBSMT model on the noisy pseudo-parallel corpus using the _Moses_2 decoder. Then, we extract phrase pairs with the highest translation probability. Finally, we perform LaBSE-based filtering on these phrase pairs to remove poor-quality phrase pairs. We augment these high-quality phrase pairs with LaBSE-filtered parallel sentences. Footnote 2: [http://www2.statmt.org/moses/?n=Development.GetStarted](http://www2.statmt.org/moses/?n=Development.GetStarted) ### Quality Estimation based Filtering In this approach, we train the MonoTransQuest3(Ranasinghe et al., 2020) model and use Figure 1: Quality Estimation based Filtering Pipeline it to generate the quality scores for the pseudo-parallel corpus of the corresponding language pair. Then, we extract high-quality parallel sentences from the pseudo-parallel corpus using a threshold quality score value. ### Few-shot Quality Estimation The Quality Estimation task requires human-annotated Direct Assessment scores for the corresponding language pairs. In few-shot settings, we fine-tune a pre-trained QE model for a high-resource language pair on QE data for the corresponding low-resource language pair to obtain a QE model for the low-resource language pair. ## 4 Mathematical Preliminaries LaBSE scoringLet \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\) be a pseudo-parallel corpus with \(N\) examples, where \(x_{i}\) and \(y_{i}\) represents \(i^{th}\) source and target sentence respectively. We first feed all the source sentences present in the pseudo parallel corpus as input to the LaBSE4Feng et al. (2020) model, which is a Dual encoder model with BERT-based encoding modules to obtain source sentence embeddings (\(S_{i}\)). The sentence embeddings are extracted as the l2 normalized **[CLS]** token representations from the last transformer block. Then, we feed all the target sentences as input to the LaBSE model to obtain target sentence embeddings (\(T_{i}\)). We then compute cosine similarity \((score_{i})\) between the source and the corresponding target sentence embeddings. Footnote 4: [https://huggingface.co/sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) \[S_{i}=LaBSE\left(x_{i}\right) \tag{1}\] \[T_{i}=LaBSE\left(y_{i}\right) \tag{2}\] \[score_{i}=cosine\_similarity\left(S_{i},T_{i}\right) \tag{3}\] QE scoringWe feed "\(x_{i}[SEP]y_{i}\)" as an input to the MonoTransQuest Ranasinghe et al. (2020) architecture which uses a single XLM-R model. The output of the [CLS] token is used as the input of a softmax layer that predicts the quality score \((score_{i})\) of the \(i^{th}\) sentence pair \(<x_{i},y_{i}>\). \[score_{i}=MonoTransQuest\left(x_{i},y_{i}\right) \tag{4}\] ## 5 Experimental Setup ### Dataset In all NMT experiments, we use two sets of corpus, namely, Parallel and Pseudo-Parallel corpus. The **Parallel corpus** consists of high-quality sentence pairs, while the **Pseudo-Parallel** corpus contains sentence pairs of varying quality. The En-Mr Parallel Corpus consists of the ILCI phase 1, Bible, PIB and PM-India corpus Jha (2010); Christos Christodouloupoulos (2015); Haddow and Kirefu (2020). The Zh-En Parallel corpus consists of ParaMed5 corpus. The Hi-Bn Parallel corpus is obtained from the OPUS6 corpus repository. The En-Mr and Zh-En Pseudo-Parallel Corpus consist of the Samanantar Ramesh et al. (2021) and WMT18 Zh-En7 corpus, respectively. The Hi-Bn Pseudo-Parallel Corpus consists of Samanantar and Tatobea Tiedemann (2020) corpus. The detailed data statistics are mentioned in table 1. 
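To make the scoring functions of Section 4 concrete, the sketch below shows how the LaBSE scores (Eqs. 1-3) and the sentence-level QE scores (Eq. 4) could be computed with the publicly available `sentence-transformers` and TransQuest packages, followed by the threshold filtering used throughout the paper. The `TransQuest/monotransquest-da-multilingual` checkpoint named here is an assumption for illustration; the paper trains its own MonoTransQuest models on the data in Table 2, so this is a sketch rather than the authors' released code.

```python
# Illustrative sketch of the two scoring functions in Section 4, not the
# authors' released code. LaBSE scoring uses sentence-transformers; QE scoring
# uses TransQuest's MonoTransQuest wrapper with an assumed public checkpoint.
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

def labse_scores(sources, targets):
    """Cosine similarity of L2-normalized LaBSE embeddings (Eqs. 1-3)."""
    labse = SentenceTransformer("sentence-transformers/LaBSE")
    s = labse.encode(sources, normalize_embeddings=True)
    t = labse.encode(targets, normalize_embeddings=True)
    return np.sum(s * t, axis=1)  # dot product of unit vectors = cosine similarity

def qe_scores(sources, targets):
    """Sentence-level quality scores from a MonoTransQuest model (Eq. 4)."""
    qe = MonoTransQuestModel(
        "xlmroberta",
        "TransQuest/monotransquest-da-multilingual",  # assumed public checkpoint
        num_labels=1,
        use_cuda=torch.cuda.is_available(),
    )
    predictions, _ = qe.predict([[s, t] for s, t in zip(sources, targets)])
    return np.asarray(predictions)

def filter_by_threshold(pairs, scores, threshold):
    """Keep sentence pairs whose quality score clears the threshold."""
    return [pair for pair, score in zip(pairs, scores) if score >= threshold]
```

The LaBSE threshold of 0.8 and the QE thresholds of -0.5, -0.4 and 0 used in Section 5.2 plug directly into `filter_by_threshold`.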
Footnote 5: [https://github.com/boxiangliu/ParaMed](https://github.com/boxiangliu/ParaMed) Footnote 6: [https://opus.nlpl.eu/](https://opus.nlpl.eu/) Footnote 7: [http://data.statmt.org/wmt18/translation-task/preprocessed/zh-en/](http://data.statmt.org/wmt18/translation-task/preprocessed/zh-en/) In QE experiments, we create a small corpus (500 instances) for Hindi-Bengali language pair that consists of human-annotated Domain Adaptation scores for each sentence pair annotated by three annotators. The pairwise Pearson Correlation between the three annotators of Hindi-Bengali QE is **0.68**, **0.61** and **0.67**. This indicates a good agreement between the three annotators. Please refer to **Appendix**A.3 for further annotation details. We use the QE data provided by Ranasinghe et al. (2020) and Deoghare and Bhattacharyya (2022) for the Zh-En and En-Mr language pairs, respectively. The detailed QE data statistics are mentioned in \begin{table} \begin{tabular}{l|l|r} \hline \hline **Corpus Name** & **Language Pairs** & **Sentence Pairs** \\ \hline \multirow{3}{*}{**Parallel Corpus**} & Hindi-Bengali & 3.6M \\ & English-Marathi & 248K \\ & Chinese-English & 62K \\ \hline \multirow{3}{*}{**Pseudo-Parallel Corpus**} & Hindi-Bengali & 6.3M \\ & English-Marathi & 3.28M \\ \cline{1-1} & Chinese-English & 24.7M \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset Statistics of Parallel and Pseudo-Parallel Corpus for the task of Neural Machine Translation \begin{table} \begin{tabular}{l|r|r|r} \hline \hline **Language** & **train** & **dev** & **test** \\ \hline English-Marathi & 26,000 & 1,000 & 1,000 \\ Chinese-English & 7,000 & 1,000 & 1,000 \\ Hindi-Bengali & 440 & 50 & 10 \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset Statistics of human-annotated z-standardized Domain Adaptation (DA) scores for the task of Quality Estimation table 2. For evaluation, we use the FLORES 101 test set which contains 1,012 sentence pairs for each language pair. ### Models We use MonoTransQuest model architecture to train the QE models. We use the Indic NLP library for preprocessing the Indic language data and Moses for preprocessing the English language data. For Indic languages, we normalize and tokenize the data. For English, we lowercase and tokenize the data. We use a Transformer based architecture provided by OpenNMT-py library to train the NMT models for all our experiments. The optimizer used was adam with betas (0.9, 0.98). The initial learning rate used was 5e-4 with the inverse square root learning rate scheduler. We use 8000 warmup updates. The dropout probability value used was 0.1 and the criterion used was label smoothed cross entropy with label smoothing of 0.1. We use a batch size of 4096 tokens. All the models were trained for 200,000 training steps. We use MonoTransquest8 model to train the sentence-level QE model. We start with a learning rate of 2e-5 and use 5% of training data for warm-up. We use early patience over ten steps. We use a batch size of eight. The model architecture is mentioned in **Appendix A.2**. **Baseline** We train the baseline NMT models on the whole pseudo-parallel corpus augmented with the parallel corpus for the corresponding language pairs. Footnote 8: [https://github.com/TharinduDR/TransQuest](https://github.com/TharinduDR/TransQuest) **LaBSE based Filtering** In this model, we use the LaBSE filtering with threshold 0.8 to extract good quality parallel sentences from the En-Mr, Hi-Bn and Zh-En pseudo-parallel corpus. 
Then we augment the parallel corpus with the LaBSE-filtered parallel sentences and train the respective NMT models. **LaBSE + PPI-LaBSE based Filtering** We extract LaBSEiltered parallel sentences and phrases from the pseudo-parallel corpus and augment them with the parallel corpora to train the respective NMT models. **Our Model, QE based Filtering** We train the sentence-level QE model from scratch for En-Mr and Zh-En language pairs using their respective training data, Table 2. We use the English-Marathi pre-trained QE model for the Hi-Bn language pair and finetune it on Hi-Bn training data, Table 2. We compute quality scores for the noisy pseudo-parallel corpora using the trained QE models. Then, we extract high-quality sentence pairs from the pseudo-parallel corpus using the threshold values of -0.5, -0.4, and 0 for En-Mr, Zh-En, and Hi-Bn language pairs, respectively. We augment the extracted high-quality sentence pairs with the parallel corpus and train the respective NMT models. \begin{table} \begin{tabular}{l r r} \hline **Technique** & **\# Sentence Pairs** & **Zh\(\rightarrow\)En** \\ \hline **QE based Filtering** & 15.09M & 8.7 \\ **LaBSE + PPI-LaBSE based Filtering** & 15.59M & 8.47 \\ **Batheja and Bhattacharyya** (2022) & & \\ **LaBSE based Filtering** & 15.57M & 8.29 \\ **Baseline** & 24.8M & 7.85 \\ \hline \end{tabular} \end{table} Table 4: BLEU scores of Zh\(\rightarrow\)En NMT models on FLORES101 test data. Here, we establish the efficacy of QE-based filtering in extracting a high-quality parallel corpus from Zh\(\rightarrow\)En pseudo-parallel corpus. For actual instances of translations please refer to Appendix A.1 \begin{table} \begin{tabular}{l r r r} \hline **Technique** & **\# Sentence Pairs** & **En\(\rightarrow\)Mr** & **Mr\(\rightarrow\)En** \\ \hline **QE based Filtering** & 2.61M & 9.4 & **17.7** \\ **LaBSE + PPI-LaBSE based Filtering** & 4.09M & **9.9** & 17.0 \\ **Batheja and Bhattacharyya** (2022) & & & \\ **LaBSE based Filtering** & 2.85M & 8.8 & 16.7 \\ **Baseline** & 3.53M & 8.8 & 15.9 \\ \hline \end{tabular} \end{table} Table 3: BLEU scores of En\(\rightarrow\)Mr and Mr\(\rightarrow\)En NMT models on FLORES101 test data. Here, we establish the efficacy of QE-based filtering in extracting a high-quality parallel corpus from En-Mr pseudo-parallel corpus. For actual instances of translations please refer to Appendix A.1. ## 6 Results and Analysis We evaluate our NMT models using BLEU [14]. We use _sacrebleu_[20] python library to calculate the BLEU scores. Table 5 shows that QE based filtering model outperforms all other models for Hi-Bn, En-Mr and Zh-En language pairs. The **QE based Filtering** model improves the MT system's performance by **0.85**, **0.6**, **1.8**, **0.37** and **0.63** BLEU points over the **baseline** model for Zh\(\rightarrow\)En, En\(\rightarrow\)Mr, Mr\(\rightarrow\)En,Hi\(\rightarrow\)Bn and Bn\(\rightarrow\)Hi, respectively. It also outperforms **LaBSE + PPI-LaBSE based Filtering** model by up to **0.7** BLEU points for Zh-En, En-Mr and Hi-Bn language pairs. The LaBSE + PPI-LaBSE based Filtering model performs better than QE based Filtering model for En\(\rightarrow\)Mr language direction. The LaBSE + PPI-LaBSE model, which is trained on nearly twice the amount of training data compared to the QE-based filtering model, can be a contributing factor to its better performance in En\(\rightarrow\)Mr. 
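As a small aside on the evaluation protocol, the following sketch shows how the reported BLEU scores and the Pearson correlations discussed in the rest of this section can be computed with the sacrebleu and scipy libraries; the file names are placeholders, not paths from the paper.

```python
# Evaluation sketch: corpus-level BLEU with sacrebleu and Pearson correlation
# between two sets of quality scores (e.g., human DA scores vs. LaBSE/QE scores).
# File paths are illustrative placeholders.
import sacrebleu
from scipy.stats import pearsonr

def corpus_bleu(hypotheses, references):
    """hypotheses, references: lists of detokenized sentences, one per segment."""
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

def score_correlation(human_scores, model_scores):
    r, _ = pearsonr(human_scores, model_scores)
    return r

if __name__ == "__main__":
    hyps = open("flores101.hyp.txt", encoding="utf-8").read().splitlines()
    refs = open("flores101.ref.txt", encoding="utf-8").read().splitlines()
    print(f"BLEU = {corpus_bleu(hyps, refs):.2f}")
```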
The improvement in the performance of the Bn\(\rightarrow\)Hi QE-based filtered MT system is comparable to the En\(\rightarrow\)Mr and Zh\(\rightarrow\)En QE-based filtered MT model. The Hi-Bn QE model is trained with only 500 training instances transfer learned from En-Mr trained QE models. This demonstrates the promise of the few-shot QE technique to generate training data for MT. We compute Pearson Correlation between human annotated quality scores and quality scores computed using LaBSE and QE, shown in Table 6. The QE quality scores have a higher correlation with human annotated quality scores, compared to LaBSE quality scores for all 3 language pairs. Table 7 shows the Pearson Correlation between LaBSE and QE quality scores for all 3 language pairs. We observe that the LaBSE quality score has a low correlation with the QE quality score and the QE quality score has a high correlation with the human annotated quality score. This establishes the superiority of QE over the LaBSE quality score. ## 7 Conclusion and Future Work We introduced a simple Quality Estimation based corpus filtering approach to extract high-quality parallel data from the noisy pseudo-parallel corpora. The takeaway from our work is that sentence-level QE-based filtering performs better than LaBSE-based filtering and helps improve the performance of NMT systems. We also show that few-shot QE models trained using a transfer learning-based approach can be used to extract good-quality parallel corpus from the pseudo-parallel corpus. Only \(1/40^{th}\) of the normal data requirement (7K-25K) of QE training data achieves comparable performance for the Hindi-Bengali language pair. We also show that the QE quality score is superior to the LaBSE quality score. In the future, we plan to use the proposed corpus filtering technique for other language pairs. This will provide us with a general overview of how this filtering technique performs for multiple languages. \begin{table} \begin{tabular}{l|c|c|c} \hline **Technique** & **En-Mr** & **Hi-Bn** & **Zh-En** \\ \hline **LaBSE** & 0.44 & 0.51 & 0.2 \\ **QE** & **0.52** & **0.53** & **0.4** \\ \hline \end{tabular} \end{table} Table 6: Pearson Correlation between human annotated quality scores and quality scores computed using LaBSE and QE \begin{table} \begin{tabular}{c|c|c} \hline **En-Mr** & **Hi-Bn** & **Zh-En** \\ \hline 0.5 & 0.37 & 0.28 \\ \hline \end{tabular} \end{table} Table 7: Pearson Correlation between LaBSE and QE quality scores computed on the pseudo-parallel corpus for En-Mr, Hi-Bn and Zh-En language pairs respectively \begin{table} \begin{tabular}{l c|c} \hline **Technique** & **\# Sentence Pairs** & **Hi\(\rightarrow\)Bn** & **Bn\(\rightarrow\)Hi** \\ \hline **QE based Filtering** & 7.77M & 13.28 & 21.06 \\ **LaBSE + PPI-LaBSE based Filtering** & 8.73M & 13.24 & 20.51 \\ **Batheja and Bhattacharyya** (**2022**)** & & \\ **LaBSE based Filtering** & 7.77M & 13.23 & 20.48 \\ **Baseline** & 10M & 12.91 & 20.43 \\ \hline \end{tabular} \end{table} Table 5: BLEU scores of Hi\(\rightarrow\)Bn and Bn\(\rightarrow\)En NMT models on FLORES101 test data. Here, we establish the efficacy of few-shot QE-based filtering using a pre-trained En-Mr model fine-tuned on Hi-Bn QE data to extract a high-quality parallel corpus from the Hi-Bn pseudo-parallel corpus. For actual instances of translations please refer to Appendix A.1 ## Acknowledgements We would like to thank the anonymous reviewers for their insightful feedback. 
We also express our gratitude towards Shivam Mhaskar, Sourabh Deoghare and other members of the Machine Translation group at CFILT, IIT Bombay, for their interesting and insightful comments. ## Limitations Although our primary effort in this work was to extract as much high-quality parallel data as possible, the improvement in performance is only marginal. The LaBSE and QE-based filtering experiments involve a hyper-parameter called the "threshold quality score"; to achieve optimal results, we conduct experiments with different values of this hyper-parameter. The proposed few-shot transfer learning technique requires a small amount of data that needs to be annotated by multiple annotators. ## Ethics Statement The aim of our work is to extract a high-quality parallel corpus from a noisy pseudo-parallel corpus. The datasets that we used in this work are publicly available, and we have cited the sources of all the datasets that we have used. Publicly available datasets can contain biased sentences. We have also created a dataset for Hindi-Bengali few-shot QE. We briefly discuss the annotation guidelines given to the annotators for this task in **Appendix** A.3.
2302.07296
Formation of ultracold molecules by merging optical tweezers
We demonstrate the formation of a single RbCs molecule during the merging of two optical tweezers, one containing a single Rb atom and the other a single Cs atom. Both atoms are initially predominantly in the motional ground states of their respective tweezers. We confirm molecule formation and establish the state of the molecule formed by measuring its binding energy. We find that the probability of molecule formation can be controlled by tuning the confinement of the traps during the merging process, in good agreement with coupled-channel calculations. We show that the conversion efficiency from atoms to molecules using this technique is comparable to magnetoassociation.
Daniel K. Ruttley, Alexander Guttridge, Stefan Spence, Robert C. Bird, C. Ruth Le Sueur, Jeremy M. Hutson, Simon L. Cornish
2023-02-14T19:23:31Z
http://arxiv.org/abs/2302.07296v2
# Formation of ultracold molecules by merging optical tweezers ###### Abstract We demonstrate the formation of a single RbCs molecule during the merging of two optical tweezers, one containing a single Rb atom and the other a single Cs atom. Both atoms are initially predominantly in the motional ground states of their respective tweezers. We confirm molecule formation and establish the state of the molecule formed by measuring its binding energy. We find that the probability of molecule formation can be controlled by tuning the confinement of the traps during the merging process, in good agreement with coupled-channel calculations. We show that the conversion efficiency from atoms to molecules using this technique is comparable to magnetoassociation. Arrays of molecules confined in optical potentials are a powerful platform for quantum science [1]. Full quantum state control of molecules is essential to realise the potential of this system in the domains of quantum simulation [2; 3; 4; 5; 6; 7; 8] and quantum information processing [9; 10; 11; 12; 13]. Recent progress on trapping molecules in optical tweezers and optical lattices has demonstrated the first steps toward this goal, including single-site readout of individual molecules [14], entanglement of pairs of molecules [15; 16; 17] and the assembly of molecules predominantly occupying a single motional level of an optical tweezer [18]. Ultracold molecules may be prepared in optical potentials by following an indirect approach, where the molecules are produced by associating pre-cooled pairs of atoms. This approach benefits from the wealth of techniques developed for the cooling and control of atoms and results in the formation of molecules which inherit the low temperatures of the constituent atoms. The formation of molecules often follows a two-step process where first weakly bound molecules are produced from an atomic sample using magnetoassociation [19; 20] and then the weakly bound molecules are transferred to the rovibrational ground state using stimulated Raman adiabatic passage (STIRAP) [21; 22]. Magnetoassociation exploits an avoided crossing between atomic and molecular states as a function of magnetic field and has been widely employed to convert atoms trapped in weakly confining optical potentials into molecules. When the atoms are trapped in the tightly confining potentials of optical tweezers or lattices, only atom pairs in the relative motional ground state of the optical trap can be converted into weakly bound molecules using magnetoassociation [23]. In addition, confinement-related effects arise when the harmonic confinement length approaches the value of the s-wave scattering length. Effects like elastic [24; 25] and inelastic [26; 27] confinement-induced resonances (CIRs) have been observed experimentally in a number of different systems and dimensionalities [28; 29; 30; 31; 32]. These confinement-related effects offer new ways to form molecules. Using pairs of fermions in 1D, inelastic CIRs have been used to form molecules coherently in an optical trap [33]. In addition, molecules have been formed coherently utilising spin-motion coupling in a strongly focused optical tweezer with large polarisation gradients [34]. In contrast, there has been little experimental investigation of the interactions of two particles in separate optical potentials with tuneable separation [35; 36], despite the existence of theoretical work in this area [37; 38]. 
Stock _et al._[37] predicted the existence of avoided crossings between molecular and confined-atom states at critical values of the separation of two optical potentials. They termed these trap-induced shape resonances. Figure 1(a) shows the energies of a system with two atoms in separate but overlapping traps as a function of trap separation \(\Delta z\). At large separation, the energies of the separately confined atom pairs are almost independent of \(\Delta z\). However, there can also be a molecular state that is weakly bound at \(\Delta z=0\). The energy of this state increases quadratically with \(\Delta z\) due to the tweezer potentials and there is an avoided crossing with the lowest confined-atom state at a critical separation \(\Delta z\). The strength of the avoided crossing depends on the height and width of the barrier between the atomic and molecular wells, as shown in Fig. 1(b); it is greatest when the bound state is close to threshold, corresponding to a large positive value of the s-wave scattering length \(a_{\rm s}\), and when the confinement length \(\beta_{\rm rel}\) for relative motion of the atoms is comparable to \(a_{\rm s}\). This avoided crossing offers an unexplored path to the formation of molecules by merging together two optical potentials. This was not observed in previous demonstrations of molecule formation in lattices [39; 15; 40] and tweezers [34; 41], probably because \(a_{\rm s}\ll\beta_{\rm rel}\) for the systems investigated. In this Letter, we report the observation of molecule formation through merging of two optical tweezers, one containing a single Rb atom and the other a single Cs atom. Guided by coupled-channel calculations, we elucidate the experimental conditions that are necessary for molecule formation by merging of the optical potentials, a process referred to here as _mergo_association. We demonstrate mergoassociation by using optical spectroscopy to measure the binding energy of the molecular state occupied after molecule formation. The interaction potential between Rb and Cs atoms is accurately known [42] and comparison of our measurements with coupled-channel calculations of the near-threshold bound states allows us to identify the molecular state and understand the molecule formation process. We explore the tunability of the molecule formation probability by controlling the confinement strength during merging and compare the molecule formation probability to that obtained using magnetoassociation. Finally, we confirm that mergoassociation can be performed at low magnetic fields, without any magnetic field ramps, and detect the formation of molecules using microwave spectroscopy. This demonstrates the utility of mergoassociation in systems that do not possess Feshbach resonances suitable for magnetoassociation. Our experiments begin by preparing single \({}^{87}\)Rb and \({}^{133}\)Cs atoms in hyperfine states (\(f_{\rm Rb}=1,m_{\rm Rb}=1\)) and (\(f_{\rm Cs}=3,m_{\rm Cs}=3\)) in the motional ground states of spatially separated, species-specific optical tweezers [43; 44]. These tweezers are subsequently brought together in order to form a molecule. The Rb+Cs atom pair state \((1,1)+(3,3)\) has a near-threshold bound state with binding energy \(110\pm 2\) kHz\(\times h\) at magnetic fields far from any Feshbach resonance; this corresponds to an interspecies background scattering length \(a_{\rm s}=645(60)\)\(a_{0}\) (approx. 34.1(3) nm) [42]. 
The binding energy is comparable to the energy spacing of the harmonic levels in the tweezers and therefore within the regime where Stock _et al_. [37] predicted the existence of strong avoided crossings. We have carried out coupled-channel calculations of the energy levels for pairs of atoms in separated tweezers, using the methods described in Supplemental Material [44]. Our calculations treat the individual tweezers as spherical and harmonic and neglect coupling between motions in the relative and center-of-mass coordinates. We represent the atom-atom interaction with a point-contact potential chosen to reproduce \(a_{\rm s}\). The resulting energies for \(\beta_{\rm rel}=40\) nm are shown in Fig. 1(a). For each value of \(\beta_{\rm rel}\), we locate \(\Delta z_{\rm X}\) and characterize the strength of the avoided crossing in terms of an effective matrix element \(\Omega\); this gives the strengths shown in Fig. 1(c). The value of \(\Delta z_{\rm X}\) is approximately \(\beta_{\rm rel}\sqrt{3+\beta_{\rm rel}^{2}/a_{\rm s}^{2}}\), so the crossing occurs at larger interatomic separations for weaker confinement. This leads to a reduction in tunneling through the barrier and in the strength of the avoided crossing. If the avoided crossing is sufficiently strong, it may be traversed adiabatically with a slow enough change in \(\Delta z\). This leads to the conversion of an atom pair in the ground state of relative motion into a molecular bound state. We calculate the probability of traversing the avoided crossing adiabatically using Landau-Zener theory [44]. Figure 2(a) shows the experimental sequence used to probe and dissociate molecules. A single Rb atom and a single Cs atom are prepared in species-specific tweezers which are merged together at magnetic field \(B_{\rm merge}\). The field is then ramped down to \(B_{\rm spec}\). Atom pairs can either be mergoassociated during the merging step if the tweezer confinement is sufficiently strong or magnetoassociated if the magnetic field ramp crosses a Feshbach resonance. A spectroscopy pulse of light at 1557 nm is applied before the field ramps are reversed and the traps are unmerged. Then, fluorescence imaging of the atoms is performed to determine the trap occupancy. When the spectroscopy light is resonant with a molecular transition, loss to other molecular states results in no atoms being reimaged [46; 47; 48; 49]. We identify the molecular states that have been populated during a sequence by comparing to coupled-channel calculations using the RbCs interaction potential of Takekoshi _et al._[42; 44]. Figure 2(b) shows the binding energies of RbCs molecules formed by mergoassociation, measured using the optical spectroscopy [44]. Molecules formed when merging the traps at \(B_{\rm merge}=205\) G follow the path indicated by red circles when the field is ramped to \(B_{\rm spec}\) Figure 1: (a) Energy levels for the Rb+Cs system as a function of separation between two spherically symmetric optical tweezers with confinement length \(\beta_{\rm rel}=40\) nm for relative motion. The red dashed line shows the harmonic trapping experienced by the molecule in the absence of tunnelling. An avoided crossing between the molecular and atom-pair states at \(\Delta z_{\rm X}\), expanded in the inset, allows mergoassociation. (b) Cartoon of the interaction energy as a function of interatomic distance when \(\Delta z=\Delta z_{\rm X}\). 
(c) The effective matrix element \(\Omega\) between the lowest-energy atom-pair and molecular states as a function of \(\beta_{\rm rel}\). Entry into the near-threshold bound state by mergoassociation above the Feshbach resonance at 197.08(2) G allows us to approach this resonance from the molecular side, as shown in the inset, and subsequently to occupy states not accessible in magnetoassociation experiments from this starting field [45; 48]. By mergoassociating with \(B_{\text{merge}}\) below this resonance we instead follow the path indicated by blue square points as the field is ramped. For the purposes of this investigation, we utilise the difference in energy between the states G and D at \(B_{\text{spec}}=181\) G (purple dashed line) to distinguish between molecules that have followed these two paths. Figure 3 shows how the probability of mergoassociation depends on the tweezer confinement length \(\beta_{\text{rel}}^{\text{X}}\) at the avoided crossing. For these measurements, the traps are merged at an average speed 1.4 um/ms. However, the speed \((d\Delta z/dt)_{\text{X}}\) at the avoided crossing depends strongly on the alignment of the two tweezers; simulations of the combined potential predict that \((d\Delta z/dt)_{\text{X}}=0.9^{+2.7}_{-0.4}\) um/ms [44]. During the merging step, the applied magnetic field is \(B_{\text{merge}}=205\) G. Following merging, the magnetic field is jumped down to 199 G and then ramped down to 196.8 G in 3 ms; this step magnetoassociates any remaining atom pairs in the relative motional ground state with an expected conversion efficiency greater than 99%. The magnetic field is then ramped down to \(B_{\text{spec}}=181\) G in 3 ms in order to perform spectroscopy. The frequency of the spectroscopy laser at which we observe light-induced loss allows us to determine whether the molecule occupies state G or D and hence whether it was formed by mergoassociation or magnetoassociation. The panels in Fig. 3(a)(i)-(iii) show optical spectra for strong, intermediate, and weak confinement during merging. For strong confinement we observe high occupation of state G as the majority of atom pairs in the motional ground state are mergoassociated. As the confinement is reduced, fewer atom pairs are mergeoassociated, resulting in high occupation of state D. Figure 3(b) shows the probability of excitation by the spectroscopy light as a function of \(\beta_{\text{rel}}^{\text{X}}\) during merging of the traps. The spectroscopy laser saturates the transition Figure 2: (a) Sequence for molecule formation and detection. (b) The energy of weakly bound RbCs molecules (relative to threshold) as a function of magnetic field, \(B_{\text{spec}}\). The points show the measured energies of molecules produced when merging the traps at 205 G (red circles) and below 197 G (blue squares). The purple dashed line indicates the field used for spectroscopy in Fig. 3 from states D and G. Black lines show state energies calculated from the RbCs molecular potential of Ref. [42]. The inset shows the avoided crossing below the resonance near 197 G and highlights the paths for mergoassociation (red arrow) and magnetoassociation (blue arrow) [45]. Figure 3: (a) Spectroscopic identification of molecules formed by mergoassociation (red circles) and magnetoassociation (blue squares) for confinement lengths \(\beta_{\text{rel}}^{\text{X}}\) at the avoided crossing of (i) 37.7(8) nm, (ii) 47(1) nm, and (iii) 77(2) nm. 
Light-induced loss is measured as a function of the detuning \(\Delta_{\text{thresh}}\) from the transition between the atomic threshold and \((^{3}\Pi_{1},v^{\prime}=29,J^{\prime}=1)\) at 181 G. (b) The probability of light-induced loss of molecules formed by mergoassociation (red circles) and magnetoassociation (blue squares) as a function of \(\beta_{\text{rel}}^{\text{X}}\). The theory curves show the calculated Landau-Zener probabilities scaled to match the light-induced loss. The shaded regions indicate the experimental uncertainty in the merging speed. from either G or D to determine the occupied molecular state. We clearly observe a change in the probability of mergoassociation from high to low as the confinement is reduced. The peak probability of the mergoassociation data points (red circles) and the magnetoassociation data points (blue squares) indicates that the efficiency of the two techniques is similar. The red (blue) curve shows the calculated Landau-Zener probability \(P_{\mathrm{LZ}}\)[44] of mergoassociation (no mergoassociation) for our experimental parameters, scaled to that of the light-induced loss. The shaded regions show the uncertainty in \(P_{\mathrm{LZ}}\) arising from the uncertainty in \((d\Delta z/dt)_{\mathrm{X}}\). The experimental results are in good agreement with our theoretical model, with the observed crossover point in \(\beta_{\mathrm{rel}}\) within 10% of the theoretical prediction. The agreement is surprisingly good in view of the approximations made in the model, particularly the assumption that the tweezers are spherically symmetric. In reality, our tweezers have an aspect ratio of 2:5 between the confinement lengths along the axes of approach and tweezer-light propagation. Both magnetoassociation and mergoassociation rely on the initial atom pair residing in the ground state of relative motion, and this state preparation is the current limit to our association efficiency. Finally, we verify that mergoassociation can be performed at low magnetic fields, without any magnetic field ramps. Mergoassociation is still possible in this regime, because it relies only on the presence of a bound state near threshold, which for RbCs exists over a large range of magnetic field. The experimental sequence is similar to the one described earlier, but the magnetic field is held constant at the field applied during cooling of the atoms, \(B=4.78\) G, for the entirety of the molecule formation and detection portion of the sequence. A microwave pulse of frequency \(\sim 6.84\) GHz is applied for 89 us in place of the pulse of light at 1557 nm. This pulse length approximates a \(\pi\)-pulse for the Rb atom and for the RbCs molecule in the least-bound state S [44]. Following the unmerging of the traps, a resonant "pushout" pulse is applied; this ejects any Rb atoms in the state \((f=2)\)[50] to allow state-sensitive detection of the Rb atom. The results of the microwave spectroscopy are shown in Fig. 4. The blue squares show the results for weak confinement during merging, where the probability of mergoassociation is low and we expect to prepare an atom pair. We observe only a single feature in the probability \(P_{11}\) of observing a Rb and Cs atom at the end of the sequence; this is the feature corresponding to the hyperfine transition \((1,1)\rightarrow(2,2)\) in atomic Rb. In contrast, when the tweezers are merged with stronger confinement, as shown by the red circles, we mergoassociate the atom pair to form a molecule. 
Consequently, we observe an additional feature in \(P_{11}\), detuned by 35(2) kHz; this corresponds to the molecular transition \(\mathrm{S}\rightarrow\mathrm{S^{\prime}}\) illustrated in the inset. Both features are fitted with Lorentzians as shown by solid (dashed) lines for the atomic (molecular) transition. Using the RbCs interaction potential fitted in Ref. [42], we calculate the binding energy of state \(\mathrm{S^{\prime}}\) at 4.78 G to be \(80\ \mathrm{kHz}\times h\). This value is smaller than that of \(\mathrm{S}\) (\(122\ \mathrm{kHz}\times h\)) and the calculated difference in binding energy (\(42\ \mathrm{kHz}\times h\)) is in reasonably good agreement with the experimental measurement. The atom pair is prepared in the required hyperfine states in 78(1)% of experimental runs and the relative depths of the features in Fig. 4 indicate that 46(8)% of these atom pairs are converted into molecules. In summary, we have created a trap-induced avoided crossing between atomic and molecular states and used it to create molecules during the merging of pairs of optical tweezers. The efficiency of molecule formation depends on the strength of the avoided crossing, which critically depends on the confinement length for relative motion. The avoided crossing is strongest when there is a bound state near threshold and the confinement length is comparable to the scattering length. This situation is realised for Rb+Cs at a large range of magnetic fields. We have demonstrated that the efficiency of molecule formation by mergoassociation is comparable to that of magnetoassociation in this system. This work demonstrates a new technique for the formation of molecules in systems with large interspecies interactions. It will be effective even in systems that do not possess Feshbach resonances suitable for magnetoassociation [51]. It would be interesting to test this technique using transport in an optical lattice, where the tighter confinement achievable should allow efficient conversion Figure 4: Microwave spectroscopy of RbCs molecules produced by mergoassociation at \(B=4.78\) G. The probability \(P_{11}\) of detecting both a Rb and Cs atom at the end of the sequence is shown as a function of the microwave detuning from the transition \((1,1)\rightarrow(2,2)\) in atomic Rb for strong (red circles) and weak (blue squares) confinement during merging: \(\beta_{\mathrm{rel}}^{\mathrm{X}}=39.4(9)\) nm and \(55(1)\) nm respectively. When a molecule is formed, we observe the molecular transition \(\mathrm{S}\rightarrow\mathrm{S^{\prime}}\) at detuning 35(2) kHz. The inset shows the energy-level structure of the relevant states with atomic (molecular) states indicated with solid (dashed) lines. of atom pairs into molecules for systems with moderate positive scattering lengths. This would also open up applications in neutral-atom quantum computing by using the trap-induced avoided crossing for high-fidelity two-qubit quantum logic operations [52]. Further, the observation of such features in the merging of tweezers has important ramifications for collision measurements using tweezer-confined particles. This work also demonstrates the first production of RbCs molecules in optical tweezers. These weakly bound molecules can be transferred to the rovibrational ground state using STIRAP as has been previously demonstrated for weakly confined samples of RbCs [47, 48, 49]. 
The production of ground-state molecules using these techniques prepares the molecules predominantly in the lowest motional state of the trap, which will allow the implementation of high-fidelity entangling gates between molecules [10, 11]. We thank R. V. Brooks, I. Forbes, and K. K. Roice for early experimental assistance, A. L. Tao and G. Murray for assistance with the frequency stabilisation of the spectroscopy laser, and H. J. Williams and O. Dulieu for helpful discussions. We acknowledge support from the UK Engineering and Physical Sciences Research Council (EPSRC) Grants EP/P01058X/1 and EP/V047302/1, UK Research and Innovation (UKRI) Frontier Research Grant EP/X023354/1, the Royal Society and Durham University. The data presented in this paper are available from [http://doi.org/10.15128/r19w032304n](http://doi.org/10.15128/r19w032304n).
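As a back-of-the-envelope illustration of the critical-separation estimate quoted earlier, \(\Delta z_{\rm X}\approx\beta_{\rm rel}\sqrt{3+\beta_{\rm rel}^{2}/a_{\rm s}^{2}}\), the short sketch below evaluates it for the background scattering length \(a_{\rm s}\approx 34.1\) nm and the confinement lengths explored in Fig. 3. It simply restates numbers already in the text and is not part of the coupled-channel analysis.

```python
# Back-of-envelope check of dz_X ≈ beta_rel * sqrt(3 + beta_rel**2 / a_s**2)
# with a_s ≈ 34.1 nm (background scattering length quoted in the text).
import math

A_S_NM = 34.1  # interspecies background scattering length, nm

def critical_separation_nm(beta_rel_nm, a_s_nm=A_S_NM):
    return beta_rel_nm * math.sqrt(3.0 + (beta_rel_nm / a_s_nm) ** 2)

for beta in (37.7, 40.0, 47.0, 77.0):  # confinement lengths quoted in the text, nm
    print(f"beta_rel = {beta:5.1f} nm  ->  dz_X ≈ {critical_separation_nm(beta):6.1f} nm")
```

Consistent with the discussion above, weaker confinement pushes the avoided crossing to larger separations, which suppresses tunnelling through the barrier and weakens the crossing.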
2306.15365
Herb-Drug Interactions: A Holistic Decision Support System in Healthcare
Complementary and alternative medicines are commonly used concomitantly with conventional medications, leading to adverse drug reactions and even fatalities in some cases. Furthermore, the vast number of possible herb-drug interactions prevents health professionals from remembering them or manually searching for them in a database. Decision support systems are a powerful tool that can be used to assist clinicians in making diagnostic and therapeutic decisions in patient care. Therefore, an original, hybrid decision support system was designed to identify herb-drug interactions, applying artificial intelligence techniques to identify possible new interactions. Different machine learning models will be used to strengthen the typical rule engine used in these cases. Thus, using the proposed system, the pharmacy community, people's first line of contact within the healthcare system, will be able to make better and more accurate therapeutic decisions and mitigate possible adverse events.
Andreia Martins, Eva Maia, Isabel Praça
2023-06-27T10:30:51Z
http://arxiv.org/abs/2306.15365v1
# Herb-Drug Interactions: A Holistic Decision Support System in Healthcare ###### Abstract Complementary and alternative medicine are commonly used concomitantly with conventional medications leading to adverse drug reactions and even fatality in some cases. Furthermore, the vast possibility of herb-drug interactions prevents health professionals from remembering or manually searching them in a database. Decision support systems are a powerful tool that can be used to assist clinicians in making diagnostic and therapeutic decisions in patient care. Therefore, an original and hybrid decision support system was designed to identify herb-drug interactions, applying artificial intelligence techniques to identify new possible interactions. Different machine learning models will be used to strengthen the typical rules engine used in these cases. Thus, using the proposed system, the pharmacy community, people's first line of contact within the Healthcare System, will be able to make better and more accurate therapeutic decisions and mitigate possible adverse events. Herb-drug interactions, artificial intelligence, rule-based systems, knowledge bases, machine learning, healthcare ## I Introduction Polypharmacy is an indisputable reality of the XXI century. In addition to the medications that doctors prescribe, patients can add a variety of over-the-counter herbs, supplements or even food which have a high potential to interact [1]. In addition, over the last few years, the usage of Complementary and Alternative Medicine (CAM), such as herbs and dietary supplements, has increased considerably [2]. These products, unlike conventional drugs, embrace several bioactive entities that may hold therapeutic activity. CAM is very popular in several cultures and regions, and it is known by traditional or folk medicine, such as traditional Chinese, Tibetan, Japanese kampo, Indian ayurvedic and Yunani medicine [2, 3]. Despite the benefits that CAM can bring at the therapeutic level, there are several reports in the literature of adverse events as a consequence of an Herb-Drug interaction (HDI) or a Supplement-Drug Interaction (SDI) [3]. However, contrary to what happens with Drug-Drug Interactions (DDI) which is a challenge widely studied in clinical pharmacy, HDIs and SDIs are not a major concern in the pharmaceutical community [4, 5]. Therefore, it becomes imperative to alert consumers, clinicians, pharmaceutical industries and health authorities, about the dangers of combining CAM with conventional drugs, as is already happening with drug combinations [1, 2, 3]. In medicine and healthcare, many computer systems have been developed to automatically assist clinicians and patients, such as medical experts systems [6]. An expert system (Fig. 1), a sub-domain of Artificial Intelligence (AI), aggregates the knowledge provided by an expert into a knowledge base and encodes it as a set of _if-then_ rules, with an inference or rule engine, to emulate human thought processes so that the program operates at or near the level of human experts [7]. The rule engine includes an explanation module that can show users how the system reached its conclusion. The user interface allows the user to get an answer to his question or problem in an intuitive and simplistic way. In particular, to deal with the mass of knowledge about DDIs, some works have applied this knowledge-based approach. 
For instance, an ancient study implemented an expert system to aid decision making in combination with drug therapy [8], and another investigation resulted in a micro-computer based expert system on drug interactions [9]. However, to the best of our knowledge, no other work has yet applied expert systems to the domain of HDIs [8, 9]. Furthermore, recent studies have been applying other methods of AI to deal with DDIs, such as autoencoders and weighted Support Vector Machine (SVM) [10], Recurrent Neural Network (RNN) [11], among others. This range of investigations has the potential to improve the performance of an expert system. For example, some works suggest that, healthcare professionals have the ability to pay more attention to alerts that are most clinically significant if the volume of unnecessary DDI alerts is reduced [12]. AI can help with this topic by selecting the "right" alerts to display and reducing the number of alerts healthcare workers are exposed to. Consequently, early investigations have been carried out to establish a list of high-priority DDIs for alerting purposes resorting to semi-supervised learning algorithms [10]. This volume reduction would help to handle overwhelming amounts of data in the construction of the knowledge base of expert systems. Based on previous studies and resources, ForPharmacy project [13] intends to research and develop telepharmacy solutions with particular attention to those in direct connection to pharmacovigilance and HDIs using pharmacies near patients to early detect and advise on distinct health-related risk factors. In particular, reliable tools that keep healthcare professionals up-to-date on potential HDIs are imperative. Therefore, in the context of the project, this article aims to present the conceptualization and contextualization of a Decision Support System (DSS) that will act as a critical tool to help local pharmacists to transform large amounts of clinical data into actionable knowledge to raise awareness about HDIs. This work is organized into multiple sections that can be described as follows. Section II provides an overview on current state of the art of the HDIs domain and intelligent features that have emerged to address this problem. Section III describes the proposed system. Finally, Section IV provides a summary of the main conclusions of this work and appoints future research lines. ## II Artificial Intelligence in CAM The worldwide popularity of herbal products have been incorporated into society healthcare supported by the perception that "natural" ensures safety [14, 15]. Furthermore, concomitant intake of herbal medicines and prescription drugs is a fairly common practice, particularly in patients with hypertension, diabetes, cancer, seizures, and depression [16]. As a consequence, the risk of HDI is increasingly recognized as a public health problem [16]. This problem can be accompanied by Adverse Drug Reactions (ADRs) that can lead to prolonged hospitalization and fatality in some cases [16, 17]. There are several reasons of herb-drug interactions. Fig. 2 tries to summarize the main reasons for these interactions [18, 19]. Roughly speaking, HDIs can be characterized as Pharmacodynamic (PD) or Pharmacokine (PK), considering the mechanistic pathways through which the HDIs occur, resulting in null, beneficial or toxic responses. PD interactions can occur when the constituents of herbal products have synergistic or antagonistic activity respecting to the conventional drug. 
PK interactions result from changes in the absorption, distribution, metabolism or elimination of the conventional drug by the herbal product or other dietary supplements. Drugs with a narrow therapeutic index are, especially, a major security concern relative to potential HDIs. Warfarin, the most commonly used anticoagulant, has a narrow therapeutic index and a lot of medicinal herbs and food interactions [20]. In the investigation carried out by C. Awortwe _et al._[16], it was concluded that, in most cases, patients using warfarin and/or statins (atorvastatin, simva statin and rosuvastatin) for the treatment of cardiovascular complications described interactions after the combination with herbal products such as sage, flaxseed, SJW, cranberry, goji juice, green tea and chamomile. Potential interaction of warfarin and active constituents of herbal products led to ADRs including ecchymosis, epistaxis, haematuria, hemiplegia and elevated INR [16]. However, despite the negative consequences that various HDIs can bring, others have a beneficial effect on therapy when properly prescribed to the patient [2]. Fig. 3 presents these main mechanisms of HDIs [19, 21]. In the late 1990s, after realizing the clinical importance of HDIs, the scientific community and companies began to develop databases of different HDIs using Information Technology (IT). For instance, UW Drug Interaction Database (DIDB) [22] is a commercially available HDI Database founded by Dr. Rene Levy at the University of Washington in the late 1990s. Although it was created several years ago, it is updated daily and manually validated by experts. In june 2021, DIDB contained a total of 2,539 natural products (herbal medications and food products), with 15,864 drug interaction experiments/studies [14]. In the last decade, AI technologies, especially Natural Language Processing (NLP), have been applied in building HDI databases. This computational technique is used to analyze and represent naturally occurring texts with the goal of achieving human-like language processing for several applications [23]. Thus, NLP is particularly relevant on this topic as it contributes to understand and organize large amounts of biomedical text data [24, 25]. To date, the most representative example of the application of AI for an HDI database is SUPPAI [14], developed by L. Wang _et al._, in 2019 [26]. This database provides evidence of SDIs by automatically extracting supplement information and recognize such interaction from the scientific literature. The authors applied the RoBERTa language model [27], an iteration of BERT [28], using labeled data for DDI classification. The aforementioned model, on the SDI test set, reached 82% precision, 58% recall and 68% F1-score. Unlike the DIDB, that requires a manual curation effort, SUPP.AI is updated with no manual validation once in several months [14]. Fig. 1: Basic structure of an expert system. Moreover, an intuitive function interface is equally important to make full advantage of the collected data. Without it the end user will not be able to use the information gathered. Thus, databases designed to support healthcare professionals usually have the concern of developing some key interfaces to support the user, namely keyword-based searches, providing alphabetical indexes to facilitate searches and advanced search options in order to simplify filtration of undesirable search results. 
For example, Natural Medicines Comprehensive Database (NMCD), a collection of databases, allows users to eliminate certain fields for a particular query [14]. Despite AI's ability to process a large amount of data in order to achieve greater coverage of HDI information, there are still a number of limitations in database development. Some of the limitations are related to improve the accuracy of methods to perform various NLP tasks as well as some databases have not been updated for a few years. Furthermore, due to the rule-based and probabilistic methods used in database development, developers of HDI databases should plan ahead to carry out extensive method testing before implementing them [14]. Another problem is associated to the search for articles related to HDIs. Some studies suggested that searches conducted by physicians just achieve between 31-46% of relevant articles [29]. To overcome this disadvantage, K. Lin _et al._[30] construct an automated HDI interaction PubMed-based article retrieval system. This system avoid completely the need for users to write a PubMed query, accepting simple medication and herb names as input and returning the articles that are relevant. Evaluation was based on a randomly selected set of herb-drug pairs from a previous review article [31], achieving a precision score of 93% and a sensitivity score of 92%. In [32], D. Trinh _et al._, proposed an approach for semantic relation clustering with the aim of extracting potential HDIs from the biomedical literature and, consequently, saving time in such investigations. The authors used a feature reduction method, Principal Component Analysis (PCA), to perform sparse feature reduction and applied K-means to cluster all the potential relations. The system reached 54.45% precision, 75.71% recall and 63% F-score. Several developments in the HDI domain have occurred recently, however more advanced computational methods are needed to provide a better understanding of CAM to physicians and the public in general [33]. In that regard, some methods based on AI that have been already proposed for DDIs are also promising to be applied to the context of HDIs. N. Lui _et al._, proposed a Machine Learning (ML) framework with the aim of extracting useful features from the Food and Drug Administration (FDA) adverse event reports and, afterwards, identify potential high-priority DDIs. The introduced approach combines Stacked Autoencoders (SAE) and Weighted Support Vector Machine (wSVM). The experimental results demonstrate the profitable performance of the proposed algorithm to predict high-priority DDI candidates for medication alerts as well as that features derived from adverse events contains effective information regarding severity levels of DDIs [10]. In [11], S. Lim _et al._, used a RNN to improve the performance of DDI extraction in the biomedical domain. The proposed model overcame the state-of-the-art model by 4.4% and 2.8% in the detection and classification tasks, respectively. Expert systems have also been applied in DDI context, since it is crucial to deal with the large amount of knowledge about DDIs to the individual patient. In [34], A. Mahdi _et al._, introduced an expert system that draw conclusions from more complex interactions, using cat swarm algorithm [35] to get accurate and fast results. This system can identify possible interactions between drugs, drugs and disease's case and between drugs and food along with alternative drug suggestions that can be safer. 
However, to the best of our knowledge, no other work has yet applied expert systems to the domain of HDIs [8, 9]. There are several advantages of the rule-based reasoning approach such as natural expression [36]. Fig. 2: Main reasons of herb-drug interactions. Often humans tend to express their knowledge in _if-then_ rules, similar to the way knowledge is encoded into expert systems. Another advantage is related to the fact that rules are independent pieces of knowledge and consequently, can be easily reviewed and verified by experts. Furthermore, the rules are more transparent while compared with other forms of knowledge representation, for example, those employing neural networks [37]. ## III ForPharmacy Approach Most CAM products do not require a prescription and can be purchased at pharmacies or para-pharmacies. Therefore, pharmacists are the most suitable professionals to early detect possible HDIs, educate consumers about the use of CAM and, consequently, decrease the risk of HDIs, promoting the health of general population [38]. By this means, reliable tools that keep health professionals updated and help them obtain information quickly and usefully are required. The goal of ForPharmacy project is to expand access to pharmaceutical care by providing greater patient safety and counseling. Therefore, in the scope of the project, it is intended to develop an intelligent service to provide awareness about HDIs, particularly in relation to self-medication. To accomplish the aforementioned, Fig. 4 shows a draft of the proposed system. Three different general phases can be identified as follows: * **Knowledge extraction and representation:** the information about HDIs is extracted from different sources, e.g., directly from biomedical literature or using HDIs databases. As already mentioned in Section II this can be a huge task taking into account that the information is not in a standard format. Although, AI techniques, namely NLP can be used to automate this step. * **Knowledge completion:** the extracted information needs must be standardized to be easily integrated in the expert system. * **Knowledge exploitation:** the standardized information needs to be analysed and correlated to exploit and provide new knowledge to the pharmaceutical system. These three phases are crucial to ensure the efficiency of the system. Each phase has different components and brings different challenges to the development of the intelligent service. In the following paragraphs, we briefly describe each phase. A methodology for extracting HDIs from textual data is strongly needed since the inability of non-experts to review the biomedical literature about potential HDIs is one of the causes that lead to the combination of herb-based products and prescription drugs [32]. Therefore, knowledge extraction and representation is the first phase and can be achieved through different approaches. An approach involves conducting a literature review on the subject of HDIs. Subsequently, an evidence-based approach is needed to evaluate the interactions. This evaluation must take into account several parameters, for a greater standardization of the data, such as reported by M. Chavez _et al._[2], and C. Awortwe _et al._[16]. On the other hand, and considering that manually organizing information in a database is a costly and time consuming process, text mining tools, already applied for DDIs as mentioned in Section II, can be applied to automatically extract the information. 
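Before describing the phases in detail, a minimal, self-contained sketch may help illustrate the kind of rule engine that sits at the core of the knowledge-exploitation phase: HDI knowledge encoded as if-then rules that an inference step matches against a patient's current medication and CAM list. The two rules, severity labels and explanation strings below are illustrative placeholders, not entries from the ForPharmacy knowledge base.

```python
# Minimal sketch of a rule engine for HDI alerts: knowledge is encoded as
# if-then rules and the inference step matches them against the products a
# patient is taking. Rules, severities and explanation text are illustrative
# placeholders, not curated clinical content.
from dataclasses import dataclass

@dataclass(frozen=True)
class HdiRule:
    herb: str
    drug: str
    severity: str      # e.g. "major", "moderate", "minor"
    explanation: str   # shown to the pharmacist by the explanation module

KNOWLEDGE_BASE = [
    HdiRule("st john's wort", "warfarin", "major",
            "St John's Wort may reduce the anticoagulant effect of warfarin "
            "(pharmacokinetic interaction)."),
    HdiRule("green tea", "warfarin", "moderate",
            "Reported interaction with warfarin; monitor INR."),
]

def check_interactions(herbs, drugs):
    """Return every rule whose (herb, drug) pair matches the patient's products."""
    herbs = {h.lower() for h in herbs}
    drugs = {d.lower() for d in drugs}
    return [r for r in KNOWLEDGE_BASE if r.herb in herbs and r.drug in drugs]

if __name__ == "__main__":
    for alert in check_interactions(["St John's Wort"], ["warfarin"]):
        print(f"[{alert.severity.upper()}] {alert.herb} + {alert.drug}: {alert.explanation}")
```

In the hybrid mode discussed next, rules of exactly this shape could also be proposed by the rule learner rather than entered by hand.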
Finally, the development of a data cloud is another approach to organize and integrate all the information from the existing HDI databases already mentioned in Section II. Fig. 3: Main mechanisms of herb-drug interactions. Any of these approaches is scalable and can be combined with other themes, such as DDIs and SDIs. All of these sources should be considered when gathering data for the knowledge base. Moreover, it is essential to process the information provided by the different sources and represent it in a single, standard format. In this way, it becomes possible to define standards for filling in the information, which subsequently serves as input to the expert system. This is done in the second phase, designated knowledge completion. Note that, as there are multiple sources, it is also important to perform a first analysis of the information to ensure that information from different sources is not duplicated. All the knowledge gathered from the different sources and represented in the standard format is then used by the expert system to derive and provide new knowledge. This is the "heart" of the ForPharmacy DSS and is developed in the final phase, named knowledge exploitation. The proposed expert system will work in a hybrid mode: not only will the typical rule engine be implemented, but it will also be empowered by ML models. The acquired knowledge is encoded as a set of _if-then_ rules in order to keep the identified HDIs explainable, and the rule engine runs an inference engine that processes these rules to identify possible interactions. To generate new rules from the knowledge base, different ML techniques will be implemented: the rule learner will use the best-performing ML models to generate new rules and insert them into the rule system. Several ML techniques will be studied for the rule learner; one technique that seems particularly suitable for this case is the Decision Tree, a simple and widely used supervised learning algorithm that offers a very good trade-off between performance and interpretability, favouring the latter [39]. As already mentioned in Section II, it is extremely important to ensure that the extracted knowledge is presented to the health professional in the simplest and most accessible way. Also, in more specialised industries such as healthcare, maintaining consistency with the user interface of the systems already in use is crucial so that users face no learning curve with the new system. The ForPharmacy DSS will have a GUI integrated into the proprietary pharmaceutical platform used in most Portuguese pharmacies. The interface design will be consistent with the system that pharmacists are familiar with and will allow them to select the drug and herb names and check whether there is any interaction. If a possible interaction is identified, the information provided includes explanations of what may happen when the two specific herbs and drugs are used together and why the interaction may occur. ## IV Conclusion The ForPharmacy project aims to develop technologies that enable pharmacies to offer a wider and more reliable range of healthcare services. In particular, to the best of our knowledge, pharmacies do not currently have multidisciplinary tools that alert the pharmacist to the potential risk to customers posed by HDIs. Within the scope of the ForPharmacy project, we aim to demonstrate, for the first time, a DSS for HDIs that uses AI techniques to identify new interactions.
It will work in an innovative hybrid mode: not only will the typical rule engine be implemented, but it will also be empowered by ML models. This system, which is currently under development, will therefore facilitate the daily work of pharmacists, helping them prevent possible HDIs caused by self-medication. Fig. 4: ForPharmacy approach for constructing a decision support system of herb-drug interactions. Furthermore, it should be noted that the new system is fully scalable to other subjects, such as DDIs and SDIs, which can greatly improve pharmacy systems. ## Acknowledgment The present work was done and funded under project For-Pharmacy (P2020-COMPETE-FEDER nr 070053). This work has also received funding from project UIDB/00760/2020.
2305.03057
How Crystalline is Low-Density Amorphous Ice?
Low-density amorphous ice (LDA) is one of the most common solid materials in the Universe and a key material for understanding the many famous anomalies of liquid water. Yet, despite its significance and its discovery dating nearly 90 years, the structure of LDA is debated. It is unclear if LDA is a glassy state representing a liquid or a heavily disordered crystal; indeed, two forms (LDA-I and LDA-II) have been discussed as amorphous and partially crystalline in the literature, respectively. Here, with two widely used water models, we show that the experimental structure factor of LDA is best reproduced computationally by a partially crystalline structure. Models for both LDA-I and LDA-II are highly similar, with differences only due to subtle differences in crystallinity and/or experimental error. Further support for this structural model of LDA comes from experiment: if LDA is partially crystalline, then its route to formation should result in different nanocrystallite cubicities, and thus give rise to different cubicities upon recrystallisation. This memory effect of LDA's creation route is observed and it is incompatible with a fully amorphous material. The results present a unified computational and experimental view that LDA is not fully amorphous but instead a partially crystalline material. This impacts LDA's many roles in nature and potentially our understanding of liquid water. Furthermore, the "re-identification" of such an intensely studied material highlights that great care will be needed when classifying the nature of glassy materials going forward.
Michael Benedict Davies, Alexander Rosu-Finsen, Christoph G. Salzmann, Angelos Michaelides
2023-05-03T18:53:23Z
http://arxiv.org/abs/2305.03057v2
# Low-Density Amorphous Ice Contains Crystalline Ice Grains ###### Abstract Low-density amorphous ice (LDA) is one of the most common solid materials in the Universe and influences a myriad of cosmological phenomena. In fundamental research, it is the potential key material for understanding the many famous anomalies of liquid water. Despite its significance, the structural nature of LDA remains heavily debated. It is unclear if LDA is a glassy state representing a liquid state or a heavily disordered crystal. Here we show that the experimental structure factor of LDA is best reproduced computationally by small ice grains with amorphous regions at the grain boundaries. Moreover, LDA materials made through different routes crystallise to give ice I with different stacking characteristics. These results suggest LDA is not truly amorphous but instead a partially crystalline material. Going forward, great care is needed in concluding from diffraction data if materials are truly glassy or instead partially crystalline states. ## I Introduction The matrix of life, the universal solvent, a powerful corrosive, and the strangest fluid - these are just some of the names for perhaps the most fascinating and important substance in nature: water. It is the many anomalies of water - over 70 have been compiled [1] - that facilitate its multitude of names and moreover shape the face of the Earth. Yet, despite intensive research efforts dating back to antiquity, a complete understanding of water remains a challenge. There are many physical states that water can exist in; for instance, 20 polymorphs of ice have now been created in the laboratory, [2; 3; 4; 5; 6; 7] with many more theoretically predicted ice polymorphs, [8] and under nanoscale confinement water exhibits yet more structures and a different phase diagram. [9] Of all water's forms, the most abundant in the Universe is found in the dense molecular clouds from which stars and planets are born, and as the bulk matter in comets: low-density amorphous ice (LDA). [10; 11] LDA is of great scientific interest, in part due to its involvement in many cosmological phenomena (for instance, it is key to promoting the formation of complex molecules in astrochemistry and thus potentially the origin of life [12]), but also due to its potential key role in explaining water's anomalies. Specifically, whether LDA has a corresponding liquid state is a cornerstone of the heavily debated two-state liquid model, which could account for many of water's thermodynamic anomalies. [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23] The name of LDA originates from the fact that its density of 0.94 g cm\({}^{-3}\) is lower than the 1 g cm\({}^{-3}\) of liquid water, and because it appears to be an amorphous solid according to diffraction studies. [24] However, despite the importance of LDA, there is ambiguity with respect to its structural nature with some studies pointing towards a more "crystal-like" nature than a purely glassy state. [15; 16; 18; 25; 26; 27; 28; 29; 30] Multiple experimental avenues for creating LDA have been discovered: (i) low-temperature deposition of water vapour; [31; 32; 10] (ii) hyper-quenching of micron-sized water droplets; [33; 34; 35] (iii) decompression or heating high-density amorphous ice (HDA); [36; 24] (iv) high-energy irradiation of crystalline ice; [37; 38; 39] and (v) heating ice VII or VIII at ambient pressure. [40; 41] Experimentally, LDA is identified on the basis of broad features in diffraction. 
However, discerning the exact structural nature of LDA from diffraction patterns is problematic. Amann-Winkel et al. distinguished three scenarios which are of interest for diffraction patterns of amorphous ice: a purely amorphous material, grains of crystalline ice embedded in an amorphous matrix, and a completely polycrystalline material. [42] The structural nature of LDA is further complicated by subtle differences that can be exhibited by samples depending on their preparation route (e.g. LDA-I, LDA-II and LDA(ice VIII) [43; 44]). Computer simulation can provide the atomic scale resolution needed to determine the structure of LDA. A common way to obtain LDA in simulation is to quench liquid water rapidly to avoid crystallisation. Martelli et al. quenched the TIP4P/2005 water model with a cooling rate (\(\kappa\)) of 1 K ns\({}^{-1}\) to obtain LDA samples. They searched the LDA model for crystal-ice domains and found none concluding it was indeed fully amorphous. [46] Furthermore, by quantifying the large-scale density fluctuations they showed it was nearly hyperuniform. [47] However - as discussed by Limmer and Chandler, [19] and others [48; 49; 50; 51] - \(\kappa\) is a key parameter which when varied affects the structural and energetic properties of water glasses. A range of different amorphous LDA samples is thus achievable in simulation. Evidence of multiple configurations corresponding to LDA have also been observed via decompression of HDA [52; 53] and via thermal annealing of HDA / very high-density amorphous ice (vHDA). [54] Moreover, given the suggestions of a crystal like structure in experiment, whether these fully amorphous states achievable in simulation are the best model for the LDA achieved in experiment (and detected in nature) is a key question to resolve. In this work, we aim to establish the structure of experimental LDA using computational and experimental approaches. Agreement with the structure factor, \(S(Q)\), of LDA - as measured via wide-angle X-ray scattering by Mariedahl et al. (2018)[45] - is screened across a range of computational models obtained from two diametrically opposed pathways: (i) quenching liquid water at different \(\kappa\); and (ii) constructing large-scale polycrystalline ice structures using Voronoi domains. In the former, the structure of LDA is explored starting from the disordered liquid water state, whilst in the latter it is approached using ordered crystalline ice building blocks. In both pathways, a near exact agreement with experiment is obtained by partially crystalline states consisting of isolated grains of crystalline ice I with amorphous material at the grain boundaries. Experimentally, indirect evidence of LDA's hybrid crystalline-amorphous nature is found when transforming it to stacking disordered ice I (ice I\({}_{\rm sd}\)) upon heating at ambient pressure. LDA samples from different preparation procedures yield ice I\({}_{\rm sd}\) with different stacking characteristics. This memory effect is incompatible with a fully disordered and glassy nature of LDA, and thus supports our hybrid structural model. ## II Results ### Quenching liquid water indicates that LDA is a partially crystalline state. In Fig. 1, we show the results of quenching liquid water from 300 K to 125 K at different cooling rates (\(\kappa\)). A computationally large periodic box (\(\approx 18\times 18\times 18\) nm in \(x,y,z\)) containing 192,000 mW water molecules was chosen to improve the ability to capture structural anisotropy. 
Most importantly, multiple local homogeneous nucleation events can be observed at this length scale - for further discussion on the choice of water model see _Materials & Methods_. \(\kappa\) of \(\infty\) K ns\({}^{-1}\) (an instantaneous change of temperature), 10 K ns\({}^{-1}\), 5 K ns\({}^{-1}\) and 1 K ns\({}^{-1}\) were employed. Crystalline (orange) and amorphous (blue) molecules are identified using the local \(q_{6}\) order parameter of Li et al. [55] The resulting configurations after quenching are shown in Fig. 1a, and can be split into two categories: partially crystallised states and polycrystalline (almost fully crystallised) state. The former are observed for \(\kappa\geq 5\) K ns\({}^{-1}\), and resemble multiple isolated grains of crystalline Figure 1: Quenching liquid water: the experimental diffraction data of LDA is best modelled with a partially crystalline state. (a) Snapshots of systems taken from quenching water from 300 K to 125 K at different cooling rates. “Start” indicates equilibrated water at 300 K. (b) \(S(Q)\) of the quenched systems along with experimental data from ref. 45 (black dashed line). (c) Pendry R factor (of the agreement between experiment and theory) against crystallinity of structure achieved in quenching. The proportion of hexagonal (h) and cubic (c) local environments are indicated for the best model in (a). ice with thick amorphous regions at the grain boundaries. The latter is obtained with \(\kappa\) of 1 K ns\({}^{-1}\), and resembles multiple grains of ice packed together. The formation of these structures can be explained as follows: As the temperature drops below the melting temperature of ice, the system enters the supercooled liquid state, during which local homogeneous nucleation events stochastically occur resulting in the creation of multiple isolated grains of crystalline ice embedded in liquid water. If the cooling rate is fast enough then before crystallisation can complete, the temperature drops to the point that the kinetic rearrangement of water molecules is extremely limited: the configuration of a partially crystallised state is thus frozen in. The faster the cooling the shorter the time window for nucleation and crystal growth, resulting in a greater proportion of molecules in amorphous environments - as seen in Fig. 1a. Conversely, for slower cooling (e.g. \(\kappa=1\) K ns\({}^{-1}\)) crystallisation can complete (apart from the amorphous molecules present at the grain boundaries) resulting in polycrystalline ice. In Fig. 1b the structure factors of the quenched systems are shown. The progressing crystalline nature of the configurations is indicated by a sharpening of the diffraction features. The best agreement with experiment is observed for the partially crystalline state with 19 % crystalline ice present in numerous and diversely sized grains. The proportion of cubic (_c_) local environments in ice I - termed the "cubicity" - in the ice grains is 48 %, which is close to the expected 50 % cubicity obtained in homogeneous ice nucleation.[56, 57, 58, 59, 60] Interestingly, agreement with experiment is observed to be a "goldilocks" scenario with the best model being not too amorphous and not too crystalline. The key effect of the crystallinity in these partially crystalline states is seen with respect to the prominence of the first peak and trough in \(S(Q)\) which reflects more long-range structural characteristics compared to higher \(Q\) features. 
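To make the comparison between model and experimental diffraction data concrete, the sketch below computes a structure factor from a set of coordinates via the Debye scattering equation and scores it against a reference curve with a simple normalised mismatch. It is only illustrative: atomic form factors are ignored (a single-species, mW-like picture), the coordinates, Q-grid and "experimental" curve are placeholders, and the simple mismatch stands in for the Pendry R-factor actually used in this work (defined in the Materials & Methods).

```python
# Illustrative sketch: Debye scattering equation for S(Q) and a crude
# model-vs-experiment mismatch. Positions, Q-grid and the reference curve
# are placeholders, not the data analysed in this work.
import numpy as np

def debye_sq(positions, q_values):
    """S(Q) = 1 + (2/N) * sum_{i<j} sin(Q r_ij) / (Q r_ij), single species, no form factors."""
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(n, k=1)                 # unique pairs i < j
    r_ij = r[iu]
    sq = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        x = q * r_ij
        sq[k] = 1.0 + 2.0 / n * np.sum(np.sinc(x / np.pi))   # np.sinc(y) = sin(pi*y)/(pi*y)
    return sq

def mismatch(model, experiment):
    """Simple normalised squared mismatch (a stand-in for the Pendry R-factor)."""
    return np.sum((model - experiment) ** 2) / np.sum(experiment ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = np.linspace(0.65, 23.0, 200)                   # 1/Angstrom, the experimental range
    toy_positions = rng.uniform(0.0, 20.0, (200, 3))   # placeholder coordinates (Angstrom)
    s_model = debye_sq(toy_positions, q)
    s_exp = np.ones_like(q)                            # placeholder reference curve
    print("mismatch:", mismatch(s_model, s_exp))
```

The actual calculations in this work additionally apply a half-box cutoff and a sinc damping window to suppress finite-size termination ripples, and the agreement is quantified with the Pendry R-factor rather than a least-squares score, as described next.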
To quantify the agreement between the computational models and experiment the Pendry R-factor (\(R\)) was employed[61] over the entire experimental \(q\) range of \(0.65-23\) A. \(R\) was developed and is widely used in low-energy electron diffraction (LEED) to quantify the agreement of computational models with experimental data (see _Materials & Methods_). A value of \(R=0\) represents perfect agreement whereas values of \(\approx 0.1\) are taken as excellent, \(\approx 0.2\) as good, \(\approx 0.3\) as mediocre, and \(\approx 0.5\) as bad.[62] Fig. 1c shows that the three partially crystallised states have excellent agreement with experiment. This agreement improves as the proportion of molecules in crystalline environments increases from 6.4 to 13 to 19 %. The best model for LDA is the system containing 19 % crystalline ice, which has an \(R\) value of 0.061 which indicates an extremely high agreement. The ranking given by \(R\) is shown to qualitatively agree with the coefficient of determination in Fig. S1. Fig. S2 shows \(S(Q)\) for the optimal LDA model and experimental data over the full \(0.65-23\) A range. The nucleation rate, which has a per unit time and per unit volume dependence, can be used to physically understand the states achieved from quenching. The former dependency can explain the effect of \(\kappa\) seen in Fig. 1. The latter is captured by system size. Fig. S3 shows that smaller systems give poorer matches to LDA as a result of the reduced length scale impairing the ability to capture polycrystalline structures. Moreover, different water models exhibit different nucleation rates and thus give different states when subject to the same quenching procedure. This is shown in Fig. S4 by employing the atomistic TIP4P/2005 water model.[63] This model's slower nucleation rate and dynamics results in more highly amorphous states. A poorer match to LDA is found for these more amorphous states, and notably it improves as the system gets closer to a partially crystalline state. This further underpins that experimental LDA is not fully amorphous but instead partially crystalline. ### Models of polycrystalline ice indicate LDA is a partially crystalline state. A second pathway to achieve an amorphous-like \(S(Q)\) is through creating highly polycrystalline ice: the random arrangement of many ice grains can broaden the diffraction features of crystalline samples. To this end, periodic polycrystalline structures containing approximately 320,000 mW molecules (\(\approx 21\times 21\times 21\) nm in \(x,y,z\)) with desired levels of polycrystallinity (number of grains) and cubicity were built using Voronoi domains. These were subsequently geometry optimised and annealed at 125 K (see For uHDA and eHDA the "grandparent" phases are also given in brackets. uHDA(I\({}_{\rm h}\))* was created using a different experimental procedure to the other uHDA samples (see _Materials & Methods_). A grid search was set up as illustrated in Fig. 2. Cubicities only refer to the subset of molecules in crystalline environments in the system and are not indicative of the total proportion of molecules that are in crystalline environments. The approximate mean diameters of the grains for each level of polycrystallinity are shown in Fig. S5. Fig. 2a shows a heatmap of the \(R\) values of the polycrystalline models against the experimental LDA \(S(Q)\). A strong dependence of the agreement is seen with polycrystallinity. Fig. 
2b shows this is due to the increasing amorphous nature of the diffraction patterns with increasing polycrystallinity. As with the quenching simulations, the "goldilocks" scenario is again observed. The optimal match is found for the 2,500 grain systems which have an optimal \(R\) value of 0.068. Both lower and higher levels of polycrystallinity (and thus proportion of crystalline versus amorphous) result in poorer matches to experiment. The amorphous regions arise due to the lattice mismatches between the various Voronoi domains. A large number of small domains leads therefore to more amorphous regions. Fig. 2c shows the structure of the best fit. Again, the structure is a partially crystallised state with isolated grains of crystalline ice surrounded by amorphous "filler" regions. This is due to ice being a "soft" molecular material which results in quite thick amorphous regions at the grain boundaries. At this high level of polycrystallinity the system is 25 % crystalline and 75 % amorphous. It thus no longer resembles a "traditional" polycrystalline system - the type highlighted by Amann-Winkel et al. to be of interest but not a probable structure for amorphous iecs [42] - which have sharp interfaces at the grain boundaries and are therefore mainly crystalline. A weak, but noticeable, dependency on the match to experiment is seen for the level of cubicity in Fig. 2a with a best match around 30 % cubicity. The level of cubicity affects \(S(Q)\) and this is more pronounced for highly crystalline systems, resulting in the stronger dependency observed there. It is interesting to note that the two different approaches - building polycrystalline ice and quenching liquid water - converge to similar structural models of LDA. This gives greater confidence that experimental LDA is indeed partially crystalline. It is also interesting to note the slightly higher crystallinity for the LDA model found from the polycrystalline (25 %) versus the quenched systems (19 %). This difference is due to different distribution of crystalline ice grains in each model. Quenching produces crystalline ice through homogeneous ice nucleation, which is a stochastic process. The polycrystalline structures are similarly built in a stochastic manner, having defined numbers of randomly positioned and orientated nucleation sites which then grow isotropically at the same speed until they impinge on the neighbouring domains. However, the resulting grains of crystalline ice are more uniform in size than achieved through quenching. For randomly orientated grains, a small number of large grains gives a more structured \(S(Q)\) than the same amount of ice broken into many smaller domains. This results in the relatively small difference in crystallinity observed between the two LDA models, which can be taken as approximate upper and lower bounds for Figure 2: Constructing LDA with Voronoi domains containing crystalline ice. The agreement between experiment and polycrystalline systems was explored for model systems across a grid of different levels of cubicity and polycrystallinity (given by the number of grains) – snapshots of the grid’s top and left boundaries decorate the borders of the figure. (a) Heatmap of the Pendry \(R\) factor for the agreement of the \(S(Q)\) between experiment and the model systems and the computational systems as a function of the proportion of molecules with crystalline ice environments and the cubicity. 
(b) The computational \(S(Q)\) with optimal agreement with experiment for different levels of polycrystallinity. (c) Snapshot of the system with the best match to the experimental data, with its local atomic environments indicated. the crystallinity of experimental LDA. We also note that there is a slight difference in cubicity (48 % and 33 %) of the crystalline grains in the two LDA models. However, the dependence on the match to LDA with cubicity observed in the polycrystalline systems is minor. ### Exploring the hybrid crystalline-amorphous nature of LDA. We now further investigate the hybrid crystalline-amorphous nature of the two LDA models by analysing their local structures. In Fig. 3a the distributions of the atomic local environments of water, ice I\({}_{\rm h}\), ice I\({}_{\rm c}\), and the LDA models are plotted. Both the local \(q_{6}\) and local \(q_{3}\) competently identify crystalline versus amorphous configurations, with the latter also able to differentiate between cubic and hexagonal environments.[55] The local environments of the two LDA models are very similar to each other. They contain significant amounts of highly crystalline and amorphous (as explored by liquid water) atomic local environments, with a smooth distribution of environments explored between these two levels of local order. This is consistent with a hybrid crystalline-amorphous structure. The hybrid crystalline-amorphous nature of the LDA models is also reflected in their oxygen-oxygen radial distribution functions, \(g(r)\), shown in Fig. 3b and c. The two LDA models have near identical \(g(r)\) and both show a very good agreement to experiment which is well maintained up to the longest range signal measured in experiment of 23 A. In the short range, up to \(\sim\) 6 A illustrated in Fig. 3b, the first coordination shell of LDA is very similar compared to crystalline ice whereas the \(g(r)\) feature of the second coordination shell is significantly broadened. From \(\sim\) 6 A onwards LDA's radial correlations quickly decay leaving it dissimilar to crystalline ice. Interestingly, it also shows subtle dissimilarities to liquid water at this range, as it maintains stronger correlations as illustrated in Fig. 3c. Mariedahl et al. (2018) made similar observations from their experimental signal, stating "it is essential to develop models of amorphous ices that can better describe these intermediate-range correlations".[45] The strong agreement observed over all ranges probed between the LDA models and experiment shows that this structure is well explained by a partially crystalline state. Fig. S6 shows the primitive ring distributions of the structures. Both LDA models are very similar, peaking at 6-membered rings (which ice I is entirely comprised of), along with a significant number of 5- and 7-membered rings. Physically, this is consistent with a partially crystallised state: similar distributions have been reported by Haji-Akbari and Debendetti for ice nuclei Figure 3: The hybrid crystalline-amorphous nature of the LDA models. (a) Probability densities of the local atomic environment of water, ice, and the LDA models – the relative intensities are not normalised between the plots. Contours are drawn at 90, 70, 50, 30, 10, and 1 % of the height of the probability density. (b) & (c) Oxygen-oxygen radial distribution functions at short (b) and long (c) range for the various computational models and experiment. 
Ice I\({}_{\rm h}\), ice I\({}_{\rm c}\) and the LDA models are sampled at 125 K and water at 250 K. formed in homogeneous nucleation, and by Fitzner et al. for the most immobile regions of supercooled water where homogeneous nucleation was found to occur.[58; 64] Cubicity memory effects in experimental LDA samples can be explained by a hybrid crystalline-amorphous state. Experimentally, we find indirect evidence for the presence of crystallinity in LDA by heating it to form stacking disordered ice (ice I\({}_{\text{sd}}\)) and quantifying the stacking probabilities. Fig. 4 shows the remarkable variability in the stacking probabilities of ice I\({}_{\text{sd}}\) obtained from the various LDA samples. Each LDA sample has a different "parent" phase: unannealed high-density amorphous ice (uHDA) samples formed by compressing ice I\({}_{\text{h}}\), ice I\({}_{\text{sd}}\) and medium density amorphous ice (MDA) respectively;[65] expanded high-density amorphous ice (eHDA) formed by compressing MDA to HDA; ice VIII; vapour deposition to give amorphous solid water (ASW); and the hyperquenching, performed by Kohl et al. in ref. [66], of an aerosol of liquid water droplets to give hyperquenched glassy water (HGW). A fully amorphous LDA structure would not be expected to display such memory effects with respect to the parent phase which therefore indicates the presence of crystalline grains with different cubicities in LDA samples. The obtained cubicities range from \(20-65\) %. Such a range of cubicity is difficult to explain by nucleation occurring during the crystallisation of a fully amorphous structure - homogeneous ice nucleation results in random stacking with \(50\) % cubicity[56; 57; 58; 59; 60] - and instead indicates that crystal growth from already existing ice seeds is the dominant process. In conjunction, the weak dependence of cubicity observed in the polycrystalline LDA models shows that crystalline ice I grains with different cubicities can be well hidden in the broad diffraction features of LDA samples. Similar differences in the crystallisation behaviour of LDA samples created via different pathways have also been reported by Seidl et al.[67] and Mariedahl et al.,[68] but accurate cubicities have not been reported. ## III Discussion In this work, we have found strong evidence indicating that LDA has a hybrid crystalline-amorphous structure. Computational models with near exact agreement of the structure factors with experiment were found via quenching liquid water and building polycrystalline ice. In both cases, the models resemble a partially crystalline state with grains of crystalline ice surrounded by amorphous "filler regions". Matching experiment was found to be a "goldilocks" scenario, where the structure is not too amorphous and not too crystalline - containing in the region of \(19-25\)% ice for the mW water model. Experimentally, indirect evidence of LDA's hybrid crystalline-amorphous state was found by transforming LDA to ice I\({}_{\text{sd}}\): a memory effect with respect to the parent material of LDA is seen in the stacking probabilities that cannot be explained by a fully amorphous LDA state and hence indicates the presence of crystalline ice grains already in the LDA with different cubicities. The very similar local environments in LDA and ice I are also consistent with their Raman spectra of the O-H stretching modes.[69] In fact, the LDA to ice I\({}_{\text{sd}}\) phase transition is barely visible in Raman spectroscopy. In Fig. 
S7, the vibrational density of states for the different levels of polycrystallinity explored in this study are plotted. This shows a qualitative agreement with experiment whereby the spectra show a weak dependence on the level of polycrystallinity and no dependence on the cubicity. In experiment, Winkel et al. reported the existence of two sub-states of LDA - which they named LDA-I and LDA-II - that exhibit subtle structural differences in their diffraction patterns.[43] It has been suggested that LDA-I may contain crystalline ice whilst LDA-II does not; however, no evidence has been provided.[67; 30] In this study, we have looked for computational models to Figure 4: Crystallization of LDA to ice I\({}_{\text{sd}}\): the experimental stacking probabilities of ice I\({}_{\text{sd}}\) depend on how the LDA was made. In this “stackogram”, \(\Phi_{\text{cc}}\) and \(\Phi_{\text{hc}}\) define the probabilities of cubic stacking after a previous cubic or hexagonal stacking event, respectively. The blue diagonal line represents random stacking, where \(\Phi_{\text{hc}}\) and \(\Phi_{\text{cc}}\) are equal. The dashed grey lines indicate constant cubicities. The four corners represent ice I\({}_{\text{h}}\), ice I\({}_{\text{c}}\), a physical mixture of the two and the (hc)\({}_{x}\) polytype which consists of strictly alternating cubic and hexagonal layers of ice. For each data point the “parent” phase from which the LDA was made is noted. For uHDA and eHDA the “grandparent” phases are also given in brackets. uHDA(I\({}_{\text{h}}\))\(*\) was created using a different experimental procedure to the other uHDA samples (see _Materials & Methods_). match the X-ray diffraction pattern of LDA-II collected by Mariedahl et al. (2018).[45] In Fig. S2, the X-ray diffraction patterns of LDA-I and LDA-II measured by Mariedahl et al. (2019)[68] are plotted together with the LDA-II pattern of Mariedahl et al. (2018)[45] and the optimal computational LDA models discovered here. We find that the differences between the experimental diffraction patterns of LDA-I and LDA-II samples are very subtle. Indeed, the LDA-I signal lies within the variation of the two different LDA-II measurements, indicating the difference between LDA-I and LDA-II could be down to experimental error. In Raman spectroscopy, no difference could be found between LDA-I and LDA-II.[70] In any case, the high similarity of LDA-I and LDA-II diffraction patterns mean they are both optimally matched by the same computational LDA models discovered in this study. Our conclusion that LDA is best modelled by a partially crystalline state thus holds for both LDA-I and LDA-II. The structural difference between the LDA-I and LDA-II appear to be subtle differences in crystallinity or experimental error or both. A hybrid-crystalline structure of LDA has potential wide-reaching implications. 
Being the most abundant form of water in the universe, LDA is important in astronomy; for instance, in astrochemistry LDA is thought to play a key role in promoting the variety, complexity and richness of molecules observed in the stellar-forming regions of space.[11; 12] In cryopreservation, the vitrification (avoidance of ice crystallisation) of biological matter is thought to be key to cell survival - here a fully amorphous LDA is desired to act as the embedding matrix.[71; 72] A fully amorphous LDA is similarly desired in the widely used technique of cryogenic electron microscopy[73] - achieving a truly amorphous LDA would help prevent sample damage and could help improve resolution. Crucially, LDA is key for our understanding of liquid water. A corresponding liquid state to LDA is central to the two-state model. And whilst a fully amorphous LDA can be achieved in simulation, confirming if a fully amorphous LDA can be achieved experimentally or exists in nature is now of paramount importance - especially given the recent discovery of MDA.[65] To achieve a fully amorphous LDA, novel techniques might need to be employed. For instance, a rapid enough cooling rate might completely avoid crystallisation. Finally, the findings of this study have implications for the structural nature of other amorphous materials. Of immediate interest are the other amorphous ices, such as HDA; future work to clarify if they are truly amorphous or not is now needed. It also potentially impacts the myriad of glasses that underpin many technologies (e.g. OLEDs and fibre optics). The beneficial application of glasses over crystals here is largely due to their macroscopic homogeneity and absence of grain boundaries.[74] Determining which materials are truly glassy and which ones are not could provide future opportunities for technological innovation. Another question that arises is the nature of "ultrastable glasses"[74; 75] - a class of material that can exhibit highly desirable properties. The low entropy of these materials could be explained by the presence of crystalline grains that were previously undetected, thereby providing a resolution to the Kauzmann paradox.[76] ## IV Materials & Methods ### Molecular dynamics (MD) simulations. MD simulations were performed with the large-scale atomic/molecular massively parallel simulator (LAMMPS) code.[77] Cubic simulations boxes periodic in \(x,y,z\) were used. The constant number of molecules, constant pressure, and constant temperature (NPT) canonical ensemble was sampled using 10-fold Nose-Hoover thermostats and barostats. MD simulations of mW used thermostats and barostats with relaxation times of 500 femtoseconds (fs) and 5 picoseconds (ps) respectively, and integrated the equations of motion with a timestep of 10 fs. MD simulations of TIP4P/2005 used thermostats and barostats with relaxation times of 200 fs and 2 ps, respectively. Simulations of quenching liquid water and of annealing polycrystalline ice employed isotropic and anisotropic barostats, respectively. There are, of course, many water models. In this study we chose to use mW because it offers the best trade-off between accuracy and computational cost. 
The choice was based on several reasons: (i) mW accurately captures important properties of water such as the density, structure, melting temperature, and the formation of LDA;[78] (ii) it has been widely and successfully used in studying the nucleation of ice;[55; 59; 60; 78; 79; 80; 81; 82; 83] (iii) it accurately captures the metastability of ice I\({}_{c}\)[78; 84] as well as the thermodynamics of stacking faults;[59; 85; 86] (iv) crucially, because it is considerably cheaper than an all-atom model and has faster dynamics mW enables quenching and crystallisation simulations in large system sizes (\(>100,000\) particles) - such system sizes are essential for determining the balance of ice-like and amorphous domains. The TIP4P/2005 water model was employed as an atomistic water model as it has also been widely and successfully applied to study water and ice, including the formation of LDA.[21; 46; 63; 81; 87] Quenching simulations were performed in a similar fashion to Gartner et al.[51] Water was first equilibrated at 300 K and 0 Pa for 2 ns, prior to generating starting configurations. The desired \(\kappa\) rate was achieved by smoothly ramping the temperature from 300 K to 125 K over the required time period. For \(\kappa=\infty\) K/ns, the structure was further relaxed (4 ns for mW, and 20 ns for TIP4P/2005). Using our Stacky program,[88] a range of ice I supercell structures with 60 layers, 322,560 mW water molecules and cubicities of 1, 0.73, 0.5, 0.37, 0.27, 0.13 and 0 were produced. After transforming the hexagonal to orthorhombic cells, the cell dimensions were 21.5, 21.8 and 22.0 nm in \(x\), \(y\), \(z\). The purpose-written nanoVD program then used these structures to create 5, 10, 20, 50, 100, 200, 500, 1,000, 2,500, 5,000, 10,000 or 15,000 Voronoi domains within a cell of the same size. The center points of the Voronoi domains were randomly chosen. The ice I structures were rotated randomly using a _zyz_ rotation matrix and shifted by random fractions of the cell edges. The latter step is essential to ensure that the final polycrystalline structure reflects the cubicity of the parent structure. The structures were then placed at the centers of the Voronoi domains followed by deleting all molecules that were more than half-way away with respect to the centers of the surrounding Voronoi domains including those located outside the cell through the periodic boundary conditions. As a quality check, it was confirmed for each polycrystalline structure that the number of the mW molecules approximately matched the corresponding number in the start structure. The various polycrystalline structures with different domain sizes and cubicities were then geometry optimised in LAMMPS using the steepest descent algorithm, then equilibrated at 125 K and 0 Pa for 50 ps with a timestep of 0.25 fs, then for 2 ns with a timestep of 10 fs, prior to undertaking analysis. Fig. S10 shows snapshots of the systems before and after the geometry optimisation plus equilibration. Structure factors were calculated using the Debye software ([https://github.com/wojdyr/debyer](https://github.com/wojdyr/debyer)) with a cutoff of half the minimum box dimension, and the sinc-fuction (sin(x)/x) to dampen the cut-off and reduce the Fourier termination effects. Ring statistics were calculated with the Rigorous Investigation of Networks Generated using Simulations (R.I.N.G.S) software [89] and the primitive rings criterion. 
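To make the Voronoi construction described above concrete, the sketch below shows a minimal version of the domain-assignment step: randomly placed grain centres, a random z-y-z rotation per grain, and assignment of each molecule to its nearest centre under periodic boundary conditions. Grain counts, box size and the "parent" coordinates are illustrative placeholders (a random point cloud stands in for the ice I supercells built with the Stacky program); the actual structures were generated with the purpose-written nanoVD program.

```python
# Minimal sketch of Voronoi-domain assignment with periodic boundary conditions.
# Box size, grain count and the "parent" coordinates are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
L = 21.5                                         # cubic box edge (nm), illustrative
n_grains = 2500
centres = rng.uniform(0.0, L, (n_grains, 3))     # random Voronoi centres

def random_rotation(rng):
    """Random rotation matrix from a z-y-z Euler sequence."""
    a, b, c = rng.uniform(0.0, 2 * np.pi, 3)
    def rz(t): return np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])
    def ry(t): return np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
    return rz(a) @ ry(b) @ rz(c)

def nearest_centre(points, centres, L):
    """Index of the closest centre for each point, using the minimum-image convention."""
    d = points[:, None, :] - centres[None, :, :]
    d -= L * np.round(d / L)                     # periodic minimum image
    return np.argmin(np.linalg.norm(d, axis=-1), axis=1)

# Build part of the polycrystal: one randomly rotated copy of the parent structure per
# grain, keeping only the molecules whose nearest centre is that grain (its Voronoi cell).
parent = rng.uniform(0.0, L, (5000, 3))          # placeholder for an ice I supercell
polycrystal = []
for g, c in enumerate(centres[:10]):             # a few grains only, for brevity
    coords = (parent - parent.mean(axis=0)) @ random_rotation(rng).T + c
    coords %= L                                  # wrap back into the periodic box
    keep = nearest_centre(coords, centres, L) == g
    polycrystal.append(coords[keep])
polycrystal = np.vstack(polycrystal)
print(polycrystal.shape)
```

The real workflow also shifts each parent copy by random fractions of the cell edges so that the polycrystal reflects the cubicity of the parent structure, and checks that the final molecule count matches the start structure, as noted above.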
The Visual Molecular Dynamics software [90] was used to calculate radial distribution functions and to generate images of systems. [91] ### Classifying local environment of water molecules. The local \(q_{6}\) order parameter of Li et al. was used to classify mW molecules as either crystalline or amorphous. [55] The threshold for labelling a particle as crystalline or amorphous was calculated following the methodology used by Espinosa et al. [92] Periodic boxes of bulk ice I\({}_{\text{h}}\), ice I\({}_{\text{c}}\), and water containing 184,000, 216,000 and 192,000 mW molecules respectively were simulated at 250 K and 0 Pa, and the corresponding local \(q_{6}\) values were extracted every 200 ps over 2 ns. The distributions of the local \(q_{6}\) values for crystalline and amorphous environments were then taken from the bulk crystalline ice and bulk liquid water simulations, respectively. As shown in Fig. S11, the cutoff for the local \(q_{6}\) between crystalline and amorphous environments is then taken where the probability of mislabelling a particle as crystalline when it is amorphous, and vice versa, is equal: local \(q_{6}\) gives a mislabelling percentage of 0.8 % here. A similar methodology was used to classify molecules as either ice I\({}_{\text{h}}\) or ice I\({}_{\text{c}}\), where instead the local \(q_{3}\) values were compared between bulk ice I\({}_{\text{h}}\) and ice I\({}_{\text{c}}\). Fig. S12 shows that local \(q_{3}\) gives a mislabelling percentage of 0.04 % here for the mW model. The same method was used to distinguish crystalline from amorphous molecules for the TIP4P/2005 model (using the configuration of the oxygens); however, an alternative definition of local \(q_{6}\) by Lechner and Dellago [93] was used, as it was found to give a smaller mislabelling percentage here than that of Li et al. Fig. S13 shows that a mislabelling percentage of 0.84 % was achieved. The plugin for metadynamics 2 (PLUMED2) [94; 95] was used to calculate local \(q_{6}\) and local \(q_{3}\). ### Pendry R-factor. The Pendry reliability factor ("Pendry R-factor") was calculated as defined by Pendry. [61] Let \(f\) be a signal and \(\dot{f}\) be its first derivative. To compare \(f\) with another signal, the logarithmic derivative is employed to give emphasis to the location of the maxima/minima rather than to the amplitudes, \[L=\dot{f}/f \tag{1}\] A comparison between theory, \(\bar{L}\), and experiment, \(\hat{L}\), using the \(L\) functions gives too high an emphasis to zeros; therefore, the Pendry y-function is employed to give a similar emphasis to troughs and peaks in the signal, \[Y=L^{-1}/(L^{-2}+V^{2}) \tag{2}\] The imaginary part of the electron self-energy term used in the LEED community, \(V\), was set to two here. The Pendry R factor between theory, \(\bar{Y}\), and experiment, \(\hat{Y}\), is then defined as \[R=\frac{\sum_{i}(\bar{Y}_{i}-\hat{Y}_{i})^{2}}{\sum_{i}(\bar{Y}_{i}^{2}+\hat{Y}_{i}^{2})} \tag{3}\] This was employed between the computational and experimental \(S(Q)\) over the entire \(0.65-23\) A range of the experimental signal. \(R\) was employed due to its popularity in the LEED community for quantifying the agreement of computational models with experiment. In Fig. S1, the results obtained with \(R\) are shown to qualitatively agree with the coefficient of determination in this study. ### Experimental procedures for creating the LDA "parent" phases.
All uHDA samples, except from uHDA(I\({}_{\text{h}}\))*, were prepared in a 0.8 cm diameter pressure die lined with indium, pre-cooled to 77 K, and compressed to 1.6 GPa at 5 kN/min. uHDA(I\({}_{\text{h}}\))* was prepared by injecting 0.4 mL H\({}_{2}\)O into the pre-cooled indium cup, forming ice I\({}_{\text{h}}\), followed by pressure-induced amorphization (PIA) similar to Mishima et al.[24] In the case of uHDA from MDA, a \(77\) K-cooled indium cup was filled with MDA and placed into the pressure die where the preparation of MDA can be found in ref. [65]. To create the parent phases of ice I\({}_{\text{h}}\) and ice I\({}_{\text{sd}}\), the \(0.4\) mL H2O (Milli-Q, Millipore) injected into the cold pressure die was compressed to \(0.35\) GPa and heated to \(200\) K forming ice II. Following quenching to \(77\) K, ice II was decompressed to \(100\) N whereafter ice I\({}_{\text{sd}}\) or I\({}_{\text{h}}\) was extracted at \(182\) K or \(260\) K, respectively as determined by the in situ volume changes during heating. The resultant ice was then transformed to uHDA through PIA. Further steps were taken when creating eHDA, which was made by heating uHDA to \(145\) K at \(0.25\) GPa before quenching to \(77\) K. All samples were extracted in a liquid nitrogen environment for further analysis. ### Experimental procedure for transforming LDA samples to ice I\({}_{\text{sd}}\)on heating. The various samples were transferred under liquid nitrogen into a purpose-built Kapton window sample holder mounted on a Stoe Stadi P diffractometer with Cu K\(\alpha\)1 radiation at \(40\) kV, \(30\) mA and monochromated by a Ge \(111\) crystal. A Mythen \(1\)K area detector collected data every \(10\) K from \(93-260\) K with a heating rate of \(1\) K/min as controlled by an Oxford Instruments Cryojet HT. The samples form LDA around \(110\) K (except ASW and HGW which are already LDA to begin with), and ice I\({}_{\text{sd}}\) around \(140-150\) K, before finally converting to I\({}_{\text{h}}\) above \(220\) K. An example of this heating procedure when applied to uHDA is shown in Fig. S8. The first ice I\({}_{\text{sd}}\) patterns after the conversion from LDA upon heating were fitted with the MCDIFFaX software.[96] The software searches for the optimum lattice constants, stacking probabilities, peak profile parameters and zero-shift to reach a \(\chi^{2}\) convergence and hence best-fit to the experimental data. The resultant fits overlaying the experimental data are shown in Fig. S9. The MCDIFFaX fits of ice I\({}_{\text{sd}}\) from ASW and LDA from ice VIII have already been published in ref. [44] and [97]. All other X-ray data has not been previously published. ## V Acknowledgements We thank S. J. Cox and V. Kapil for stimulating discussions and suggestions, M. Vickers for help with the X-ray diffraction measurements and J. K. Cockcroft for access to a Cryojet HT. **Funding** We are grateful to the Materials Chemistry Consortium (Grant EP/L000202) and the UK Materials and Molecular Modelling Hub (Grants EP/P020194/1 and EP/T022213/1) for access to the ARCHER, Thomas, and Young supercomputers. We acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research innovation programme grant 725271. **Competing interests** Authors declare that they have no competing interests. **Author contributions** Conceptualization: MBD, CGS, AM. Investigation: MBD, ARF, CGS, AM. Funding acquisition: CGS, AM. Software: MBD, CGS. Visualisation: MBD, CGS. 
Writing - original draft: MBD. Writing - review & editing: MBD, ARF, CGS, AM.
2305.04738
Scaling and Universality in the Temporal Occurrence of Repeating FRBs
Fast Radio Bursts (FRBs) are energetic phenomena that have significant implications for understanding fundamental physics and the universe. Recent observations of FRB 121102, FRB 20220912A, and FRB 20201124A by the Five-hundred-meter Aperture Spherical Telescope (FAST) showed high burst rates and distinctive energy distribution and temporal properties. In this study, we examine these observations to investigate the scale invariance of the waiting times between bursts for intervals longer than approximately 1 second. Our analysis revealed a unified scaling law for these longer intervals, which is similar to the behavior of solar flares. This discovery inspires us to suggest a dual analogy of the FRB scenario across the entire time intervals: with earthquake dynamics at subsecond scales and with solar flare dynamics beyond the one-second threshold. This threshold potentially aligns with the dynamic time scale of neutron star crusts, offering insight of the occurrence of FRBs into the internal processes of neutron stars.
Yan-Qi Du, Ping Wang, Li-Ming Song, Shao-Lin Xiong
2023-05-08T14:41:06Z
http://arxiv.org/abs/2305.04738v3
# Scaling and Universality in the Temporal Occurrence of Repeating FRBs ###### Abstract The dynamics of repeating fast radio bursts (FRBs) are driven by their physical nature and central engine; however, their event rate, energy distribution and temporal occurrence behaviour remain uncertain due to the severe lack of information about individual bursts. Recently, the availability of high-frequency observation data from the Five-hundred-meter Aperture Spherical radio Telescope (FAST) has made it possible to study the temporal occurrence statistically on timescales from several milliseconds to over several thousand seconds. In this work we studied the temporal occurrence of both FRB121102 and FRB20201124A and report a statistical result on the behavior of the waiting time (or recurrence time) between successive bursts. The results exhibit a novel scaling and universality that has not been reported in the field before. Specifically, we find a scaling law for the FRB recurrence-time distribution, which is a clear indication of the importance of correlations in the structure of the physical source and its central engine. The scaling relationships are observed over timescales spanning three orders of magnitude. Given that the two repeating FRBs share the same scaling law, we infer that the scaling law of the waiting-time distribution should act as an indicator that provides insight into the physical nature of the sources and guides the development of central engine models. ## I Introduction FRBs are among the brightest outbursts in the universe; they release an enormous amount of energy in a short time and are detected as intense millisecond-duration bursts at radio wavelengths[1; 2; 3; 4]. Recent observations of both FRB121102 and FRB20201124A by FAST reveal their high burst rates and more subtle structure in the distribution of energy and in the temporal properties of the bursts[5; 6]. Here, we provide evidence for scale invariance of the occurrence time intervals between successive bursts. The result shows scale invariance under a rescaling of time by the burst rate for different fluence (or energy) thresholds. Furthermore, the possible dynamical mechanism behind the occurrence rate of repeating FRBs is discussed in terms of a time-dependent Poisson process, from which the scale invariance of the waiting times arises naturally. To study the temporal clustering of repeating FRBs, we use two high-quality data sets: FRB121102 [5], which includes 1652 independent bursts detected in 59.5 hours spanning 47 days, and FRB20201124A [6], which includes 1863 independent bursts detected in 82 hours spanning 54 days. Both data sets were detected by FAST recently, and the availability of high-resolution detections lowered the flux threshold by up to a factor of three compared with previous observations. For FRB121102, recent research has shown that there is no periodicity or quasi-periodicity on timescales between 1 ms and 1000 s[5]. In this analysis, we investigate the waiting times and their scale invariance for different fluence (or energy) thresholds, and we find a novel scaling and universality that has not been reported in the field before. The process is remarkably well described by a time-dependent Poisson process except for the rarest events. For time intervals spanning three orders of magnitude, the dynamics of the burst recurrence-time distribution are in agreement with the predictions of a time-dependent Poisson process.
Because the FAST observations of FRB121102 and FRB20201124A are discontinuous, we define the waiting time as the recurrence interval between consecutive events within the same observation session, thereby avoiding long observational gaps. The temporal clustering of FRBs is described by means of the recurrence-interval distribution \(P(\Delta t)\), where \(\Delta t=t_{i+1}-t_{i}\), and \(t_{i}\) and \(t_{i+1}\) are the arrival times of the \(i\)th and \((i+1)\)th bursts, respectively. In the statistical analysis, logarithmic bins are adopted so that the intervals \(\Delta t\) have the same length in log scale, ensuring an appropriate bin size for each time scale. However, since it is the probability density function (PDF) of the waiting times that is calculated, the number of bins and their width are irrelevant to the analysis. ## II Results The waiting time distributions \(P(t)\) of FRB121102 are shown in Fig.1(a). For every threshold, the range of the PDF bins was set between the minimum and maximum values of \(\Delta t\), in logarithmically spaced equal bins. Here, \(P(t)\) exhibits a power-law tail with a different slope for each intensity threshold. With increasing threshold, \(P(t)\) evolves smoothly, becoming flatter out to longer times. All of the distributions share a similar qualitative behavior: a slow decrease at the beginning of the temporal range ending with a much sharper decay. A scaling law for the waiting time distribution was first put forward by Bak et al. to describe the waiting-time statistics of earthquakes[7], and a self-organized-criticality (SOC) mechanism was introduced to account for the complexity of seismicity[8]. In this scenario, the waiting time statistics of earthquakes can be described by a universal scaling relation \[P(t,M)\sim Rf(tR), \tag{1}\] where the mean rate \(R\) depends on the magnitude M and is defined as the total number of earthquakes divided by the whole time interval over which the earthquakes take place. The scaling function \(f\) is the same for all magnitudes M. A similar scaling argument is adopted here, where \(R(I)\) provides the proper rescaling factor for the waiting-time statistics and \(I\) is the fluence of the bursts. Fig.1(b) shows the rescaling of all the previous distributions by the mean rate \(R(I)\) for the different fluence thresholds. All the data collapse onto a single unified function, and the scaling function is \[g(\tau)\sim(1+\frac{\tau}{\beta})^{-\gamma}, \tag{2}\] where \(\gamma\simeq 3.009\pm 0.016\) and \(\beta\simeq 0.051\pm 0.0014\), respectively. The result above shows that a single scaling function describes the waiting time distribution reasonably well for the different thresholds, with small differences in each case. At short times the distribution exceeds the value given by the scaling function, a deviation that may be due to overcounting introduced by sub-bursts. In this way, the relation \(g\) can be regarded as a unified scaling function, reflecting the self-similarity in intensity of the temporal occurrence of repeating FRBs. Despite the large variability associated with the fluence, it is remarkable that the temporal occurrence of repeating FRBs is governed by a single simple scaling law, and the validity of this scaling law is quite general, independent of any model of the temporal occurrence. It should be noted that the isotropic equivalent energy distribution is fitted by a bimodal structure[5], suggesting possibly more than one emission mechanism, emission site or beam shape.
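A minimal sketch of this rescaling analysis is given below: waiting times are collected above each fluence threshold, binned logarithmically into a PDF, and rescaled by the mean burst rate. The burst arrival times, session boundaries and thresholds are synthetic placeholders, not the FAST catalogues analysed here, and the mean rate is approximated as the inverse mean waiting time.

```python
# Illustrative sketch of the waiting-time rescaling analysis.
# Arrival times, session boundaries and fluence thresholds are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "catalogue": arrival time (s), fluence (arbitrary units), observation session id.
t = np.sort(rng.uniform(0.0, 3600.0, 2000))
fluence = rng.lognormal(mean=-1.0, sigma=1.0, size=t.size)
session = (t // 600.0).astype(int)               # pretend each 600 s block is one session

def waiting_times(t, session, fluence, threshold):
    """Waiting times between consecutive bursts above `threshold`, within each session."""
    keep = fluence >= threshold
    dts = []
    for s in np.unique(session[keep]):
        ts = t[keep & (session == s)]
        if ts.size > 1:
            dts.append(np.diff(ts))
    return np.concatenate(dts) if dts else np.array([])

def log_binned_pdf(dt, n_bins=25):
    """Probability density of dt on logarithmically spaced bins."""
    edges = np.logspace(np.log10(dt.min()), np.log10(dt.max()), n_bins + 1)
    counts, edges = np.histogram(dt, bins=edges)
    centres = np.sqrt(edges[:-1] * edges[1:])
    return centres, counts / (counts.sum() * np.diff(edges))

for threshold in (0.1, 0.3, 1.0):
    dt = waiting_times(t, session, fluence, threshold)
    rate = 1.0 / dt.mean()                       # approximate mean burst rate R(I)
    centres, pdf = log_binned_pdf(dt)
    tau, g = centres * rate, pdf / rate          # rescaled axes: tau = t*R, g = P/R
    print(f"threshold {threshold}: {dt.size} intervals, rate {rate:.3f} s^-1")
```

When the collapse holds, the rescaled curves for the different thresholds fall onto a common function, which is then fitted with the form of Eq. (2).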
The above analysis clearly shows that they all collapsed on one single scaling function of waiting time which is irrelevant of energy thresholds. For comparison the same analysis is conducted with FRB20201124A. The waiting time distribution \(P(t)\) shown in Fig.2(a) shares similar behaviors for different fluence thresholds compared with FRB121102 results shown in Fig.1(a). After rescaling using Eq.(1) the entire distribution for all thresholds is also consistent with the same function \(g(\tau)\) [En.(2)], with \(\gamma\simeq 2.311\pm 0.007\) and \(\beta\simeq 0.052\pm 0.0002\), respectively. Some deviation from the scaling function for large \(t\) may be affected by the finite time length of each observation session, which underestimates the occurrence of long waiting times. In these cases the same unified function \(g(\tau)\) obtained are in perfect agreement with both FRB121102 and FRB20201124A, demonstrating the universality of the phenomenon. The slow decay provided by the waiting time distribution (Eq.(2)) is a signature of bursts clustering, compared with a Poisson process which means that the bursts tend to be closer to each other in the short time scale. However, the lack of exponential decay might simply imply that the system is being driven in a correlated way. Although the physical origin of the correlated drive will be physical background dependent, the present characterization of the stochastic temporal occurrence of bursts by means of a unified law indicates the existence of universal mechanisms in the bursts generation process, governed only by the bursts of occurrence rate. A non-Poisson property of bursts clustering over time and power-law energy distribution for both FRB121102 and FRB20201124A have been shown[5; 6; 9; 10; 11; 12], implying that bursts are likely to occur involving a more complex stochastic process. Supposing that the burst rate \(\lambda(t)\) of repeating FRBs is time dependent and varies slowly with time. Similar process was introduced in modeling time dependent Poisson process of solar flare waiting time distribution[13; 14]. Then the distribution of waiting times is described as \[P(t,I)=\frac{\int_{0}^{\infty}f_{I}(\lambda)\lambda^{2}\exp{(-\lambda t)d\lambda}} {\overline{\lambda}_{I}}, \tag{3}\] where the denominator \(\overline{\lambda}_{I}=\int_{0}^{\infty}\lambda f_{I}(\lambda)d\lambda\) is the average bursts rate, and \(f_{I}(\lambda)d\lambda\) is the Figure 1: Waiting time distributions without and with rescaling. (a) Waiting time distributions \(P(t)\) of FRB121102 for different fluence thresholds (Unit: Jy ms). One of the threshold of fluence 0.281 approximately near to energy \(3.0\times 10^{38}\) (erg). (b) Waiting time distributions after rescaling by the mean rate of bursts in the case of different fluence thresholds (Unit: Jy ms), respectively. fraction of the time that the burst rate is in the range \((\lambda,\lambda+d\lambda)\) with intensity greater than threshold \(I\). The analytic form of \(f_{I}(\lambda)\) is constrained by the scaling function \(g(\tau)\), when the analytic form of \(f_{I}\) is taken as Figure 2: Waiting time distributions without and with rescaling. (a) Waiting time distributions \(P(t)\) of FRB20201124A for different fluence thresholds (Unit: Jy s). (b) Waiting time distributions after rescaling by the mean rate of bursts in the case of different fluence thresholds (Unit: Jy s), respectively. 
When the analytic form of \(f_{I}\) is taken as

\[f_{I}(\lambda)=\lambda^{-2}\lambda_{I}G_{I}(\lambda), \tag{4}\]

where \(G_{I}\) is the Gamma distribution

\[G_{I}(\lambda)\sim(\frac{\beta\lambda}{R_{I}})^{\gamma-1}\exp(-\frac{\beta \lambda}{R_{I}}), \tag{5}\]

the unified scaling function of Eq.(2) is recovered after inserting \(f_{I}\) into Eq.(3). Furthermore, the parameter \(\beta=\gamma/\lambda_{0}\), where

\[\lambda_{0}=\int_{0}^{\infty}\lambda G_{I}(\lambda)d\lambda. \tag{6}\]

The non-Poissonian character of the waiting time distribution implies that the central engine of the bursts involves a more complex dynamical mechanism rather than a purely memoryless stochastic process. Recent studies have shown that many driven nonequilibrium systems of sufficient complexity are effectively described by a superposition of different dynamics on different time scales[15; 16; 17; 18; 19]. In this picture the rate \(\lambda\) in the Gamma distribution \(G_{I}\) corresponds to the superposition of \(n=2\gamma\) squared independent Gaussian variables \(X_{k}\) with zero mean,

\[\lambda=\sum_{k=1}^{n}X_{k}^{2}. \tag{7}\]

In this sense the Gamma distribution appears naturally for a variable with a finite number \(n\) of degrees of freedom, corresponding to \(n\approx 6\) and \(n\approx 4\) for FRB121102 and FRB20201124A, respectively. The exact value of the parameter \(\gamma\) is set by the number of degrees of freedom of the underlying Gaussian variables, which are determined by the specific stochastic processes involved. The unified scaling function \(g(\tau)\) can also be rewritten as

\[g(\tau)\sim(1+\lambda_{0}(q-1)\tau)^{-\frac{1}{q-1}}, \tag{8}\]

after replacing \(\gamma\) with \(1/(q-1)\) and \(\beta\) with \(\gamma/\lambda_{0}\). This functional form is also called the Tsallis q-exponential function, where the parameter \(q\) measures the deviation from an exponential distribution and \(q>1\) indicates a long-tailed behavior.

Figure 3: Comparison between FRB121102 (Fluence unit: Jy ms) and FRB20201124A (Fluence unit: Jy s) after rescaling.
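The superstatistical picture of Eqs. (3)-(8) can be illustrated numerically: a Poisson process whose rate is redrawn as a sum of squared Gaussians, Eq. (7), produces waiting times with a q-exponential tail. The sketch below is ours, under simplifying assumptions (unit-variance Gaussians and a piecewise-constant rate over blocks of fixed duration); it is not the analysis of the observed data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_waiting_times(n_dof=6, n_blocks=2000, block_length=50.0):
    """Waiting times of a Poisson process whose rate is redrawn for each block
    as a sum of n_dof squared zero-mean, unit-variance Gaussians (Eq. (7))."""
    waits = []
    for _ in range(n_blocks):
        lam = np.sum(rng.normal(size=n_dof) ** 2)     # Gamma-distributed rate, cf. Eq. (5)
        events, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / lam)           # Poissonian gaps at fixed rate
            if t > block_length:
                break
            events.append(t)
        waits.extend(np.diff(events))                 # waiting times within the block
    return np.asarray(waits)

def q_exponential(tau, lam0, q):
    """Eq. (8), up to normalization: g(tau) ~ (1 + lam0 (q-1) tau)^(-1/(q-1))."""
    return (1.0 + lam0 * (q - 1.0) * tau) ** (-1.0 / (q - 1.0))

waits = simulate_waiting_times(n_dof=6)               # gamma = n_dof / 2 = 3
# A log-binned histogram of `waits` can be compared with q_exponential,
# using q = 1 + 1/gamma and lam0 equal to the mean rate (here n_dof).
```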
## III Discussion

It is worth noting that solar flares also share the same scaling feature of the waiting time distribution, which is independent of the intensity threshold[20] for the whole phase of the solar cycle. At solar maximum, the parameter \(\gamma\simeq 2.83\), which is in approximate agreement with FRB121102 (\(\gamma\simeq 3.009\)). At solar minimum the parameter \(\gamma\simeq 1.51\) is well described by the distribution of laminar phases in the context of marginal behavior in on-off intermittency, with an exponent \(\gamma=3/2\)[21]. In general, intermittency is defined as the occurrence of a signal that randomly switches between regular laminar phases and relatively short irregular bursts. For different types of intermittency, the distribution of laminar phases shows various exponents[22]. The distribution of laminar phases of Type-II intermittency provides an exponent \(\gamma=2\), which is also in approximate agreement with the waiting time distribution parameter \(\gamma\simeq 2.311\) for FRB20201124A. Inspired by the fact that the parameter \(\gamma\) differs between phases of the solar cycle, the differences seen in the \(\gamma\) values obtained from the unified scaling function for FRB121102 and FRB20201124A may be attributable to different underlying stochastic processes. Solar flares are highly energetic explosions from active regions of the Sun powered by strong and twisted magnetic fields; like repeating FRBs, they involve enormous and rapid energy releases characterized by a complex temporal occurrence. Another analogy between solar flares and repeating FRBs is supported by the power-law frequency distributions of burst energies[5; 10; 23; 24]. Together with these results, the observed universality of the stochastic processes underlying two apparently different phenomena suggests a common approach to the interpretation of both phenomena in terms of a similar driving physical mechanism. A recent study also suggested, as a possible central engine, that FRBs are generated by magnetic energy released in magnetar magnetospheres[25], and argued that the repeating FRB121102 is powered by a millisecond magnetar through its rotational or magnetic energy. Alternatively, fully developed turbulence presents similar statistical results for the fluctuations of some observable quantities (velocity or magnetic field) and for the waiting time distribution[15; 17; 19]. Different from the SOC process, turbulence arises in fluids, where chaotic dynamics and power-law statistics coexist, and conceptually different mechanisms account for the SOC phenomenon and the phenomenon of intermittency.

**Acknowledgements:** We would like to thank Mingyu Ge for useful discussions. The authors acknowledge the computing resources provided by National High Energy Physics Data Center and High Energy Physics Data Center, Chinese Academy of Science.
2305.13244
Pseudospin-orbit coupling and non-Hermitian effects in the Quantum Geometric Tensor of a plasmonic lattice
We theoretically predict the full quantum geometric tensor, comprising the quantum metric and the Berry curvature, for a square lattice of plasmonic nanoparticles. The gold nanoparticles act as dipole or multipole antenna radiatively coupled over long distances. The photonic-plasmonic eigenfunctions and energies of the system depend on momentum and polarization (pseudospin), and their topological properties are encoded in the quantum geometric tensor. By T-matrix numerical simulations, we identify a TE-TM band splitting at the diagonals of the first Brillouin zone, that is not predicted by the empty lattice band structure nor by the highly symmetric nature of the system. Further, we find quantum metric around these regions of the reciprocal space, and even a non-zero Berry curvature despite the trivial lattice geometry and absence of magnetic field. We show that this non-zero Berry curvature arises exclusively from non-Hermitian effects which break the time-reversal symmetry. The quantum metric, in contrast, originates from a pseudospin-orbit coupling given by the polarization and directional dependence of the radiation.
Javier Cuerda, Jani M. Taskinen, Nicki Källman, Leo Grabitz, Päivi Törmä
2023-05-22T17:13:44Z
http://arxiv.org/abs/2305.13244v2
Pseudospin-orbit coupling and non-Hermitian effects in the Quantum Geometric Tensor of a plasmonic lattice ###### Abstract We theoretically predict the full quantum geometric tensor, comprising the quantum metric and the Berry curvature, for a square lattice of plasmonic nanoparticles. The gold nanoparticles act as dipole or multipole antenna radiatively coupled over long distances. The photonic-plasmonic eigenfunctions and energies of the system depend on momentum and polarization (pseudospin), and their topological properties are encoded in the quantum geometric tensor. By T-matrix numerical simulations, we identify a TE-TM band splitting at the diagonals of the first Brillouin zone, that is not predicted by the empty lattice band structure nor by the highly symmetric nature of the system. Further, we find quantum metric around these regions of the reciprocal space, and even a non-zero Berry curvature despite the trivial lattice geometry and absence of magnetic field. We show that this non-zero Berry curvature arises exclusively from non-Hermitian effects which break the time-reversal symmetry. The quantum metric, in contrast, originates from a pseudospin-orbit coupling given by the polarization and directional dependence of the radiation. ## I Introduction The search of novel states of matter demands new tools for classifying the underlying structure of eigenstates describing the system in the form of a non-trivial geometry, beyond the traditional classification based on the energy band structure. Concepts such as the Berry curvature or the Chern number have contributed to this, revealing that the key properties of the system may be protected in the form of topological invariants that take discrete values. The discovery of topological insulators and superconductors [1; 2; 3] has recently been extended to bosonic (photonic) systems [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] that may break the time-reversal symmetry [16] and feature, as the former, chiral edge states and protection against imperfections [17]. The quantum geometric tensor (QGT) [18] has become an important concept as it provides the structure of eigenfunctions (Bloch states) of a Hamiltonian. Its real part is the quantum metric [19; 20; 21; 22; 23; 24], related to the distance between eigenstates, and the imaginary part gives the Berry curvature [16; 25] that characterizes their phase. Recently, the full QGT has been measured in several systems [26; 27; 28; 29; 30]. Further, non-Hermitian effects [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] have been shown to manifest as non-trivial quantum geometric phenomena [52], and very recently non-Hermitian quantum metric has been reported [53]. In Ref. [54], we experimentally show the existence of a non-zero quantum metric, and the pioneering observation of non-Hermitian Berry curvature in a plasmonic lattice. Both in electronic and photonic systems, Berry curvature can be created by breaking the time reversal or some other symmetry [55]. Moreover, even in time-reversal invariant systems, the spin-orbit coupling can generate an effective magnetic field that produces non-trivial quantum geometric or even topological phenomena [56; 57]. 
The spin-orbit coupling can be induced through several mechanisms: for example, it can come from a genuine magnetic field interacting with spin up and spin down states [3], or from fluxes introduced by complex hoppings in a two-site unit cell lattice [58], or from a specific light polarization (pseudospin) dependence of the band structure of a photonic system [59]. Here we unveil, with detailed numerical simulations, the connection of such phenomena with the observations in Ref. [54]. In particular, we find that square plasmonic lattices have a pseudospin-orbit coupling originating from their specific radiation properties, and that time-reversal-symmetry breaking by losses plays a crucial role in the quantum geometric properties of the system. The structure of this article is the following. We review the definition of the QGT in Section II. The quantum geometric tensor describes properties of the single-particle wavefunction; it does not involve multiparticle or interaction effects, which could lead to entanglement or quantum-statistical effects. Thus, our results apply for both quantum and classical regimes. The experiments in Ref. [54] were performed in the many-photon (classical) setting. In Section III we implement a transition (T-)matrix approach that accounts for both the long-range radiative interactions within a plasmonic lattice and the optical response of the individual metallic nanoparticles. By means of infinite summation techniques across the whole lattice, the collective modal dispersion relations \(\omega^{\prime}(\mathbf{k})\) and the losses of each mode \(\omega^{\prime\prime}(\mathbf{k})\) are numerically obtained in the reciprocal space. With this method, we investigate the possible removal of degeneracies and gap openings that typically lead to non-trivial topological phenomena [60; 61]. Here we reveal a TE-TM band splitting along the diagonals of the first Brillouin zone, for both the real (band energy) and the imaginary part (modal losses) of the eigenfrequency. In Section IV, by systematically tracking the eigenfunctions obtained with the T-matrix approach across a grid in \(k-\)space, we explicitly calculate the quantum metric and Berry curvature. Our results show a clear non-zero contribution of all the components of the QGT along the diagonals of the Brillouin zone, intimately related to band degeneracy removal and dissipation losses. Finally, in Section V, we discuss the nature of time-reversal symmetry breaking by losses (or gain) and its relation with the Berry curvature.

## II Quantum geometric tensor: definitions

Here we review basic concepts of band geometry and topology that are utilized throughout this paper. We consider systems that are periodic in real space, thus the eigenmodes of the system \(|u_{n,\mathbf{k}}\rangle\) correspond to Bloch bands with energies \(E_{n,\mathbf{k}}\), where \(n\) is the band index and \(\mathbf{k}\) is a vector in the reciprocal space. We will consider lattice modes that are confined in the plane of the lattice, hence we may restrict our treatment to the two-dimensional \(k-\)space: \(\mathbf{k}=(k_{x},k_{y})\). All the definitions used, as well as our theoretical results, apply for quantum and classical wave functions equally. We use the terminology "quantum geometric tensor" and "quantum metric" as a convention, even if they can be used to characterize classical systems/parameter regimes too.
We investigate the underlying quantum geometry that describes small changes in the parameters \(\mathbf{k}\rightarrow\mathbf{k}^{\prime}=\mathbf{k}+d\mathbf{k}\) that affect \(|u_{n,\mathbf{k}}\rangle\). To this purpose, we first introduce the quantum metric, also called Fubini-Study metric. We start from the definition of the infinitesimal distance between the states \(|u_{n,\mathbf{k}}\rangle\) and \(|u_{n,\mathbf{k}^{\prime}}\rangle\): \[ds^{2}=1-|\langle u_{n,\mathbf{k}}|u_{n,\mathbf{k}+d\mathbf{k}}\rangle|^{2}. \tag{1}\] By means of Taylor expansion to second order in Eq. (1), combined with the normalization \(\langle u_{n,\mathbf{k}}|u_{n,\mathbf{k}}\rangle=1\), the metric that defines such a distance is characterized by a real, second-rank symmetric tensor \(ds^{2}=g_{ij}^{n}dk_{i}dk_{j}\), defined as follows: \[g_{ij}^{n}=\Re\bigg{\{} \bigg{\langle}\frac{\partial u_{n,\mathbf{k}}}{\partial k_{i}} \bigg{|}\frac{\partial u_{n,\mathbf{k}}}{\partial k_{j}}\bigg{\rangle}\] \[-\bigg{\langle}\frac{\partial u_{n,\mathbf{k}}}{\partial k_{i}} \bigg{|}u_{n,\mathbf{k}}\bigg{\rangle}\bigg{\langle}u_{n,\mathbf{k}}\bigg{|} \frac{\partial u_{n,\mathbf{k}}}{\partial k_{j}}\bigg{\rangle}\bigg{\}}. \tag{2}\] The quantum metric (2) is gauge-invariant, hence a measurable property of the system. Another important gauge-invariant quantity is the Berry curvature. While the quantum metric is derived from the modulus of two quantum states via the distance (1), the Berry curvature can be obtained by using the phase from the complex inner product of states \(\langle u_{n,\mathbf{k}}|u_{n,\mathbf{k}^{\prime}}\rangle=re^{i\Delta\varphi_ {\mathbf{k},\mathbf{k}^{\prime}}}\). By iteratively applying this operation along a closed loop in \(k-\)space, it is found that the accumulated phase only depends on the followed geometrical path \(\gamma\)[55]. This is the celebrated Berry phase, and reads as follows: \[\varphi_{B}=i\oint_{\gamma}\langle u_{n,\mathbf{k}}|\nabla_{\mathbf{k}}u_{n, \mathbf{k}}\rangle\cdot d\mathbf{k}. \tag{3}\] Using the Stokes theorem, we may rewrite Eq. (3) as a surface integral, obtaining: \[\varphi_{B}=\int_{S}\sum_{i<j}\mathfrak{B}_{ij}^{n}dk_{i}\wedge dk_{j}, \tag{4}\] where \(S\) is the oriented surface enclosed by the contour \(\gamma\). The integrand of Eq. (4) is the Berry curvature: \[\mathfrak{B}_{ij}^{n}=-2\Im\bigg{\langle}\frac{\partial u_{n,\mathbf{k}}}{ \partial k_{i}}\bigg{|}\frac{\partial u_{n,\mathbf{k}}}{\partial k_{j}}\bigg{\rangle}, \tag{5}\] which is an antisymmetric second-rank tensor independent of the choice of \(\gamma\) and \(S\). The gauge-invariant quantum metric (2) and Berry curvature (5) tensors characterize the geometry of the quantum state manifold, and may be combined into the quantum geometric tensor [18]: \[T_{ij}^{n}=\Big{\langle}\frac{\partial u_{n,\mathbf{k}}}{\partial k _{i}}\Big{|}\frac{\partial u_{n,\mathbf{k}}}{\partial k_{j}}\Big{\rangle}\] \[-\Big{\langle}\frac{\partial u_{n,\mathbf{k}}}{\partial k_{i}} \Big{|}u_{n,\mathbf{k}}\Big{\rangle}\Big{\langle}u_{n,\mathbf{k}}\Big{|}\frac{ \partial u_{n,\mathbf{k}}}{\partial k_{j}}\Big{\rangle}. \tag{6}\] The real and imaginary parts of the QGT in Eq. (6) relate directly to the quantum metric and Berry curvature: \(g_{ij}^{n}=\Re T_{ij}^{n}\), and \(\mathfrak{B}_{ij}^{n}=-2\Im T_{ij}^{n}\). 
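For orientation, it is useful to recall the textbook two-band case (a generic illustration, not a model of the plasmonic lattice studied here): for a Hermitian Bloch Hamiltonian \(H(\mathbf{k})=\mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma}\) with unit vector \(\hat{\mathbf{d}}=\mathbf{d}/|\mathbf{d}|\), the definitions above evaluate to

\[g_{ij}=\frac{1}{4}\,\partial_{k_{i}}\hat{\mathbf{d}}\cdot\partial_{k_{j}}\hat{\mathbf{d}},\qquad\mathfrak{B}_{xy}=\mp\frac{1}{2}\,\hat{\mathbf{d}}\cdot\left(\partial_{k_{x}}\hat{\mathbf{d}}\times\partial_{k_{y}}\hat{\mathbf{d}}\right),\]

with the sign of the curvature depending on the band. In this simple case the quantum metric measures how fast the eigenstates move on the Bloch sphere, while the Berry curvature measures the solid angle they sweep as \(\mathbf{k}\) is varied.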
Although expressions (2) and (6) have been applied previously for studies of non-Hermitian systems [52], it should be noted that they are based on the hermiticity of the norm: \(\langle u_{n,\mathbf{k}}|u_{n,\mathbf{k}}\rangle=1\). It has been argued that the non-Hermitian version of the quantum metric is more general than Eq. (2) and can be defined via the left and right eigenstates [62]. Moreover, for strong non-hermiticity, bands may become degenerate and coalesce into an exceptional point [52]. In such a case, the winding number of a complex effective field replaces the Chern number as the meaningful invariant [50]. In this work, we study non-degenerate bands (in fact, degeneracy is lifted due to losses, see Section III), suggesting that the non-hermiticity is weak [49]. Further, we show in Section IV that the Berry curvature of non-Hermitian origin is several orders of magnitude smaller than the quantum metric. Since the QGT is a positive semi-definite tensor, the quantum metric is bounded from below by the Berry curvature, \(\sqrt{\det(g)}\geq|\mathfrak{B}_{xy}|/2\)[63], and our results are consistent with this. All this suggests that the definitions (2), (5) and (6) are a good approximation for our study due to the weakness of the non-Hermitian effects. The consequences of non-hermiticity are thoroughly examined in Ref. [54], by explicitly considering the left and right eigenstates of the system.

## III T-matrix simulations and band tracking

We calculate the collective response of an infinite square plasmonic lattice, see Fig. 1(a), including the full optical description of the individual metallic nanoparticles as well as the radiative interactions explained in Ref. [54]. Plasmonic lattices as considered here have eigenmodes that are hybrids of the localized plasmonic resonances and the diffraction orders of the lattice [64; 65; 66; 67; 68; 69]. These modes are called surface lattice resonances (SLRs); for their description within the empty lattice approximation, and further information and references, see Ref. [54]. Most of the previous theoretical studies describing the effect of long-range interactions and retardation treated the metallic nanoparticles as point dipoles [90; 91; 92]. We utilize an open source implementation [93; 94] of the more general transition matrix (T-matrix) method [95], that accounts for higher multipolar orders that may become relevant for large particle sizes beyond the dipole approximation. We numerically extract the eigenfunctions corresponding to the modes of the system, which is crucial for the QGT calculation in Eq. (6).

Figure 1: Degeneracy removal of dispersion bands along the diagonal of the first Brillouin zone. (a) Band structure of SLRs that emerge from the empty lattice bands \((-1,0)\) and \((0,-1)\), here called TE and TM bands, respectively. Band energies are given by the real part of the complex eigenfrequency calculated from T-matrix simulations along the path \(\Gamma-A-B-\Gamma\) in \(k-\)space (see inset), where \(A=(1.5,0)\) μm\({}^{-1}\) and \(B=(1.5,1.5)\) μm\({}^{-1}\). A schematic representation of the square lattice is included (only few unit cells are shown) with the typical measures \(p=570\) nm and \(d=2r\). The radius of the nanoparticles is \(r=70\) nm in panels (a)-(d). (b)-(d) Zoomed-in selected areas from panel (a) where bands are degenerate in the empty lattice approximation.
While the SLR bands remain degenerate at the \(\Gamma\)-point, a splitting can be observed along the \(B-\Gamma\) trajectory with a gap opening at the point \(B\). (e) Energy difference between the TE and TM bands along the trajectory \(B-\Gamma\) for various nanoparticle sizes. In (b)-(e), the \(k-\)axis values denote distance traveled along the \(\Gamma-A-B-\Gamma\) path. (f) Size of the energy difference (gap) at \(B\) as a function of the nanoparticle radius.

Within this approach, the lattice modes are governed by the following expression:

\[(I-TW)\mathbf{f}(\mathbf{k})=0. \tag{7}\]

Here, \(T\) is the transition matrix that describes the individual nanoparticles modelled as metallic nanospheres, \(W\) carries out an infinite Ewald summation across every lattice site, \(I\) is the identity matrix, and \(\mathbf{f}(\mathbf{k})\) is a column vector that contains complex coefficients. Detailed derivation and definitions are found in Appendix A and Ref. [93]. In the T-matrix method, the electric field is expanded in a basis of vector spherical wavefunctions (VSWFs), labeled with subindices \(\{\tau,l,m\}\) that correspond to electric (\(\tau=2\)) and magnetic (\(\tau=1\)) type of solutions to the Maxwell equations, as well as the multipole degree \(l=1,2,\ldots\) corresponding to dipolar, quadrupolar, and higher multipole orders. For instance, the electric field from a scatterer in one unit cell is given by the summation:

\[\mathbf{E}_{\mathrm{sc}}(\omega,\mathbf{r})=\sum_{\tau=1,2}\sum_{l=1}^{\infty }\sum_{m=-l}^{l}f_{\tau lm}\mathbf{u}_{\tau lm}(\kappa(\mathbf{r})), \tag{8}\]

where \(\mathbf{u}_{\tau lm}(\kappa(\mathbf{r}))\) are called _outgoing_ VSWFs [96]. The components of the coefficient vector \(\mathbf{f}\) in Eq. (7) are the coefficients \(f_{\tau lm}\) in Eq. (8) that determine the eigenmodes of the system. We thus define our quantum state \(|u_{\mathbf{k}}\rangle\equiv\mathbf{f}(\mathbf{k})\) as the column vector with coefficients that follow:

\[f_{\tau lm}\in\mathbb{C}\ \big|\ \tau=1,2;\ l=1,2,\ldots,l_{max};\ m=-l,-l+1,\ldots,+l. \tag{9}\]

The length of the eigenvector \(|u_{\mathbf{k}}\rangle\) in the space of VSWFs is defined by the sum in Eq. (8), and ideally ranges up to \(l_{max}=\infty\) in (9). In our implementation, we set a cutoff to \(l_{max}\) and truncate the sum of Eq. (8), such that elements \(T_{\tau lm}^{\tau^{\prime}l^{\prime}m^{\prime}}\) of the \(T\) matrix are considered negligible for \(l,l^{\prime}\geq l_{max}\)[93]. The length of the eigenvector is given by \(N_{m}\equiv 2l_{max}(2l_{max}+1)\) and accordingly the dimension of the square matrix \((I-TW)\) is \(N_{m}^{2}\). In this work, \(l_{max}=2\) was fixed, and convergence of the results was tested with higher values. Solving the problem of Eq. (7) involves calculating both eigenvalues and eigenvectors of the matrix \(M(\omega)\equiv I-TW\). Even at the individual particle level, the T-matrix is non-Hermitian for lossy scatterers, and in general the eigenvalues of the full problem matrix \(M(\omega)\) are complex eigenfrequencies of the form \(\omega=\omega^{\prime}+i\omega^{\prime\prime}\), where \(\omega^{\prime},\omega^{\prime\prime}\in\mathbb{R}\). The real part \(\omega^{\prime}\) of the obtained eigenfrequency \(\omega\) gives the energy of the collective lattice mode and the imaginary part \(\omega^{\prime\prime}\) accounts for both the radiative and dissipative losses of the mode.
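Such complex eigenfrequencies of the nonlinear problem \(M(\omega)\mathbf{f}=0\) can be located inside a chosen contour of the complex plane with a contour-integral (Beyn-type) method, as described in the next paragraph. The Python code below is our own generic sketch under stated assumptions (a circular contour, trapezoidal quadrature, a callable `M(omega)` returning the dense matrix, and enough probe columns to capture all enclosed eigenvalues); it is not the implementation of Refs. [93; 97].

```python
import numpy as np

def beyn_eigenvalues(M, center, radius, n_quad=64, n_probe=8, rank_tol=1e-8):
    """Locate complex eigenvalues of the nonlinear problem M(omega) f = 0
    inside a circular contour, following the Beyn contour-integral idea.

    M      : callable, M(omega) -> (n, n) complex ndarray
    center : complex center of the contour in the omega plane
    radius : contour radius
    """
    n = M(center).shape[0]
    rng = np.random.default_rng(1)
    V = rng.standard_normal((n, n_probe)) + 1j * rng.standard_normal((n, n_probe))

    A0 = np.zeros((n, n_probe), dtype=complex)   # (1/2pi i) contour integral of M(z)^-1 V
    A1 = np.zeros((n, n_probe), dtype=complex)   # same, weighted by z
    for j in range(n_quad):                      # trapezoidal rule on the circle
        theta = 2.0 * np.pi * j / n_quad
        z = center + radius * np.exp(1j * theta)
        dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n_quad)
        X = np.linalg.solve(M(z), V)
        A0 += X * dz / (2j * np.pi)
        A1 += z * X * dz / (2j * np.pi)

    # Rank-revealing SVD of A0; eigenvalues of the reduced matrix B are the
    # complex eigenfrequencies enclosed by the contour.
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > rank_tol * s[0]))
    U, s, Wh = U[:, :k], s[:k], Wh[:k, :]
    B = U.conj().T @ A1 @ Wh.conj().T / s        # k x k linearized problem
    eigvals, eigvecs_small = np.linalg.eig(B)
    eigvecs = U @ eigvecs_small                  # approximate eigenvectors f(k)
    return eigvals, eigvecs
```

The eigenvectors returned this way correspond to \(\mathbf{f}(\mathbf{k})\) in Eq. (7) and can then be tracked over \(k\)-space.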
Our numerical procedure calculates the complex eigenvalues \(\omega(\mathbf{k})\) for each value of the momentum parallel to the lattice plane \(\mathbf{k}=(k_{x},k_{y})\). To that purpose, we implement Beyn's integral method that numerically locates all the complex eigenvalues of the matrix problem (7) within a defined contour in the complex frequency plane. Eigenvectors corresponding to each eigenvalue are then obtained, see Refs. [93] and [97] for details. We first study the band dispersion energies of the SLR modes, see Fig. 1(a). Given a linear path in the two-dimensional \(k-\)space between generic points \(\mathbf{k}_{0}\) and \(\mathbf{k}_{f}\), discretized by \(N_{k}\) points such that \(\mathbf{k}_{j}=\mathbf{k}_{0}+j(\mathbf{k}_{f}-\mathbf{k}_{0})/N_{k}\) with \(j=0,1,\ldots,N_{k}\), we consecutively determine the eigenvalues \(\omega_{i}(\mathbf{k}_{j})\), \(i=1,\ldots,N\leq N_{m}\) inside an elliptical contour in the complex frequency plane, for each \(\mathbf{k}_{j}\). We focus on a range of energies around the first \(\Gamma\)-point by tracking the path \(\Gamma-A-B-\Gamma\) within the irreducible Brillouin zone (see the inset of Fig. 1(a)). Studies of other high-symmetry points are an interesting direction for future work [98; 99]. Since many physical solutions result from the calculation, some prior knowledge of SLRs in a square lattice is instrumental for identifying the nature of each eigenmode. A good guide to the dispersions of SLRs is provided by the empty lattice approximation (see Ref. [54]) where four diffraction orders cross at the \(\Gamma-\)point. Along the high symmetry line \(\Gamma-X\) in Fig. 1(a), modes can be classified according to their polarization properties: assuming that the nanoparticles behave dominantly as electric dipoles polarized in the same direction as the incoming light, the modes \((\pm 1,0)\) can only be excited with transverse electric (TE, \(y\)-polarized) radiation; likewise, \((0,\pm 1)\) are transverse magnetic (TM) polarized modes [85]. We keep this naming for SLR bands in Fig. 1, and even along other trajectories where modes are not purely TE or TM polarized. Scattering by the nanoparticles couples the diffraction orders at the \(\Gamma-\)point and forms two TE SLR modes separated in energy by a small bandgap [84; 100; 101]. As a consequence, one of the TE modes is shifted to lower energies than the empty lattice approximation (red line in Fig. 1(a)), while the other is shifted to higher energies (not shown in Fig. 1(a)). The TM modes (green line in Fig. 1(a)) remain degenerate with both the TE mode at the \(\Gamma-\)point, and with each other along the high-symmetry line \(\Gamma-A\). In what follows, we focus on the TE SLR that forms from the \((-1,0)\) empty lattice mode, and on the TM mode coming from \((0,-1)\). The bandgap opening at the \(\Gamma-\)point and other high symmetry points occurs because several diffraction orders cross at those points. Away from the high symmetry points the dispersions are expected to resemble those of the empty lattice approximation [85]. Indeed this is the case around the point \(A\) in Fig. 1(a). However, a different behavior is found around the \(B-\)point of Fig. 1(a), placed on the diagonal of the first Brillouin zone. Tracking the bands in \(k\)-space along the diagonal \(B-\Gamma\) reveals that the TE and TM dispersion energies do not remain degenerate as in the empty lattice approximation.
Instead, the degeneracy is lifted increasingly while moving away from the \(\Gamma-\)point along the diagonal. To further examine this behavior, we provide in Figs. 1(b)-(d) zoomed-in areas of the SLR bands studied in Fig. 1(a). Fig. 1(b) shows that the degeneracy of the TE (red line) and TM bands (green line) holds at the \(\Gamma\)-point for both the empty lattice approximation and realistic simulations with a finite particle radius of \(r=70\) nm. TE and TM degeneracy at the \(\Gamma-\)point is understood from the \(x-y\) symmetry of the square lattice, as the system is invariant under \(\pi/2\) rotations around the out-of-plane \(z\)-axis, regardless of the particle size. Away from the \(\Gamma\)-point along the trajectory \(\Gamma-A\) of Fig. 1(b), the \(x\)-\(y\) symmetry is broken with \(k_{x}\neq 0\), thus degeneracy is lifted and the TE and TM modes emerge. Energy loss of photons by scattering with the metallic nanoparticles explains the TE and TM energy bands departing further from the empty lattice approximation in Fig. 1(b). However, degeneracy of these bands at the \(\Gamma\)-point is not lifted by the presence of the nanoparticles. A different scenario is presented in Figs. 1(c),(d), as the presence of nanoparticles clearly lifts the empty lattice degeneracy along the trajectory \(B-\Gamma\). Fig. 1(c) shows that, while the degeneracy exactly at the \(\Gamma-\)point is respected consistently with Fig. 1(b), finite sized particles remove the degeneracy of TE and TM bands along the diagonal of the first Brillouin zone. As a consequence, a gap is created between these bands at the point \(B\) of the chosen path (see Fig. 1(d)). We achieve insight into the degeneracy removal at the diagonal \(B-\Gamma\) by tracking the energy difference between the TE and TM dispersion bands. Fig. 1(e) shows that the band splitting increases with the particle size and is ever growing along this trajectory when moving away from the \(\Gamma-\)point. Interestingly, a superlinear dependence of the gap at the point \(B\) is found as a function of the particle radius, see Fig. 1(f). This implies that the splitting can be experimentally detected with a moderate particle size (e.g., we find an energy difference of 7 meV with a particle radius of 75 nm). We note that \(B\) is not a high-symmetry point of the square lattice, and is introduced only to illustrate the degeneracy removal along the diagonal of the first Brillouin zone. To understand possible non-Hermitian effects, we now focus on the losses inherent to the system. The losses featured by the TE and TM modes of the plasmonic lattice are given by the imaginary part of the eigenfrequency \(\omega^{\prime\prime}\) obtained from the T-matrix simulations. Fig. 2(a) shows the evolution of these modal losses for a lattice of particles with radius \(r=70\) nm, tracked along the same path in \(k-\)space as in Fig. 1(a). While the TE and TM dispersion bands in Fig. 1(a) converge to the empty lattice approximation for decreasing particle size, losses in Fig. 2(a) converge to zero. This is consistent since dissipation within the metal is the main loss mechanism of plasmonic nanoparticles, and the empty lattice approximation deals ideally with lossless photonic modes. We have found that, along the studied path in reciprocal space, the loss-dependence of the TE mode on \(\mathbf{k}\) is qualitatively similar regardless of the particle size, and the same applies to the TM mode. However, Fig. 
2(a) shows that the dependence of the TE mode along the path \(\Gamma-A-B\) is different from that of the TM mode. Notably, along the trajectory \(B-\Gamma\), the features of the band dispersions in Fig. 1(a)-(d) and modal losses in Fig. 2(a) are similar: they become equal at the \(\Gamma-\)point and depart further when moving along the diagonal of the Brillouin zone. In Fig. 2(b) we show the difference in modal losses between the TE and TM modes, revealing that the trend is analogous to that followed by the band energy difference in Fig. 1(e). Consequently at the point \(B\), a gap opening takes place also in the imaginary part of the eigenfrequency. The size of the gap increases with the radius of the nanoparticles (see inset of Fig. 2(b)), which is consistent with the increased ohmic losses. The splitting of the imaginary eigenfrequency \(\omega^{\prime\prime}\) is smaller than the one found for the real part \(\omega^{\prime}\) at every point on the diagonal \(B-\Gamma\), and accordingly the size of the gap in Fig. 2(b) is smaller than in Fig. 1(f). We conclude that both the real and imaginary part of the eigenfrequency exhibit a band splitting along the diagonal of the first Brillouin zone, with a larger effect on the dispersion energies. These findings provide a relevant insight into the quantum geometric and topological properties of the system; in particular, the band energy and modal loss splitting at the diagonal of the Brillouin zone suggests that a complex-valued term \(\Omega_{y}(\mathbf{k})=\Omega^{\prime}_{y}(\mathbf{k})+i\Omega^{\prime\prime}_ {y}(\mathbf{k})\), with \(\Omega^{\prime}_{y},\Omega^{\prime\prime}_{y}\in\mathbb{R}\) and \(\Omega^{\prime\prime}_{y}\ll\Omega^{\prime}_{y}\) for every \(\mathbf{k}\) must be present in the two band Hamiltonian of Ref. [54]. In agreement with the results presented here, it is found that the real part \(\Omega^{\prime}_{y}\) (band energy) creates the observed quantum metric, and the imaginary part \(\Omega^{\prime\prime}_{y}\) (modal loss) induces non-zero Berry curvature, see sections IV and V. We next describe the eigenvectors corresponding to the TE and TM bands above, given by the complex-valued coefficients \(f_{\tau lm}(\mathbf{k})\) in the VSWF basis as specified in (9). Analysis of the largest coefficients provides understanding of the relevant multipolar contributions to each mode, therefore we first examine the modulus \(|f_{\tau lm}(\mathbf{k})|\) along the path \(\Gamma-A-B-\Gamma\). While we have shown that the band dispersions and modal losses are highly dependent on \(\mathbf{k}\), only four momentum-dependent coefficients are significant for the considered range of particle sizes and carry the same magnitude along the whole considered path, see Figs. 2(c),(d). The main contribution to the eigenvector comes from the superposition of two terms of electric dipolar nature, with subindices \(\{\tau,l,m\}=\{2,1,1\}\) and \(\{2,1,-1\}\) dominating in Fig. 2(c),(d); the combination of coefficients with \(m=\pm 1\) reveals that the modes are polarized in the lattice plane [93]. A minor contribution comes from the coefficients with subindices \(\{\tau,l,m\}=\{2,2,2\}\) and \(\{2,2,-2\}\), corresponding to electric quadrupolar components with in-plane polarization. The typical size of the nanoparticles in our study is relatively large compared to the lattice period [91]. Nevertheless, we have found that the collective behavior of the lattice is governed by modes of electric dipolar nature. 
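Given an eigenvector in the VSWF basis, the dominant multipole content and the relative phase \(\Delta\varphi\) between the two dipolar terms can be extracted directly. The snippet below is a schematic helper of ours (the index convention is an assumption, and the gauge in which \(f_{21-1}\) is real follows the discussion below); it is not part of the T-matrix package.

```python
import numpy as np

def analyze_eigenvector(f, labels):
    """Dominant VSWF coefficients and the phase difference between the two
    in-plane dipolar terms f_{2,1,+1} and f_{2,1,-1}.

    f      : complex eigenvector in the VSWF basis
    labels : list of (tau, l, m) tuples, one per entry of f, in the same order
    """
    idx = {lab: i for i, lab in enumerate(labels)}
    # Gauge choice: remove the global phase so that f_{2,1,-1} is real and positive.
    f = f * np.exp(-1j * np.angle(f[idx[(2, 1, -1)]]))

    order = np.argsort(np.abs(f))[::-1]
    dominant = [(labels[i], abs(f[i])) for i in order[:4]]   # four largest moduli

    dphi = np.angle(f[idx[(2, 1, 1)]]) - np.angle(f[idx[(2, 1, -1)]])
    return dominant, dphi   # dphi ~ 0 for the TE mode and ~ +-pi for the TM mode at Gamma
```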
Although the same combinations of \(\{\tau,l,m\}\) contribute to the eigenvectors of both the TE and TM modes, the phases \(\varphi_{\tau lm}(\mathbf{k})\) of the corresponding coefficients \(f_{\tau lm}=|f_{\tau lm}|e^{i\varphi_{\tau lm}}\) differ substantially, see the lower panels of Figs. 2(c),(d). We choose the overall phase factor of both eigenvectors such that the coefficient \(f_{21-1}\) is real along the considered path (\(\varphi_{21-1}=0\)), and the rest of the relative phases are defined accordingly. Hence the phase \(\varphi_{211}\) (blue dashed line in lower panels of Figs. 2(c),(d)) accounts for the phase difference between the two dominant dipolar VSWFs: \(\Delta\varphi\equiv\varphi_{211}-\varphi_{21-1}\). This phase difference is \(\Delta\varphi=0\) for the TE mode at the \(\Gamma-\)point (see Fig. 2(c)), and then evolves smoothly with \(\mathbf{k}\); however, at \(B\) (that is, the point placed at the diagonal of the first Brillouin zone) there is an abrupt shift to \(\Delta\varphi\approx\pi/2\). Similarly, the phase difference of the TM mode in Fig. 2(d) is \(\Delta\varphi=-\pi\) at the \(\Gamma-\)point, but experiences a sudden change to \(\Delta\varphi=-\pi/2\) at the point \(B\). Additional features at points \(A\) and \(B\) in Fig. 2(d) are attributed to the choice of \((-\pi,\pi]\) as the principal branch of the phase. We further discuss the implications of the behavior of eigenvectors close to the diagonal in Section IV.

Figure 2: Modal losses, eigenvectors, and field profiles of the TE and TM SLR modes shown in Fig. 1. (a) Modal losses along the path \(\Gamma-A-B-\Gamma\) within the first Brillouin zone, calculated as the imaginary part of the mode eigenfrequency from T-matrix simulations of a lattice of nanospheres with radius \(r=70\) nm. (b) Losses of the TE mode subtracted from those of the TM mode along the trajectory \(B-\Gamma\) for various particle sizes. Compared to the dispersion energies in Fig. 1(e), the splitting is smaller yet has a similar trend, with zero loss difference at the \(\Gamma\)-point, and a gap opening at the point \(B\). The inset shows the size of the gap in the imaginary part of the TE and TM eigenfrequencies at the point \(B\), as a function of the particle radius. (c) Tracking of the complex coefficients \(f_{\tau lm}\) that enter into the eigenvector components of the TE mode: both the modulus (upper panel) and phase (lower panel) of each relevant coefficient are shown. (d) Same as (c), but for the TM mode. Both panels (c) and (d) are calculated for a lattice of metallic nanospheres of radius \(r=50\) nm. Eigenvectors obtained for other particle sizes reported in this work are qualitatively similar. (e) Field profile of the TE mode in a unit cell of the lattice (the period is 0.57 μm, but only the neighboring area to the nanoparticles is shown), as calculated at the \(\Gamma\)-point from the in-phase superposition of VSWFs \(\mathbf{u}_{21\pm 1}\) according to panel (c). White arrows indicate both magnitude and direction of the electric field \(\mathbf{E}\), and the colorscale map shows its modulus \(|\mathbf{E}|\). (f) Field profile of the TM mode at the \(\Gamma\)-point resulting from the superposition of VSWFs \(\mathbf{u}_{21\pm 1}\) with a phase difference of \(\pi\), as in panel (d).

Finally, we use the modulus and phase of the VSWF coefficients that enter into the eigenvector to achieve further insight into the considered modes. Following Eq.
(8) and disregarding the quadrupolar terms, we may write the field distribution of the modes as the superposition of two outgoing VSWFs: \[\mathbf{E}=f_{211}\mathbf{u}_{211}+f_{21-1}\mathbf{u}_{21-1}. \tag{10}\] At the \(\Gamma\)-point, we have found that the coefficients \(f_{21\pm 1}\) have the same modulus with a phase difference. Therefore we may write: \(f_{21-1}=\exp(i\Delta\varphi/2)\) and \(f_{211}=\exp(-i\Delta\varphi/2)\), where \(\Delta\varphi=0\) for the TE mode, and \(\Delta\varphi=\pi\) for the TM mode. Figs. 2(e),(f) show the resulting field profiles in one unit cell of the lattice according to Eq. (10) that confirm the dipolar character of the analyzed modes. The polarization properties are also revealed: the profile in Fig. 2(e) can only be created with \(y\)-polarized light, therefore the mode is TE for propagation in the \(x\)-direction. By the same reasoning, the mode in Fig. 2(f) is TM for the same propagation direction. We also note that at the \(\Gamma\)-point these two modes are doubly degenerate in energy, and hence, due to the symmetry of the square lattice, the field profiles are equivalent to each other under a \(\pi/2\) rotation in real space. Further investigation involving the eigenvectors from the T-matrix approach is carried out in Section IV for a region of \(k\)-space around the \(\Gamma\)-point; in particular, we utilize them to calculate the quantum metric and Berry curvature of TE and TM modes discussed in that Section. ## IV Calculation of the QGT from T-matrix Eigenfunctions We have studied the surface lattice resonance modes sustained by a square lattice, and concluded that modes with different polarization properties \(-\) TE and TM modes \(-\) show intriguing features along the diagonal of the first Brillouin zone, both in the real and imaginary parts of the eigenfrequency from T-matrix simulations. Importantly, the T-matrix method allows fully characterizing the eigenvectors of the corresponding modes in the VSWF basis, and we have found that the components of the eigenvectors are complex-valued due to both radiative and dissipative losses in the system. In this section, we study whether such losses, in combination with the band structure shown above, enable non-Hermitian effects that explain the degeneracy removal along the diagonal of the Brillouin zone. In particular, since losses can break the time-reversal symmetry, they might lead to interesting effects on the quantum geometry and topology of the system. To explore this, we utilize the T-matrix eigenvectors to calculate the full QGT that comprises the quantum metric and the Berry curvature. To calculate the QGT, we first define a grid in a square portion of \(k\)-space that is centered at the \(\Gamma\)-point, and well within the first Brillouin zone, see upper panel of Fig. 3(a). The total size of the discretized surface is \(10^{4}\times 10^{4}\) m\({}^{-2}\), with a maximum grid spacing of 10 m\({}^{-1}\) in both \(k_{x}\) and \(k_{y}\) directions. Then, we systematically calculate the eigenmodes at every point of the grid, using the band tracking procedure described in Section III, and focus on the TE and TM modes studied in Figs. 1 and 2. In Figs. 2(d),(e), we found that the dominant contributions to the eigenvectors are the in-plane dipolar coefficients \(f_{21\pm 1}\), followed by the in-plane quadrupolar terms \(f_{22\pm 2}\), independently of the path followed in \(k-\)space. We use this to identify our modes and ensure that the correct bands are followed. 
Moreover, we demand continuity of the dispersion energies with the \(\mathbf{k}\)-vector. According to Eq. (3), the Berry phase can be calculated within closed loops connecting neighboring points of the grid in the upper panel of Fig. 3(a). From that calculation, taking the limit of a small loop, the Berry curvature may be obtained using Eq. (4). Here, however, we use a different method, namely we calculate numerical derivatives of the eigenfunctions explicitly at each point of the grid, see lower panel of Fig. 3(a). We set four additional data points (red crosses in Fig. 3(a)) to control the step lengths \(\Delta k_{x,y}\) and achieve convergence of the numerical values of the derivatives \(|\partial_{k_{x,y}}u_{\mathbf{k}}\rangle\) without increasing the number of grid points, thus speeding up the calculations. The derivatives are calculated as follows: \[\frac{\partial|u_{\mathbf{k}}\rangle}{\partial k_{i}}\approx\frac{|u_{\mathbf{ k}+\Delta k_{i}\mathbf{\hat{k}}_{i}}\rangle-|u_{\mathbf{k}-\Delta k_{i} \mathbf{\hat{k}}_{i}}\rangle}{2\Delta k_{i}}+\mathcal{O}(\Delta k_{i}^{2}), \tag{11}\] where \(i=x,y\) and as indicated, the convergence of the derivative estimate is quadratic with the step size \(\Delta k_{i}\). This estimate based on the Taylor expansion of \(|u_{\mathbf{k}}\rangle\) is more efficient than linear convergence from usual approximations for right or left derivatives: \(\partial_{k_{i}}|u_{\mathbf{k}}\rangle\approx(|u_{\mathbf{k}+\Delta k_{i} \mathbf{\hat{k}}_{i}}\rangle-|u_{\mathbf{k}}\rangle)/\Delta k_{i}+\mathcal{O}( \Delta k_{i})\). Overall, direct access to the numerical value of the derivatives \(|\partial_{k_{x,y}}u_{\mathbf{k}}\rangle\) at each point of the grid \(\mathbf{k}_{j}\) allows for a straightforward calculation of the QGT, providing all the components of both the quantum metric and the Berry curvature tensors simultaneously. For this purpose, we also define the inner product in the space of the VSWFs for two generic vectors \(|u_{\mathbf{k}}\rangle\) and \(|v_{\mathbf{k}}\rangle\) as follows: \[\langle u_{\mathbf{k}}|v_{\mathbf{k}}\rangle =\begin{pmatrix}f_{21-1}^{*},f_{210}^{*},f_{211}^{*},\ldots \end{pmatrix}\begin{pmatrix}f_{21-1}^{\prime}\\ f_{210}^{\prime}\\ f_{211}^{\prime}\\ \vdots\end{pmatrix} \tag{12}\] \[=f_{21-1}^{*}f_{21-1}^{\prime}+f_{210}^{*}f_{210}^{\prime}+f_{211}^ {*}f_{211}^{\prime}+\ldots\] With the considerations above, we insert definitions (11) and (12) into Eq. (6) for the QGT, and calculate systematically the distribution of quantum metric and Berry curvature around the \(\Gamma-\)point. As we restrict our study to the two-dimensional \(k-\)space, we consider only the \(g_{xx}\), \(g_{yy}\), \(g_{xy}\) and \(g_{yx}\) components of the quantum metric tensor, and the components \(\mathfrak{B}_{xy}\) and \(\mathfrak{B}_{yx}\) of the Berry curvature. Other components are trivially zero in every point of the reciprocal space. Our results reveal a highly non-trivial profile in the quantum metric components and also in the Berry curvature, with an interesting non-zero distribution close to the diagonals of the first Brillouin zone, see the colorscale maps in Figs. 3(b)-(e). In particular, we find that the components \(g_{xx}\) and \(g_{yy}\) of the quantum metric (Figs. 3(b),(c)) are equal with positive contributions along both high-symmetry axes \(k_{x}=\pm k_{y}\) in the analyzed area of \(k-\)space. On the other hand, the components \(g_{xy}=g_{yx}\) (Fig. 
3(d)) display very similar values to the other components of the quantum metric, but with a negative sign along the main diagonal of the Brillouin zone. These findings reflect the symmetry of the square lattice, since the diagonals of the Brillouin zone \(k_{x}=\pm k_{y}\) are high-symmetry axes. However, we note that all the components of the quantum metric are zero in the rest of the analyzed area of \(k\)-space, including the other two high symmetry axes, \(k_{x}=0\) and \(k_{y}=0\), of this system. As is noticed in Ref. [54], the empty lattice bands along the diagonal \(k_{x}=k_{y}\) show degeneracies of the TE and TM modes while in Figs. 1 and 2 these degeneracies are removed to yield a band splitting of the SLR modes, both in the real and in the imaginary part of the eigenfrequency. This, in turn, results in abrupt changes in the phase of the corresponding eigenvectors close to the diagonal (e.g. see Figs. 2(d),(e), close to the point \(B\) in \(k-\)space), and dramatically affects the eigenfunctions and the corresponding derivatives involved in the calculation of the QGT, see Eqs. (2), (5) and (6). The non-trivial features at the diagonals of the first Brillouin zone in Figs. 3(b)-(d) are thus a consequence of the band splitting and loss behavior of the SLR modes described in Figs. 1 and 2. Of special interest is the emergence of non-zero Berry curvature close to the diagonals, since our work is concerned with a simple, symmetric square lattice where one would not expect it. The presence of Berry curvature in the first Brillouin zone is linked to the Chern number [55]:

\[C_{n}=\frac{1}{2\pi}\int_{BZ}d^{2}\mathbf{k}\,\mathfrak{B}_{xy}^{n}(\mathbf{k}), \tag{13}\]

where \(n\) is the band index according to Eq. (5). In contrast to the quantum metric components in Figs. 3(b)-(d) that display a symmetric distribution with respect to the diagonals of the Brillouin zone, the Berry curvature in Fig. 3(e) is antisymmetric, with contributions of equal magnitude and opposite sign at each side of all the diagonals. Hence the net contribution from each diagonal is zero, and the integral (13) over the analyzed area of \(k-\)space is canceled out. This is expected to extend to the rest of the Brillouin zone, and yield a zero Chern number for the square lattice case that we consider. Further analysis of the square lattice symmetry allows insight into the patterns in Figs. 3(b)-(e).

Figure 3: Quantum geometric tensor (quantum metric and Berry curvature), as obtained from T-matrix simulations of the TE mode in Figs. 1 and 2. (a) Schematic representation of the discretization of \(k\)-space (upper panel with dotted blue circles) and of the numerical procedure that calculates the derivatives of the eigenvectors obtained at each discretization point \(\mathbf{k}_{i}\) (lower panel). The calculated eigenmodes (filled blue circles) are shown only for the (-1,0) mode in the first quadrant of the reciprocal space for clarity. Panels (b)-(d) show the components \(g_{xx}\), \(g_{yy}\) and \(g_{xy}\) of the quantum metric tensor, respectively, and (e) shows the Berry curvature \(\mathfrak{B}_{xy}\). Colorscale units are square meters (m\({}^{2}\)). Results of the quantum metric tensor are the same for the TM mode, and the Berry curvature of that mode differs by a minus sign from the TE result.
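The grid-based QGT evaluation described above can be condensed into a short routine. The sketch below is our own schematic version, assuming a user-supplied callable `eigvec(kx, ky)` that returns the normalized, smoothly tracked eigenvector of the selected band; it applies the central differences of Eq. (11) and the inner product of Eq. (12) to the definition (6), and accumulates Eq. (13) over the grid.

```python
import numpy as np

def qgt_at(eigvec, kx, ky, dk=10.0):
    """Quantum metric g_ij and Berry curvature B_xy at one k-point,
    from central differences of the eigenvector (Eqs. (6) and (11))."""
    u = eigvec(kx, ky)
    du_x = (eigvec(kx + dk, ky) - eigvec(kx - dk, ky)) / (2.0 * dk)
    du_y = (eigvec(kx, ky + dk) - eigvec(kx, ky - dk)) / (2.0 * dk)
    d = {"x": du_x, "y": du_y}

    T = np.zeros((2, 2), dtype=complex)           # quantum geometric tensor
    for a, i in (("x", 0), ("y", 1)):
        for b, j in (("x", 0), ("y", 1)):
            T[i, j] = np.vdot(d[a], d[b]) - np.vdot(d[a], u) * np.vdot(u, d[b])
    g = T.real                                     # quantum metric
    berry_xy = -2.0 * T[0, 1].imag                 # Berry curvature B_xy
    return g, berry_xy

def chern_estimate(eigvec, k_grid, dk_cell):
    """Estimate of Eq. (13): sum of B_xy over a k-grid times the cell area / (2 pi)."""
    total = 0.0
    for kx, ky in k_grid:
        _, b = qgt_at(eigvec, kx, ky)
        total += b * dk_cell
    return total / (2.0 * np.pi)
```

As in the text, the finite-difference step must be small enough for quadratic convergence, and the eigenvector gauge must vary smoothly between neighboring grid points for the derivatives to be meaningful.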
We invoke here the relevant components of the quantum metric: \[g_{xx} =\Re\{\langle\partial_{k_{x}}u_{\mathbf{k}}|\partial_{k_{x}}u_{ \mathbf{k}}\rangle-\langle\partial_{k_{x}}u_{\mathbf{k}}|u_{\mathbf{k}} \rangle\langle u_{\mathbf{k}}|\partial_{k_{x}}u_{\mathbf{k}}\rangle\}, \tag{14}\] \[g_{yy} =\Re\{\langle\partial_{k_{y}}u_{\mathbf{k}}|\partial_{k_{y}}u_{ \mathbf{k}}\rangle-\langle\partial_{k_{y}}u_{\mathbf{k}}|u_{\mathbf{k}} \rangle\langle u_{\mathbf{k}}|\partial_{k_{y}}u_{\mathbf{k}}\rangle\},\] (15) \[g_{xy} =\Re\{\langle\partial_{k_{x}}u_{\mathbf{k}}|\partial_{k_{y}}u_{ \mathbf{k}}\rangle-\langle\partial_{k_{x}}u_{\mathbf{k}}|u_{\mathbf{k}} \rangle\langle u_{\mathbf{k}}|\partial_{k_{y}}u_{\mathbf{k}}\rangle\},\] (16) \[g_{yx} =\Re\{\langle\partial_{k_{y}}u_{\mathbf{k}}|\partial_{k_{x}}u_{ \mathbf{k}}\rangle-\langle\partial_{k_{y}}u_{\mathbf{k}}|u_{\mathbf{k}} \rangle\langle u_{\mathbf{k}}|\partial_{k_{x}}u_{\mathbf{k}}\rangle\}, \tag{17}\] and explain the symmetric contributions of the quantum metric with respect to the diagonals of the Brillouin zone in Figs. 3(b)-(d). Upon a mirror operation \(\sigma_{v}\) with respect to the main diagonal \(k_{x}=k_{y}\), the Brillouin zone remains invariant with the change of coordinates \(k_{x}\to k_{y}\) and \(k_{y}\to k_{x}\). Introducing this transformation in the derivatives of Eqs. (14)-(17), we obtain that \(g_{ij}^{n}\to g_{ji}^{n}\) which results in an equal contribution on both sides of the diagonal since the quantum metric tensor is symmetric by definition: \(g_{ji}^{n}=g_{ij}^{n}\). On the other hand, the \(C_{4}\) symmetry with respect to the out-of-plane axis \(\mathbf{k}_{z}\) allows four rotations, each with an angle of \(m\pi/2\) where \(m=1,\ldots,4\), that leave the Brillouin zone invariant. Therefore, the result of the quantum metric components in the first quadrant \(k_{x,y}>0\) is extended to the rest of the Brillouin zone. For instance, a \(\pi/2\) rotation from the first quadrant involves the change of coordinates \(k_{x}\to k_{y}\) and \(k_{y}\to-k_{x}\), introducing a minus sign in the derivative \(\partial_{k_{x}}\) and thus in the components \(g_{xy}\) and \(g_{yx}\) in Eqs. (16),(17) while leaving \(g_{xx}\) and \(g_{yy}\) in Eqs. (14),(15) unchanged. This explains that while the quantum metric components \(g_{xx}\) and \(g_{yy}\) have the same magnitude and sign close to the diagonals in Figs. 3(b),(c), the components \(g_{xy}=g_{yx}\) in Fig. 3(d) display equal magnitudes with an extra (-1) factor per each \(\pi/2\) rotation. A similar analysis may be carried out for the Berry curvature in Fig. 3(e), whose expression reads as follows: \[\mathfrak{B}_{xy}=i\left(\langle\partial_{k_{x}}u_{\mathbf{k}}|\partial_{k_{y }}u_{\mathbf{k}}\rangle-\langle\partial_{k_{y}}u_{\mathbf{k}}|\partial_{k_{x} }u_{\mathbf{k}}\rangle\right). \tag{18}\] By the same reasoning as above, it is found that the mirror operation \(\sigma_{v}\) reverses the sign of \(\mathfrak{B}_{xy}\) in Eq. (18), leading to an antisymmetric distribution of Berry curvature along the main diagonal of the Brillouin zone. In addition, successive \(\pi/2\) rotations leave Eq. (18) unchanged, hence the antisymmetric pattern found in the first quadrant (\(k_{x,y}>0\)) is extrapolated by rotations to the rest of the Brillouin zone. We note that the behavior in Fig. 3(e) is in accordance with the pseudovector nature of the Berry curvature [55]. Namely, one may define a pseudovector \(\mathfrak{B}_{k}\) related to the Berry curvature tensor in Eq. 
(5) as follows: \(\mathfrak{B}_{k}\equiv\epsilon_{ijk}\mathfrak{B}_{ij}\). The components of this pseudovector are given by the curl of the Berry connection: \(\mathfrak{B}=\nabla_{\mathbf{k}}\times\mathbf{A}\), where \(\mathfrak{B}=(\mathfrak{B}_{yz},\mathfrak{B}_{zx},\mathfrak{B}_{xy})\). As we restrict to the two-dimensional \(k-\)space, the pseudovector only has one non-zero component in the out-of-plane direction: \(\mathfrak{B}=\mathfrak{B}_{xy}\mathbf{\hat{k}}_{z}\). The properties of a generic pseudovector \(\mathbf{v}\) are such that it will transform as \(\mathbf{v}^{\prime}=(\det R)(R\mathbf{v})\) for a rotation \(R\), which is either proper (e.g. rotations around an axis) with \(\det R=1\), or improper (such as the mirror operation \(\sigma_{v}\) considered above) with \(\det R=-1\). Thus a mirror operation reverses the direction of the Berry curvature \(\mathfrak{B}^{\prime}=-\mathfrak{B}_{xy}\mathbf{\hat{k}}_{z}\) with equal magnitude, explaining the change of sign at the diagonals of the Brillouin zone in Fig. 3(e), and rotations around the \(\mathbf{k}_{z}\)-axis leave the Berry curvature invariant. Overall, the quantum metric and Berry curvature found close to the \(\Gamma\)-point in Figs. 3(b)-(e) for a square plasmonic lattice are apparently unique to systems with long-range radiative interactions. Previous studies have identified non-zero QGT components around high-symmetry or other specific points in \(k-\)space [29; 58; 60; 61], generally in tight-binding or continuum systems. Here, in contrast, we find that the non-zero contributions to the QGT are displayed along entire high-symmetry lines of the Brillouin zone, owing to the band structure of the system that is dominated by diffraction, see Ref. [54]. Importantly, we have found a Berry curvature that is non-zero at the local level, even for the highly symmetric, apparently trivial geometry of a square lattice. Such behavior, as discussed in Section III, points to a non-Hermitian origin since dissipation in the nanoparticles may break the time-reversal symmetry and could be linked with the band splitting along the diagonals of the Brillouin zone. ## V Loss dependence of the QGT We now address the intriguing question of why the Berry curvature becomes locally non-zero in a trivial lattice geometry with no apparent symmetry breaking. For that purpose, we clarify the role of losses in our results, and carry out additional simulations of the quantum metric and Berry curvature while varying the imaginary part of the metal permittivity \(\epsilon_{m}\) corresponding to the ohmic losses, see Fig. 4. For illustration, we study the evolution of the QGT components at a single point \(\mathbf{k}_{0}\) of the Brillouin zone close to the diagonals where the quantum geometric features in Figs. 3(b)-(e) displayed maximum values (see Fig. 4(a)). Fig. 4(b) shows that the Berry curvature \(\mathfrak{B}_{xy}\) at the point \(\mathbf{k}_{0}\) depends highly on the dissipation losses. In particular, manually increasing the value of the metal permittivity beyond \(\Im\epsilon_{m}\gtrsim 7\) results in a linear increase of the Berry curvature, and a similar dependence is obtained below \(\Im\epsilon_{m}\lesssim-9\) (for which gain instead of loss is numerically introduced to the involved TE mode) with opposite sign. The Berry curvature is negative but very small for values around \(\Im\epsilon_{m}\approx 0\), including the physical value \(\Im\epsilon_{m}\approx 2.3\) of realistic lossy nanoparticles. 
Overall, the presence of local Berry curvature in \(k-\)space correlates well with the existence of losses (or gain) in the system. The non-trivial emergence of Berry curvature in our square plasmonic lattice is thus a non-Hermitian effect, produced by the time-reversal symmetry breaking through dissipation of the system, rather than induced by the lattice symmetry or its breaking. Further evidence of this is provided in Ref. [54], where setting zero losses in the two-band model therein leads to a zero Berry curvature at every point of the Brillouin zone. The quantum metric components also exhibit an interesting dependence on ohmic losses, as their absolute values become larger for increased losses or gain, with a nearly symmetric distribution around values of \(\Im\epsilon_{m}\) corresponding to low-loss particles, see Figs. 4(c),(d). The slight asymmetry with respect to gain and loss indicates that effects other than dissipation also affect the quantum geometric properties of the realistic system: we address these and offer additional physical insight with the simple two-band model in Ref. [54]. We note that the quantum metric is positive semidefinite for every studied value of the ohmic losses, as it should be, see Fig. 4(e). We also find that additional radiative losses inherent to sufficiently large plasmonic nanoparticles introduce an asymmetry in the loss dependence of Figs. 4(b)-(e) with respect to the value \(\Im\epsilon_{m}=0\) (for which ohmic losses are neglected). Fig. 4(f) reveals that both the radiative and ohmic losses of the analyzed TE mode are fully compensated for \(\Im\epsilon_{m}\approx-2.9\) (green dashed line in Figs. 4(b)-(f)), which coincides with the change of trend in Fig. 4(b) and with the peak values in Figs. 4(c),(d). We thus confirm the combined influence of both radiative and dissipative losses in the quantum geometric tensor components of a plasmonic lattice.

Figure 4: Dependence of the QGT components (quantum metric tensor and Berry curvature) on dissipation losses. (a) We compute the quantum metric and Berry curvature at the point \(\mathbf{k}_{0}=(10,9.5)\) mm\({}^{-1}\) in \(k-\)space (green point), close to the diagonal \(\Gamma-M\) of the first Brillouin zone where non-zero components of the QGT are found in Fig. 3. (b) Berry curvature \(\mathfrak{B}_{xy}\) at the point \(\mathbf{k}_{0}\), calculated from several simulations analogous to those in Fig. 3.
## VI Discussion and Conclusions

We have studied the quantum geometric tensor, comprising the quantum metric and the Berry curvature, in a square plasmonic lattice. The band structure of the system was investigated using a T-matrix method that combines the optical properties of individual metallic nanoparticles with the long-range radiative interactions between them. This method includes the dissipative losses inherent to metals, enabling access to both the band energies and the modal losses. By numerically tracking a closed path in reciprocal space, with trajectories parallel to the high-symmetry axes of the system, a band splitting was found along the diagonal of the first Brillouin zone. In addition, we calculated the quantum geometric tensor explicitly, with the complex-valued eigenvectors provided by the T-matrix approach, and found non-trivial distributions of the quantum metric and Berry curvature along all the diagonals of the Brillouin zone.

The results presented here clarify, with microscopic simulations, the origin of pseudospin-orbit coupling in a square plasmonic lattice. In particular, we have shown numerically that a TE-TM band splitting emerges at the diagonal of the Brillouin zone, but it is absent in the empty-lattice results presented in Ref. [54]. Therefore, the band splitting at the diagonals is caused by the presence of the nanoparticles, and not by the lattice symmetry. This is in agreement with the statements and with the experimental observations in Ref. [54], and the polarization properties shown therein verify the TE and TM nature of the involved modes at the diagonal. Our numerical simulations provide realistic values of the band splitting that are utilized as an input for the two-band model in Ref. [54]. The results for all the QGT components show excellent qualitative agreement with those in Ref. [54].

Our numerical simulations also yield a non-zero antisymmetric Berry curvature at the diagonals of the Brillouin zone. We have found that the Berry curvature correlates with loss (or gain) in the system, corroborating its non-Hermitian origin. The Berry curvature is non-zero at the local level, but its distribution strongly suggests that the Chern number is zero. Thus, in this case, the breaking of time-reversal symmetry generates non-trivial quantum geometry locally, but without topologically non-trivial behavior.

Overall, our results provide a microscopic understanding of the pioneering observation of non-Hermitian Berry curvature in a plasmonic lattice [54]. They provide a basis for extending studies of topological photonics in plasmonic lattices and similar long-range coupled platforms to include the effects of varying lattice geometry, particle shape, magnetic fields, and gain. With such symmetry-breaking mechanisms, a rich variety of novel topological and quantum geometric phenomena can be expected.

###### Acknowledgements.
We acknowledge useful discussions with Mikko Rosenberg and Marek Necada. This work was supported by the Academy of Finland under Project No. 349313, Project No. 318937 (PROFI), and the Academy of Finland Flagship Programme in Photonics Research and Innovation (PREIN), Project No. 320167. J.C. acknowledges former support by the Academy of Finland under Project No. 325608. J.M.T. acknowledges financial support by the Magnus Ehrnrooth Foundation. Part of the research was performed at the OtaNano Nanofab cleanroom (Micronova Nanofabrication Centre), supported by Aalto University.

## Appendix A T-matrix formalism and simulations

We use a frequency-domain T-matrix method to formulate the multiple-scattering problem of a lattice of metallic nanoparticles interacting with light. We model a two-dimensional lattice in three-dimensional space, embedded in a dielectric with refractive index \(n_{h}\). The optical response of the metallic (Au) nanoparticles is introduced with a Drude-Lorentz model. The T-matrix method numerically solves the wave equation in this system by expanding the incident and the scattered electric fields in a basis of vector spherical wavefunctions (VSWFs) [93; 95]:

\[\mathbf{E}(\omega,\mathbf{r})=\sum_{\tau=1,2}\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\left(a_{\tau lm}\mathbf{v}_{\tau lm}(\kappa\mathbf{r})+f_{\tau lm}\mathbf{u}_{\tau lm}(\kappa\mathbf{r})\right). \tag{10}\]

Here, \(\mathbf{v}_{\tau lm}(\kappa\mathbf{r})\) are the so-called _regular_ VSWFs, whereas \(\mathbf{u}_{\tau lm}(\kappa\mathbf{r})\) are named _outgoing_ VSWFs. The subindex \(\tau\) indicates whether the VSWF is of _electric_ or _magnetic_ type, \(l\) is the multipole degree, and \(m\) is the multipole azimuthal number. Ref. [93] contains the complete expressions of the VSWFs. Without the presence of the scatterer, the regular VSWFs would suffice to form a basis that solves the Maxwell equations for a homogeneous dielectric background; hence we identify the \(a_{\tau lm}\) coefficients as those corresponding to the incident electric field: \(\mathbf{E}_{inc}(\omega,\mathbf{r})=\sum_{\tau lm}a_{\tau lm}\mathbf{v}_{\tau lm}(\kappa\mathbf{r})\). Similarly, the \(f_{\tau lm}\) coefficients correspond to the scattered electric field \(\mathbf{E}_{scat}(\omega,\mathbf{r})\). Both sets of coefficients are connected by the _transition_ (T-)matrix [96]:

\[f_{\tau lm}=\sum_{\tau^{\prime}l^{\prime}m^{\prime}}T^{\tau^{\prime}l^{\prime}m^{\prime}}_{\tau lm}(\omega)a_{\tau^{\prime}l^{\prime}m^{\prime}}, \tag{11}\]

where we have highlighted that the T-matrix depends on frequency. Moving to the problem of scatterers arranged in a square lattice, we may similarly expand the electric field as in Eq. (10) for each scatterer \(p\) placed at position \(\mathbf{r}_{p}\) on the two-dimensional plane of the lattice:

\[\mathbf{E}_{p}(\omega,\mathbf{r})=\sum_{\tau=1,2}\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\left(a_{p,\tau lm}\mathbf{v}_{\tau lm}(\kappa(\mathbf{r}-\mathbf{r}_{p}))+f_{p,\tau lm}\mathbf{u}_{\tau lm}(\kappa(\mathbf{r}-\mathbf{r}_{p}))\right). \tag{12}\]

Due to the presence of other scatterers, the \(a_{p,\tau lm}\) coefficients contain not only the contribution of the incident electric field, but also the contributions of the fields scattered by other lattice sites. We may re-expand the fields scattered at other lattice sites \(\mathbf{r}_{q}\neq\mathbf{r}_{p}\) into the basis of regular VSWFs \(\mathbf{v}_{\tau lm}\) around the site \(p\) by means of a translation operator \(S_{p\gets q}\) [93].
As a result, the expression for the coefficients \(a_{p,\tau lm}\) now reads as follows:

\[a_{p,\tau lm}=\tilde{a}_{p,\tau lm}+\sum_{q\neq p}S_{p\gets q}f_{q,\tau lm}. \tag{13}\]

Multiplying Eq. (13) by \(T_{p}\) from the left, and using Eq. (11), we find:

\[f_{p}-T_{p}\sum_{q\neq p}S_{p\gets q}f_{q}=T_{p}\tilde{a}_{p}, \tag{14}\]

where we have dropped the subindices \(\tau\), \(l\), and \(m\) for clarity. Eq. (14) formally solves the scattering problem for a set of particles at positions \(\mathbf{r}_{p}\) on a two-dimensional plane. For a Bravais lattice of scatterers, the positions of the particles are labelled by \(p\rightarrow\mathbf{n},\alpha\), where \(\mathbf{r}_{\mathbf{n},\alpha}=\mathbf{R}_{\mathbf{n}}+\mathbf{r}_{\alpha}\) and \(\mathbf{R}_{\mathbf{n}}=n_{1}\mathbf{a}_{1}+n_{2}\mathbf{a}_{2}\), with \(\mathbf{a}_{1,2}\) the primitive vectors of the Bravais lattice and \(n_{1,2}\) integers. We similarly relabel \(q\rightarrow\mathbf{m},\beta\) and redefine the origin of coordinates using periodicity; therefore \(S_{\mathbf{n},\alpha\leftarrow\mathbf{m},\beta}=S_{\mathbf{0},\alpha\leftarrow\mathbf{m}-\mathbf{n},\beta}\). Further, we use Bloch's theorem to write \(\tilde{a}_{\mathbf{n},\alpha}=\tilde{a}_{\mathbf{0},\alpha}(\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{R}_{\mathbf{n}}}\), and similarly for \(f_{\mathbf{n},\alpha}\). Hence Eq. (14) is rewritten as

\[f_{\mathbf{0},\alpha}(\mathbf{k})-T_{\alpha}\sum_{\beta}W_{\alpha\beta}(\mathbf{k})f_{\mathbf{0},\beta}(\mathbf{k})=T_{\alpha}\tilde{a}_{\mathbf{0},\alpha}(\mathbf{k}), \tag{15}\]

where the term \(W_{\alpha\beta}(\mathbf{k})\equiv\sum_{\mathbf{m}}(1-\delta_{\alpha\beta}\delta_{\mathbf{m}\mathbf{0}})S_{\mathbf{0},\alpha\leftarrow\mathbf{m},\beta}e^{i\mathbf{k}\cdot\mathbf{R}_{\mathbf{m}}}\) computes the infinite lattice (Ewald) sum, and hence implements the long-range radiative interactions that characterize our lattice [93]. Writing Eq. (15) in matrix form, we end up with

\[(I-TW)f_{\mathbf{0}}(\mathbf{k})=T\tilde{a}_{\mathbf{0}}(\mathbf{k}). \tag{16}\]

Expression (16) solves the scattering problem and has a clear interpretation: the right-hand side contains a forcing vector with the coefficients \(\tilde{a}_{\mathbf{0},\tau lm}\) of the incident electric field, while the left-hand side accounts for both the single-particle response via the \(T\) matrix and the contribution of the lattice geometry via \(W\). The coefficients \(f_{\mathbf{0},\tau lm}(\mathbf{k})\) are the unknowns that solve the scattering problem. If the forcing term is set to zero, \(T\tilde{a}_{\mathbf{0},\tau lm}=\mathbf{0}\), Eq. (16) turns into an eigenmode and eigenvalue problem. The QPMS suite utilized in this paper [94] provides an implementation of Beyn's algorithm [97] that solves this eigenvalue problem and provides the eigenfunctions for the direct calculation of the QGT using Eqs. (11), (12), and (6).
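As an illustration of the structure of Eq. (16), the following self-contained Python sketch solves the driven problem for a given incident field and then searches for eigenmodes of the undriven problem as complex frequencies where \(\det(I-TW)\) approaches zero. The matrices `T_of` and `W_of` are small illustrative toys (not the actual single-particle T-matrix or the Ewald-summed lattice interaction), and the brute-force determinant scan is a deliberately simplified stand-in for Beyn's contour-integral algorithm; it is meant only to convey how the band energies and modal losses emerge from the complex eigenfrequencies of the lattice problem.

```python
import numpy as np

def T_of(omega):
    """Toy single-particle T-matrix (2x2, complex) with one lossy resonance.
    Purely illustrative; the real T-matrix encodes the Drude-Lorentz response
    of the nanoparticle."""
    pole = 0.97 - 0.033j                      # toy complex resonance frequency
    return (0.1 / (omega - pole)) * np.eye(2, dtype=complex)

def W_of(omega, k):
    """Toy lattice-interaction matrix standing in for the Ewald sum W(k)."""
    phase = np.exp(1j * omega) * np.cos(k[0]) * np.cos(k[1])
    return np.array([[0.0, phase], [np.conj(phase), 0.0]], dtype=complex)

def solve_driven(omega, k, a_inc):
    """Driven scattering problem, cf. Eq. (16): (I - T W) f = T a_inc."""
    T, W = T_of(omega), W_of(omega, k)
    M = np.eye(2, dtype=complex) - T @ W
    return np.linalg.solve(M, T @ a_inc)

def find_modes(k, re_grid, im_grid, n_best=3):
    """Locate candidate eigenmodes as the complex frequencies on a grid where
    |det(I - T W)| is smallest; real part ~ band energy, imaginary part ~ modal loss."""
    candidates = []
    for wr in re_grid:
        for wi in im_grid:
            omega = wr + 1j * wi
            M = np.eye(2, dtype=complex) - T_of(omega) @ W_of(omega, k)
            candidates.append((abs(np.linalg.det(M)), omega))
    candidates.sort(key=lambda item: item[0])
    return candidates[:n_best]

k0 = np.array([0.3, 0.2])
print(solve_driven(1.2 - 0.02j, k0, np.array([1.0, 0.0], dtype=complex)))
print(find_modes(k0, np.linspace(0.8, 1.2, 41), np.linspace(-0.2, 0.0, 21)))
```

In the actual calculation, it is the eigenvectors associated with these complex eigenfrequencies (the non-trivial null vectors of \(I-TW\)) that enter the evaluation of the quantum geometric tensor.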